ESP: PubMed Auto Bibliography, 12 Sep 2025 at 01:41
Created: Cloud Computing
Wikipedia: Cloud Computing
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
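The listing above can be reproduced or refreshed by running the same query string against the NCBI E-utilities ESearch endpoint. Below is a minimal sketch using only the Python standard library; the retmax cap is an illustrative choice, not part of the original query.

import json
import urllib.parse
import urllib.request

# The exact query string used to build this bibliography (copied from above).
QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
         'NOT pmcbook NOT ispreviousversion')

# NCBI E-utilities ESearch: returns PMIDs matching the query as JSON.
params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": QUERY,
    "retmax": 100,        # illustrative cap on the number of PMIDs returned
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

pmids = result["esearchresult"]["idlist"]
print(f"{len(pmids)} PMIDs returned:")
print(pmids[:10])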
Citations: The Papers (from PubMed®)
RevDate: 2025-09-09
CmpDate: 2025-09-09
Smart load balancing in cloud computing: Integrating feature selection with advanced deep learning models.
PloS one, 20(9):e0329765 pii:PONE-D-24-52330.
The increasing dependence on cloud computing as a cornerstone of modern technological infrastructures has introduced significant challenges in resource management. Traditional load-balancing techniques often prove inadequate in addressing the dynamic and complex nature of cloud environments, resulting in suboptimal resource utilization and heightened operational costs. This paper presents a novel smart load-balancing strategy incorporating advanced techniques to mitigate these limitations. Specifically, it addresses the critical need for a more adaptive and efficient approach to workload management in cloud environments, where conventional methods fall short in handling dynamic and fluctuating workloads. To bridge this gap, the paper proposes a hybrid load-balancing methodology that integrates feature selection and deep learning models for optimizing resource allocation. The proposed Smart Load Adaptive Distribution with Reinforcement and Optimization (SLADRO) approach combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) algorithms for load prediction, a hybrid bio-inspired optimization technique, Orthogonal Arrays and Particle Swarm Optimization (OOA-PSO), for feature selection, and Deep Reinforcement Learning (DRL) for dynamic task scheduling. Extensive simulations conducted on the real-world Google Cluster Trace dataset reveal that the SLADRO model significantly outperforms traditional load-balancing approaches, yielding notable improvements in throughput, makespan, resource utilization, and energy efficiency. This integration of advanced techniques offers a scalable and adaptive solution, providing a comprehensive framework for efficient load balancing in cloud computing environments.
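As an illustration of the load-prediction component described in this abstract (and not the authors' SLADRO implementation), the following sketch builds a small CNN-LSTM regressor in Keras on a synthetic workload trace; the window length, feature count, and layer sizes are illustrative assumptions.

import numpy as np
from tensorflow.keras import layers, models

# Illustrative dimensions: 30-step history of 5 resource metrics per VM.
TIMESTEPS, FEATURES = 30, 5

# Synthetic stand-in for a workload trace (e.g. CPU/memory/IO utilisation).
X = np.random.rand(1000, TIMESTEPS, FEATURES).astype("float32")
y = X[:, -1, 0] * 0.8 + 0.1 * np.random.rand(1000).astype("float32")  # next-step CPU load

# CNN front-end extracts local temporal patterns, LSTM models longer trends.
model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                      # predicted load for the next interval
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print("predicted next-step load:", float(model.predict(X[:1], verbose=0)[0, 0]))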
Additional Links: PMID-40924788
Citation:
@article {pmid40924788,
year = {2025},
author = {Sanjalawe, Y and Fraihat, S and Al-E'mari, S and Abualhaj, M and Makhadmeh, S and Alzubi, E},
title = {Smart load balancing in cloud computing: Integrating feature selection with advanced deep learning models.},
journal = {PloS one},
volume = {20},
number = {9},
pages = {e0329765},
doi = {10.1371/journal.pone.0329765},
pmid = {40924788},
issn = {1932-6203},
mesh = {*Cloud Computing ; *Deep Learning ; Algorithms ; Neural Networks, Computer ; *Workload ; Humans ; },
abstract = {The increasing dependence on cloud computing as a cornerstone of modern technological infrastructures has introduced significant challenges in resource management. Traditional load-balancing techniques often prove inadequate in addressing cloud environments' dynamic and complex nature, resulting in suboptimal resource utilization and heightened operational costs. This paper presents a novel smart load-balancing strategy incorporating advanced techniques to mitigate these limitations. Specifically, it addresses the critical need for a more adaptive and efficient approach to workload management in cloud environments, where conventional methods fall short in handling dynamic and fluctuating workloads. To bridge this gap, the paper proposes a hybrid load-balancing methodology that integrates feature selection and deep learning models for optimizing resource allocation. The proposed Smart Load Adaptive Distribution with Reinforcement and Optimization approach, SLADRO, combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) algorithms for load prediction, a hybrid bio-inspired optimization technique-Orthogonal Arrays and Particle Swarm Optimization (OOA-PSO)-for feature selection algorithms, and Deep Reinforcement Learning (DRL) for dynamic task scheduling. Extensive simulations conducted on a real-world dataset called Google Cluster Trace dataset reveal that the SLADRO model significantly outperforms traditional load-balancing approaches, yielding notable improvements in throughput, makespan, resource utilization, and energy efficiency. This integration of advanced techniques offers a scalable and adaptive solution, providing a comprehensive framework for efficient load balancing in cloud computing environments.},
}
MeSH Terms:
*Cloud Computing
*Deep Learning
Algorithms
Neural Networks, Computer
*Workload
Humans
RevDate: 2025-09-09
Warp Analysis Research Pipelines: Cloud-optimized workflows for biological data processing and reproducible analysis.
Bioinformatics (Oxford, England) pii:8250097 [Epub ahead of print].
SUMMARY: In the era of large data, the cloud is increasingly used as a computing environment, necessitating the development of cloud-compatible pipelines that can provide uniform analysis across disparate biological datasets. The Warp Analysis Research Pipelines (WARP) repository is a GitHub repository of open-source, cloud-optimized workflows for biological data processing that are semantically versioned, tested, and documented. A companion repository, WARP-Tools, hosts Docker containers and custom tools used in WARP workflows.
The WARP and WARP-Tools repositories and code are freely available at https://github.com/broadinstitute/WARP and https://github.com/broadinstitute/WARP-tools, respectively. The pipelines are available for download from the WARP repository, can be exported from Dockstore, and can be imported to a bioinformatics platform such as Terra.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
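For readers who want to browse the available workflows programmatically, the sketch below lists the contents of the WARP repository's pipelines directory via the public GitHub REST API; the directory name reflects the repository layout at the time of writing and may change, and unauthenticated requests are rate-limited.

import json
import urllib.request

# List top-level entries of the 'pipelines' directory in the public WARP repo
# via the GitHub REST contents API.
API = "https://api.github.com/repos/broadinstitute/warp/contents/pipelines"

req = urllib.request.Request(API, headers={"Accept": "application/vnd.github+json"})
with urllib.request.urlopen(req) as resp:
    entries = json.load(resp)

for entry in entries:
    if entry["type"] == "dir":
        print(entry["name"], "->", entry["html_url"])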
Additional Links: PMID-40924537
Citation:
@article {pmid40924537,
year = {2025},
author = {Degatano, K and Awdeh, A and Cox Iii, RS and Dingman, W and Grant, G and Khajouei, F and Kiernan, E and Konwar, K and Mathews, KL and Palis, K and Petrillo, N and Van der Auwera, G and Wang, CR and Way, J},
title = {Warp Analysis Research Pipelines: Cloud-optimized workflows for biological data processing and reproducible analysis.},
journal = {Bioinformatics (Oxford, England)},
volume = {},
number = {},
pages = {},
doi = {10.1093/bioinformatics/btaf494},
pmid = {40924537},
issn = {1367-4811},
abstract = {SUMMARY: In the era of large data, the cloud is increasingly used as a computing environment, necessitating the development of cloud-compatible pipelines that can provide uniform analysis across disparate biological datasets. The Warp Analysis Research Pipelines (WARP) repository is a GitHub repository of open-source, cloud-optimized workflows for biological data processing that are semantically versioned, tested, and documented. A companion repository, WARP-Tools, hosts Docker containers and custom tools used in WARP workflows.
The WARP and WARP-Tools repositories and code are freely available at https://github.com/broadinstitute/WARP and https://github.com/broadinstitute/WARP-tools, respectively. The pipelines are available for download from the WARP repository, can be exported from Dockstore, and can be imported to a bioinformatics platform such as Terra.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.},
}
RevDate: 2025-09-08
Simulation-based assessment of digital twin systems for immunisation.
Frontiers in digital health, 7:1603550.
BACKGROUND: This paper presents the application of simulation to assess the functionality of a proposed Digital Twin (DT) architecture for immunisation services in primary healthcare centres. The solution is based on Industry 4.0 concepts and technologies, such as IoT, machine learning, and cloud computing, and adheres to the ISO 23247 standard.
METHODS: The system modelling is carried out using the Unified Modelling Language (UML) to define the workflows and processes involved, including vaccine storage temperature monitoring and population vaccination status tracking. The proposed architecture is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain. To validate the system's performance and feasibility, simulations are conducted using SimPy, enabling the evaluation of its response under various operational scenarios.
RESULTS: The system facilitates the storage, monitoring, and visualisation of data related to the thermal conditions of ice-lined refrigerators (ILR) and thermal boxes. Additionally, it analyses patient vaccination coverage based on the official immunisation schedule. The key benefits include optimising vaccine storage conditions, reducing dose wastage, continuously monitoring immunisation coverage, and supporting strategic vaccination planning.
CONCLUSION: The paper discusses the future impacts of this approach on immunisation management and its scalability for diverse public health contexts. By leveraging advanced technologies and simulation, this digital twin framework aims to improve the performance and overall impact of immunization services.
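The kind of discrete-event simulation the authors describe can be prototyped in a few lines of SimPy. The sketch below is a toy stand-in, not the paper's model: it samples an ice-lined refrigerator at a fixed interval and flags readings outside the 2-8 °C cold-chain band, with the sampling interval and thermal drift chosen purely for illustration.

import random
import simpy

SAFE_RANGE = (2.0, 8.0)      # cold-chain band for vaccine storage, in degrees C
SAMPLE_EVERY = 15            # minutes between sensor readings (illustrative)

def ilr_sensor(env, name, alarms):
    """Periodically sample an ice-lined refrigerator and log excursions."""
    temp = 5.0
    while True:
        yield env.timeout(SAMPLE_EVERY)
        temp += random.uniform(-0.8, 0.9)          # toy thermal drift
        if not (SAFE_RANGE[0] <= temp <= SAFE_RANGE[1]):
            alarms.append((env.now, name, round(temp, 1)))

env = simpy.Environment()
alarms = []
env.process(ilr_sensor(env, "ILR-01", alarms))
env.run(until=24 * 60)       # simulate one day, in minutes

print(f"{len(alarms)} temperature excursions in 24 h")
for t, fridge, temp in alarms[:5]:
    print(f"  minute {t}: {fridge} at {temp} C")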
Additional Links: PMID-40919327
Citation:
@article {pmid40919327,
year = {2025},
author = {El-Warrak, LO and Miceli de Farias, C and De Azevedo Costa, VHDM},
title = {Simulation-based assessment of digital twin systems for immunisation.},
journal = {Frontiers in digital health},
volume = {7},
number = {},
pages = {1603550},
pmid = {40919327},
issn = {2673-253X},
abstract = {BACKGROUND: This paper presents the application of simulation to assess the functionality of a proposed Digital Twin (DT) architecture for immunisation services in primary healthcare centres. The solution is based on Industry 4.0 concepts and technologies, such as IoT, machine learning, and cloud computing, and adheres to the ISO 23247 standard.
METHODS: The system modelling is carried out using the Unified Modelling Language (UML) to define the workflows and processes involved, including vaccine storage temperature monitoring and population vaccination status tracking. The proposed architecture is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain. To validate the system's performance and feasibility, simulations are conducted using SimPy, enabling the evaluation of its response under various operational scenarios.
RESULTS: The system facilitates the storage, monitoring, and visualisation of data related to the thermal conditions of ice-lined refrigerators (ILR) and thermal boxes. Additionally, it analyses patient vaccination coverage based on the official immunisation schedule. The key benefits include optimising vaccine storage conditions, reducing dose wastage, continuously monitoring immunisation coverage, and supporting strategic vaccination planning.
CONCLUSION: The paper discusses the future impacts of this approach on immunisation management and its scalability for diverse public health contexts. By leveraging advanced technologies and simulation, this digital twin framework aims to improve the performance and overall impact of immunization services.},
}
RevDate: 2025-09-08
Cloud-magnetic resonance imaging system: In the era of 6G and artificial intelligence.
Magnetic resonance letters, 5(1):200138.
Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system and its four generations that integrate emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system is called Cloud-MRI, aiming at solving the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized Imaging Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. Then, the data are uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis. Then, the outcomes are seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and finally improve diagnostic accuracy and work efficiency.
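Independently of the Cloud-MRI system itself, the core step a cloud or edge node would perform on uploaded Cartesian k-space data is an inverse 2-D FFT reconstruction. The numpy sketch below uses a synthetic square phantom in place of ISMRMRD-formatted input.

import numpy as np

# Synthetic stand-in for one 2-D slice of Cartesian k-space data
# (in practice this would be read from an ISMRMRD file uploaded to the cloud).
image_truth = np.zeros((256, 256))
image_truth[96:160, 96:160] = 1.0                 # simple square "phantom"
kspace = np.fft.fftshift(np.fft.fft2(image_truth))

# Basic reconstruction: re-centre the k-space, inverse FFT, take magnitude.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

print("reconstruction error (max abs):", float(np.max(np.abs(recon - image_truth))))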
Additional Links: PMID-40918039
Citation:
@article {pmid40918039,
year = {2025},
author = {Zhou, Y and Wu, Y and Su, Y and Li, J and Cai, J and You, Y and Zhou, J and Guo, D and Qu, X},
title = {Cloud-magnetic resonance imaging system: In the era of 6G and artificial intelligence.},
journal = {Magnetic resonance letters},
volume = {5},
number = {1},
pages = {200138},
pmid = {40918039},
issn = {2772-5162},
abstract = {Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system and its four generations that integrate emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system is called Cloud-MRI, aiming at solving the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized Imaging Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. Then, the data are uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis. Then, the outcomes are seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and finally improve diagnostic accuracy and work efficiency.},
}
RevDate: 2025-09-05
[Development and practice of an interactive chromatography learning tool for beginners based on GeoGebra: a case study of plate theory].
Se pu = Chinese journal of chromatography, 43(9):1078-1085.
This study developed a GeoGebra platform-based interactive pedagogical tool focusing on plate theory to address challenges associated with abstract theory transmission, unidirectional knowledge delivery, and low student engagement in chromatography teaching in instrumental analysis courses. This study introduced an innovative methodology that encompasses theoretical model reconstruction, tool development, and teaching-chain integration that addresses the limitations of existing teaching tools, including the complex operation of professional software, restricted accessibility to web-based tools, and insufficient parameter-adjustment flexibility. An improved mathematical plate-theory model was established by incorporating mobile-phase flow rate, dead time, and phase ratio parameters. A three-tier progressive learning system (single-component simulation, multi-component simulation, and retention-time-equation derivation modules) was developed on a cloud-based computing platform. An integrated teaching chain that combined mathematical modeling (AI-assisted "Doubao" derivation), interactive-parameter adjustment (multiple adjustable chromatographic parameters), and visual verification (chromatographic elution-curve simulation) was implemented. Teaching practice demonstrated that: (1) The developed tool transcends the dimensional limitations of traditional instruction, elevating the classroom task completion rate to 94% and improving the student accuracy rate for solving advanced problems to 76%. (2) The dynamic-parameter-adjustment feature significantly enhances learning engagement by enabling 85% of the students to independently use the tool in subsequent studies and experiments. (3) The AI-powered derivation and regression-analysis modules enable the interdisciplinary integration of theoretical chemistry and computational tools. The process of deriving chromatographic retention-time equations through this methodological approach proved more convincing than the current textbook practice of directly presenting conclusions. The developed innovative "theoretical-model visualizable-model-parameter adjustable-interactive-knowledge generating" model provides a new avenue for addressing teaching challenges associated with chromatography theory, and its open-source framework and modular design philosophy can offer valuable references for digital teaching reform in analytical chemistry.
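The plate-theory relationships such a tool visualises can be reproduced with a few lines of numpy: retention time follows from dead time, distribution constant, and phase ratio, and the elution profile is the standard Gaussian approximation whose width is set by the plate number. The parameter values below are illustrative, and this is not the authors' GeoGebra model.

import numpy as np

# Illustrative chromatographic parameters
N = 2000          # theoretical plate number
t0 = 1.5          # dead time, min (column length / mobile-phase velocity)
K = 40.0          # distribution constant of the solute
beta = 20.0       # phase ratio (V_mobile / V_stationary)

k = K / beta                      # retention factor
t_R = t0 * (1.0 + k)              # retention time
sigma = t_R / np.sqrt(N)          # band standard deviation (plate theory)

t = np.linspace(0, 2 * t_R, 2000)
C = np.exp(-((t - t_R) ** 2) / (2.0 * sigma ** 2))   # Gaussian elution profile

print(f"k = {k:.2f}, tR = {t_R:.2f} min, peak width at base ~ {4 * sigma:.2f} min")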
Additional Links: PMID-40910315
Citation:
@article {pmid40910315,
year = {2025},
author = {Zhang, YH and He, JY and Lin, SJ and Ai, BJ and Shi, ZH and Zhang, HY},
title = {[Development and practice of an interactive chromatography learning tool for beginners based on GeoGebra: a case study of plate theory].},
journal = {Se pu = Chinese journal of chromatography},
volume = {43},
number = {9},
pages = {1078-1085},
pmid = {40910315},
issn = {1872-2059},
abstract = {This study developed a GeoGebra platform-based interactive pedagogical tool focusing on plate theory to address challenges associated with abstract theory transmission, unidirectional knowledge delivery, and low student engagement in chromatography teaching in instrumental analysis courses. This study introduced an innovative methodology that encompasses theoretical model reconstruction, tool development, and teaching-chain integration that addresses the limitations of existing teaching tools, including the complex operation of professional software, restricted accessibility to web-based tools, and insufficient parameter-adjustment flexibility. An improved mathematical plate-theory model was established by incorporating mobile-phase flow rate, dead time, and phase ratio parameters. A three-tier progressive learning system (single-component simulation, multi-component simulation, and retention-time-equation derivation modules) was developed on a cloud-based computing platform. An integrated teaching chain that combined athematical modeling (AI-assisted "Doubao" derivation), interactive-parameter adjustment (multiple adjustable chromatographic parameters), and visual verification (chromatographic elution-curve simulation) was implemented. Teaching practice demonstrated that: (1) The developed tool transcends the dimensional limitations of traditional instruction, elevating the classroom task completion rate to 94% and improving the student accuracy rate for solving advanced problems to 76%. (2) The dynamic-parameter-adjustment feature significantly enhances learning engagement by enabling 85% of the students to independently use the tool in subsequent studies and experiments. (3) The AI-powered derivation and regression-analysis modules enable the interdisciplinary integration of theoretical chemistry and computational tools. The process of deriving chromatographic retention-time equations through this methodological approach proved more convincing than the current textbook practice of directly presenting conclusions. The developed innovative "theoretical-model visualizable-model-parameter adjustable-interactive-knowledge generating" model provides a new avenue for addressing teaching challenges associated with chromatography theory, and its open-source framework and modular design philosophy can offer valuable references for the digital teaching reform in analytical chemistry.},
}
RevDate: 2025-09-02
Enhanced secure storage and data privacy management system for big data based on multilayer model.
Scientific reports, 15(1):32285.
As big data systems expand in scale and complexity, managing and securing sensitive data, especially personnel records, has become a critical challenge in cloud environments. This paper proposes a novel Multi-Layer Secure Cloud Storage Model (MLSCSM) tailored for large-scale personnel data. The model integrates fast and secure ChaCha20 encryption, Dual Stage Data Partitioning (DSDP) to maintain statistical reliability across blocks, k-anonymization to ensure privacy, SHA-512 hashing for data integrity, and Cauchy matrix-based dispersion for fault-tolerant distributed storage. A key novelty lies in combining cryptographic and statistical methods to enable privacy-preserving partitioned storage, optimized for distributed Cloud Computing Environments (CCE). Data blocks are securely encoded, masked, and stored in discrete locations across several cloud platforms, based on factors such as latency, bandwidth, cost, and security. They are later retrieved with integrity verification. The model also includes audit logs, load balancing, and real-time resource evaluation. To validate the system, experiments were conducted using the MIMIC-III dataset on a 20-node Hadoop cluster. Compared to baseline models such as RDFA, SDPMC, and P&XE, the proposed model achieved a reduction in encoding time to 250 ms (block size 75), a CPU usage of 23% for 256 MB of data, a latency as low as 14 ms, and a throughput of up to 139 ms. These results confirm that the model offers superior security, efficiency, and scalability for cloud-based big data storage applications.
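Two of the building blocks named in this abstract, ChaCha20 encryption and SHA-512 integrity hashing, can be exercised with Python's hashlib and the pyca/cryptography package as in the sketch below; the key and nonce handling is illustrative only and does not reproduce the MLSCSM pipeline.

import os
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

record = b'{"employee_id": 1042, "dept": "oncology", "salary_band": "B"}'

# Integrity digest stored alongside the block (SHA-512, as in the abstract).
digest = hashlib.sha512(record).hexdigest()

# ChaCha20 encryption of the block before dispersal to cloud storage.
key = os.urandom(32)            # 256-bit key (illustrative key management)
nonce = os.urandom(16)          # 16-byte nonce expected by this implementation
encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
ciphertext = encryptor.update(record)

# On retrieval: decrypt and verify integrity before use.
decryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).decryptor()
recovered = decryptor.update(ciphertext)
assert hashlib.sha512(recovered).hexdigest() == digest
print("block decrypted and SHA-512 digest verified")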
Additional Links: PMID-40897813
Citation:
@article {pmid40897813,
year = {2025},
author = {Ting, T and Li, M},
title = {Enhanced secure storage and data privacy management system for big data based on multilayer model.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {32285},
pmid = {40897813},
issn = {2045-2322},
abstract = {As big data systems expand in scale and complexity, managing and securing sensitive data-especially personnel records-has become a critical challenge in cloud environments. This paper proposes a novel Multi-Layer Secure Cloud Storage Model (MLSCSM) tailored for large-scale personnel data. The model integrates fast and secure ChaCha20 encryption, Dual Stage Data Partitioning (DSDP) to maintain statistical reliability across blocks, k-anonymization to ensure privacy, SHA-512 hashing for data integrity, and Cauchy matrix-based dispersion for fault-tolerant distributed storage. A key novelty lies in combining cryptographic and statistical methods to enable privacy-preserving partitioned storage, optimized for distributed Cloud Computing Environments (CCE). Data blocks are securely encoded, masked, and stored in discrete locations across several cloud platforms, based on factors such as latency, bandwidth, cost, and security. They are later retrieved with integrity verification. The model also includes audit logs, load balancing, and real-time resource evaluation. To validate the system, experiments were tested using the MIMIC-III dataset on a 20-node Hadoop cluster. Compared to baseline models such as RDFA, SDPMC, and P&XE, the proposed model achieved a reduction in encoding time to 250 ms (block size 75), a CPU usage of 23% for 256 MB of data, a latency as low as 14 ms, and a throughput of up to 139 ms. These results confirm that the model offers superior security, efficiency, and scalability for cloud-based big data storage applications.},
}
RevDate: 2025-09-01
CRFTS: a cluster-centric and reservation-based fault-tolerant scheduling strategy to enhance QoS in cloud computing.
Scientific reports, 15(1):32233.
Cloud systems supply different kinds of on-demand services in accordance with client needs. As the landscape of cloud computing undergoes continuous development, there is a growing imperative for effective utilization of resources, task scheduling, and fault tolerance mechanisms. To decrease the user task execution time (shorten the makespan) with reduced operational expenses, to improve the distribution of load, and to boost utilization of resources, proper mapping of user tasks to the available VMs is necessary. This study introduces a unique perspective in tackling these challenges by implementing inventive scheduling strategies along with robust and proactive fault tolerance mechanisms in cloud environments. This paper presents the Clustering and Reservation Fault-tolerant Scheduling (CRFTS), which adapts the heartbeat mechanism to detect failed VMs proactively and maximizes the system reliability while making it fault-tolerant and optimizing other Quality of Service (QoS) parameters, such as makespan, average resource utilization, and reliability. The study optimizes the allocation of tasks to improve resource utilization and reduce the time required for their completion. At the same time, the proactive reservation-based fault tolerance framework is presented to ensure continuous service delivery throughout its execution without any interruption. The effectiveness of the suggested model is illustrated through simulations and empirical analyses, highlighting enhancements in several QoS parameters in comparison with HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO for various cases and conditions across different tasks and VMs. The outcomes demonstrate that CRFTS achieves average improvements of about 48.7%, 51.2%, 45.4%, 11.8%, 24.5%, and 24.4% in makespan and 13.1%, 9.3%, 6.5%, 21%, 22.1%, and 26.3% in average resource utilization compared to HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO, respectively.
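The heartbeat mechanism referred to above is easy to sketch in plain Python: each VM periodically reports a timestamp, and any VM whose most recent heartbeat is older than a timeout is declared failed so its tasks can be rescheduled to a reserved VM. The timeout and the toy scenario below are illustrative, not the CRFTS parameters.

import time

HEARTBEAT_TIMEOUT = 5.0   # seconds without a heartbeat before a VM is declared failed

last_beat = {}            # vm_id -> timestamp of most recent heartbeat

def record_heartbeat(vm_id):
    last_beat[vm_id] = time.monotonic()

def failed_vms(now=None):
    """Return VMs whose heartbeat is older than the timeout."""
    now = time.monotonic() if now is None else now
    return [vm for vm, t in last_beat.items() if now - t > HEARTBEAT_TIMEOUT]

# Toy scenario: three VMs report, then vm-2 goes silent.
for vm in ("vm-1", "vm-2", "vm-3"):
    record_heartbeat(vm)
record_heartbeat("vm-1")
record_heartbeat("vm-3")

# Pretend 6 seconds have passed since vm-2 last reported.
last_beat["vm-2"] -= 6.0
print("failed:", failed_vms())   # tasks on these VMs would move to reserved VMs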
Additional Links: PMID-40890227
Citation:
@article {pmid40890227,
year = {2025},
author = {Mushtaq, SU and Sheikh, S and Nain, A and Bharany, S and Ghoniem, RM and Taye, BM},
title = {CRFTS: a cluster-centric and reservation-based fault-tolerant scheduling strategy to enhance QoS in cloud computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {32233},
pmid = {40890227},
issn = {2045-2322},
abstract = {Cloud systems supply different kinds of on-demand services in accordance with client needs. As the landscape of cloud computing undergoes continuous development, there is a growing imperative for effective utilization of resources, task scheduling, and fault tolerance mechanisms. To decrease the user task execution time (shorten the makespan) with reduced operational expenses, to improve the distribution of load, and to boost utilization of resources, proper mapping of user tasks to the available VMs is necessary. This study introduces a unique perspective in tackling these challenges by implementing inventive scheduling strategies along with robust and proactive fault tolerance mechanisms in cloud environments. This paper presents the Clustering and Reservation Fault-tolerant Scheduling (CRFTS), which adapts the heartbeat mechanism to detect failed VMs proactively and maximizes the system reliability while making it fault-tolerant and optimizing other Quality of Service (QoS) parameters, such as makespan, average resource utilization, and reliability. The study optimizes the allocation of tasks to improve resource utilization and reduce the time required for their completion. At the same time, the proactive reservation-based fault tolerance framework is presented to ensure continuous service delivery throughout its execution without any interruption. The effectiveness of the suggested model is illustrated through simulations and empirical analyses, highlighting enhancements in several QoS parameters while comparing with HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO for various cases and conditions across different tasks and VMs. The outcomes demonstrate that CRFTS average progresses about 48.7%, 51.2%, 45.4%, 11.8%, 24.5%, 24.4% in terms of makespan and 13.1%, 9.3%, 6.5%, 21%, 22.1%, 26.3% in terms of average resource utilization compared to HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO, respectively.},
}
RevDate: 2025-09-01
AI-Integrated autonomous robotics for solar panel cleaning and predictive maintenance using drone and ground-based systems.
Scientific reports, 15(1):32187.
Solar photovoltaic (PV) systems, especially in dusty and high-temperature regions, suffer performance degradation due to dust accumulation, surface heating, and delayed maintenance. This study proposes an AI-integrated autonomous robotic system combining real-time monitoring, predictive analytics, and intelligent cleaning for enhanced solar panel performance. We developed a hybrid system that integrates CNN-LSTM-based fault detection, Reinforcement Learning (DQN)-driven robotic cleaning, and Edge AI analytics for low-latency decision-making. Thermal and LiDAR-equipped drones detect panel faults, while ground robots clean panel surfaces based on real-time dust and temperature data. The system is built on Jetson Nano and Raspberry Pi 4B units with MQTT-based IoT communication. The system achieved an average cleaning efficiency of 91.3%, reducing dust density from 3.9 to 0.28 mg/m³, and restoring up to 31.2% energy output on heavily soiled panels. CNN-LSTM-based fault detection delivered 92.3% accuracy, while the RL-based cleaning policy reduced energy and water consumption by 34.9%. Edge inference latency averaged 47.2 ms, outperforming cloud processing by 63%. A strong correlation (r = 0.87) between dust concentration and thermal anomalies was confirmed. The proposed IEEE 1876-compliant framework offers a resilient and intelligent solution for real-time solar panel maintenance. By leveraging AI, robotics, and edge computing, the system enhances energy efficiency, reduces manual labor, and provides a scalable model for climate-resilient, smart solar infrastructure.
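The MQTT telemetry link mentioned above can be illustrated with the paho-mqtt client; the sketch below publishes one dust/temperature reading from an edge node. The broker address and topic layout are placeholders, not values from the paper.

import json
import time
import paho.mqtt.publish as publish

BROKER = "broker.example.local"             # placeholder MQTT broker on the site network
TOPIC = "solar/array01/panel07/telemetry"   # illustrative topic layout

# One telemetry sample of the kind a ground robot or drone would report.
reading = {
    "timestamp": time.time(),
    "dust_mg_m3": 1.7,
    "surface_temp_c": 48.2,
}

# Publish a single retained message; QoS 1 so the reading is re-delivered
# if the link to the coordinator drops briefly.
publish.single(
    TOPIC,
    payload=json.dumps(reading),
    qos=1,
    retain=True,
    hostname=BROKER,
    port=1883,
)
print("published", reading, "to", TOPIC)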
Additional Links: PMID-40890211
Citation:
@article {pmid40890211,
year = {2025},
author = {Kishor, I and Mamodiya, U and Patil, V and Naik, N},
title = {AI-Integrated autonomous robotics for solar panel cleaning and predictive maintenance using drone and ground-based systems.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {32187},
pmid = {40890211},
issn = {2045-2322},
abstract = {Solar photovoltaic (PV) systems, especially in dusty and high-temperature regions, suffer performance degradation due to dust accumulation, surface heating, and delayed maintenance. This study proposes an AI-integrated autonomous robotic system combining real-time monitoring, predictive analytics, and intelligent cleaning for enhanced solar panel performance. We developed a hybrid system that integrates CNN-LSTM-based fault detection, Reinforcement Learning (DQN)-driven robotic cleaning, and Edge AI analytics for low-latency decision-making. Thermal and LiDAR-equipped drones detect panel faults, while ground robots clean panel surfaces based on real-time dust and temperature data. The system is built on Jetson Nano and Raspberry Pi 4B units with MQTT-based IoT communication. The system achieved an average cleaning efficiency of 91.3%, reducing dust density from 3.9 to 0.28 mg/m[3], and restoring up to 31.2% energy output on heavily soiled panels. CNN-LSTM-based fault detection delivered 92.3% accuracy, while the RL-based cleaning policy reduced energy and water consumption by 34.9%. Edge inference latency averaged 47.2 ms, outperforming cloud processing by 63%. A strong correlation, r = 0.87 between dust concentration and thermal anomalies, was confirmed. The proposed IEEE 1876-compliant framework offers a resilient and intelligent solution for real-time solar panel maintenance. By leveraging AI, robotics, and edge computing, the system enhances energy efficiency, reduces manual labor, and provides a scalable model for climate-resilient, smart solar infrastructure.},
}
RevDate: 2025-09-01
AI edge cloud service provisioning for knowledge management smart applications.
Scientific reports, 15(1):32246.
This paper investigates a serverless edge-cloud architecture to support knowledge management processes within smart cities, which align with the goals of Society 5.0 to create human-centered, data-driven urban environments. The proposed architecture leverages cloud computing for scalability and on-demand resource provisioning, and edge computing for cost-efficiency and data processing closer to data sources, while also supporting serverless computing for simplified application development. Together, these technologies enhance the responsiveness and efficiency of smart city applications, such as traffic management, public safety, and infrastructure governance, by minimizing latency and improving data handling at scale. Experimental analysis demonstrates the benefits of deploying KM processes on this hybrid architecture, particularly in reducing data transmission times and alleviating network congestion, while at the same time providing options for cost-efficient computations. In addition, the study identifies the characteristics, opportunities, and limitations of the edge and cloud environment in terms of computation and network communication times. This architecture represents a flexible framework for advancing knowledge-driven services in smart cities, supporting further development of smart city applications in KM processes.
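A simplified version of the edge-versus-cloud placement decision such experiments compare can be written as a small cost model: estimate transfer plus compute time for each tier and pick the cheaper one. All capacities and constants below are illustrative assumptions rather than measurements from the study.

def response_time(payload_mb, work_units, bandwidth_mbps, units_per_second, rtt_s=0.0):
    """Transfer time plus compute time for running a task at a given tier."""
    transfer = (payload_mb * 8.0) / bandwidth_mbps + rtt_s
    compute = work_units / units_per_second
    return transfer + compute

def place_task(payload_mb, work_units):
    # Illustrative capacities: the edge node is close but slow,
    # the cloud is fast but behind a WAN link.
    edge = response_time(payload_mb, work_units, bandwidth_mbps=100.0,
                         units_per_second=50.0, rtt_s=0.005)
    cloud = response_time(payload_mb, work_units, bandwidth_mbps=20.0,
                          units_per_second=400.0, rtt_s=0.060)
    return ("edge", edge) if edge <= cloud else ("cloud", cloud)

# Small, compute-light task (e.g. sensor aggregation) vs a heavy analytics job.
print(place_task(payload_mb=2.0, work_units=20))
print(place_task(payload_mb=2.0, work_units=4000))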
Additional Links: PMID-40890146
Citation:
@article {pmid40890146,
year = {2025},
author = {Maciá-Lillo, A and Mora, H and Jimeno-Morenilla, A and García-D'Urso, NE and Azorín-López, J},
title = {AI edge cloud service provisioning for knowledge management smart applications.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {32246},
pmid = {40890146},
issn = {2045-2322},
support = {PID2023-152804OB-I00//Agencia Estatal de Investigación/ ; PID2023-152804OB-I00//Agencia Estatal de Investigación/ ; PID2023-152804OB-I00//Agencia Estatal de Investigación/ ; PID2023-152804OB-I00//Agencia Estatal de Investigación/ ; PID2023-152804OB-I00//Agencia Estatal de Investigación/ ; },
abstract = {This paper investigates a serverless edge-cloud architecture to support knowledge management processes within smart cities, which align with the goals of Society 5.0 to create human-centered, data-driven urban environments. The proposed architecture leverages cloud computing for scalability and on-demand resource provisioning, and edge computing for cost-efficiency and data processing closer to data sources, while also supporting serverless computing for simplified application development. Together, these technologies enhance the responsiveness and efficiency of smart city applications, such as traffic management, public safety, and infrastructure governance, by minimizing latency and improving data handling at scale. Experimental analysis demonstrates the benefits of deploying KM processes on this hybrid architecture, particularly in reducing data transmission times and alleviating network congestion, while at the same time providing options for cost-efficient computations. In addition to that, the study also identifies the characteristics, opportunities and limitations of the edge and cloud environment in terms of computation and network communication times. This architecture represents a flexible framework for advancing knowledge-driven services in smart cities, supporting further development of smart city applications in KM processes.},
}
RevDate: 2025-08-28
CmpDate: 2025-08-29
Mapping interconnectivity of digital twin healthcare research themes through structural topic modeling.
Scientific reports, 15(1):31734.
Digital twin (DT) technology is revolutionizing healthcare systems by leveraging real-time data integration and advanced analytics to enhance patient care, optimize clinical operations, and facilitate simulation. This study aimed to identify key research trends related to the application of DTs to healthcare using structural topic modeling (STM). Five electronic databases were searched for articles related to healthcare and DT. Using the held-out likelihood, residual, semantic coherence, and lower bound as metrics revealed that the optimal number of topics was eight. The "security solutions to improve data processes and communication in healthcare" topic was positioned at the center of the network and connected to multiple nodes. The "cloud computing and data network architecture" and "machine-learning algorithms for accurate detection and prediction" topics served as a bridge between technical and healthcare topics, suggesting their high potential for use in various fields. The widespread adoption of DTs in healthcare requires robust governance structures to protect individual rights, ensure data security and privacy, and promote transparency and fairness. Compliance with regulatory frameworks, ethical guidelines, and a commitment to accountability are also crucial.
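The paper applies structural topic modeling in its original (R-based) form; as a rough Python analogue of the "choose the number of topics by a coherence metric" step, the sketch below fits LDA models for several topic counts with gensim and compares c_v coherence on a toy corpus. It illustrates the selection idea only, not the STM method itself.

from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

# Toy stand-in for tokenised abstracts about digital-twin healthcare research.
texts = [
    ["digital", "twin", "patient", "monitoring", "sensor"],
    ["cloud", "computing", "data", "network", "architecture"],
    ["machine", "learning", "prediction", "detection", "algorithm"],
    ["security", "privacy", "data", "governance", "healthcare"],
    ["sensor", "data", "cloud", "architecture", "monitoring"],
    ["algorithm", "learning", "healthcare", "prediction", "twin"],
    ["privacy", "security", "network", "patient", "data"],
    ["computing", "machine", "digital", "governance", "detection"],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# Fit LDA for several candidate topic counts and compare c_v coherence,
# analogous to the held-out likelihood / coherence screening in the paper.
for k in (2, 4, 8):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   random_state=0, passes=10)
    score = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    print(f"k={k}: coherence={score:.3f}")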
Additional Links: PMID-40877550
Citation:
@article {pmid40877550,
year = {2025},
author = {Kim, EM and Lim, Y},
title = {Mapping interconnectivity of digital twin healthcare research themes through structural topic modeling.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {31734},
pmid = {40877550},
issn = {2045-2322},
support = {NRF-2022R1A2C1009890//National Research Foundation of Korea/ ; NRF-2022R1A2C1009890//National Research Foundation of Korea/ ; },
mesh = {Humans ; *Delivery of Health Care ; Computer Security ; Machine Learning ; Algorithms ; Cloud Computing ; },
abstract = {Digital twin (DT) technology is revolutionizing healthcare systems by leveraging real-time data integration and advanced analytics to enhance patient care, optimize clinical operations, and facilitate simulation. This study aimed to identify key research trends related to the application of DTs to healthcare using structural topic modeling (STM). Five electronic databases were searched for articles related to healthcare and DT. Using the held-out likelihood, residual, semantic coherence, and lower bound as metrics revealed that the optimal number of topics was eight. The "security solutions to improve data processes and communication in healthcare" topic was positioned at the center of the network and connected to multiple nodes. The "cloud computing and data network architecture" and "machine-learning algorithms for accurate detection and prediction" topics served as a bridge between technical and healthcare topics, suggesting their high potential for use in various fields. The widespread adoption of DTs in healthcare requires robust governance structures to protect individual rights, ensure data security and privacy, and promote transparency and fairness. Compliance with regulatory frameworks, ethical guidelines, and a commitment to accountability are also crucial.},
}
MeSH Terms:
Humans
*Delivery of Health Care
Computer Security
Machine Learning
Algorithms
Cloud Computing
RevDate: 2025-08-28
CmpDate: 2025-08-28
Improved modelling of biogenic emissions in human-disturbed forest edges and urban areas.
Nature communications, 16(1):8064.
Biogenic volatile organic compounds (BVOCs) are critical to biosphere-atmosphere interactions, profoundly influencing atmospheric chemistry, air quality and climate, yet accurately estimating their emissions across diverse ecosystems remains challenging. Here we introduce GEE-MEGAN, a cloud-native extension of the widely used MEGAN2.1 model, integrating dynamic satellite-derived land cover and vegetation within Google Earth Engine to produce near-real-time BVOC emissions at 10-30 m resolution, enabling fine-scale tracking of emissions in rapidly changing environments. GEE-MEGAN reduces BVOC emission estimates by 31% and decreases root mean square errors by up to 48.6% relative to MEGAN2.1 in human-disturbed forest edges, and reveals summertime BVOC emissions up to 25‑fold higher than previous estimates in urban areas such as London, Los Angeles, Paris, and Beijing. By capturing fine-scale landscape heterogeneity and human-driven dynamics, GEE-MEGAN significantly improves BVOC emission estimates, providing crucial insights into the complex interactions among BVOCs, climate, and air quality across both natural and human-modified environments.
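GEE-MEGAN itself is not reproduced here; the sketch below only illustrates the kind of Google Earth Engine computation such a model builds on, deriving a 10 m vegetation index over a small area from Sentinel-2 surface reflectance with the earthengine-api client. It assumes an authenticated Earth Engine account, and the collection ID, dates, and location are illustrative.

import ee

ee.Initialize()   # assumes an authenticated Earth Engine account

# Small area of interest (illustrative: a point near central London, 2 km buffer).
region = ee.Geometry.Point([-0.1276, 51.5072]).buffer(2000)

# Median summer Sentinel-2 surface-reflectance composite at 10 m.
composite = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
             .filterBounds(region)
             .filterDate("2024-06-01", "2024-09-01")
             .median())

# NDVI from the near-infrared (B8) and red (B4) bands; vegetation indices of this
# kind are among the land-surface inputs a canopy emission model depends on.
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
mean_ndvi = ndvi.reduceRegion(reducer=ee.Reducer.mean(),
                              geometry=region, scale=10).getInfo()
print("mean summer NDVI over the area of interest:", mean_ndvi)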
Additional Links: PMID-40877244
Citation:
@article {pmid40877244,
year = {2025},
author = {Zhang, Y and Ran, H and Guenther, A and Zhang, Q and George, C and Mellouki, W and Sheng, G and Peng, P and Wang, X},
title = {Improved modelling of biogenic emissions in human-disturbed forest edges and urban areas.},
journal = {Nature communications},
volume = {16},
number = {1},
pages = {8064},
pmid = {40877244},
issn = {2041-1723},
support = {42321003//National Natural Science Foundation of China (National Science Foundation of China)/ ; },
mesh = {Humans ; *Volatile Organic Compounds/analysis ; *Forests ; *Models, Theoretical ; Cities ; *Environmental Monitoring/methods ; *Air Pollutants/analysis ; Ecosystem ; Air Pollution/analysis ; Climate ; },
abstract = {Biogenic volatile organic compounds (BVOCs) are critical to biosphere-atmosphere interactions, profoundly influencing atmospheric chemistry, air quality and climate, yet accurately estimating their emissions across diverse ecosystems remains challenging. Here we introduce GEE-MEGAN, a cloud-native extension of the widely used MEGAN2.1 model, integrating dynamic satellite-derived land cover and vegetation within Google Earth Engine to produce near-real-time BVOC emissions at 10-30 m resolution, enabling fine-scale tracking of emissions in rapidly changing environments. GEE-MEGAN reduces BVOC emission estimates by 31% and decreases root mean square errors by up to 48.6% relative to MEGAN2.1 in human-disturbed forest edges, and reveals summertime BVOC emissions up to 25‑fold higher than previous estimates in urban areas such as London, Los Angeles, Paris, and Beijing. By capturing fine-scale landscape heterogeneity and human-driven dynamics, GEE-MEGAN significantly improves BVOC emission estimates, providing crucial insights to the complex interactions among BVOCs, climate, and air quality across both natural and human-modified environments.},
}
MeSH Terms:
Humans
*Volatile Organic Compounds/analysis
*Forests
*Models, Theoretical
Cities
*Environmental Monitoring/methods
*Air Pollutants/analysis
Ecosystem
Air Pollution/analysis
Climate
RevDate: 2025-08-28
A Comprehensive Evaluation of IoT Cloud Platforms: A Feature-Driven Review with a Decision-Making Tool.
Sensors (Basel, Switzerland), 25(16): pii:s25165124.
The rapid proliferation of Internet of Things (IoT) devices has led to a growing ecosystem of Cloud Platforms designed to manage, process, and analyze IoT data. Selecting the optimal IoT Cloud Platform is a critical decision for businesses and developers, yet it presents a significant challenge due to the diverse range of features, pricing models, and architectural nuances. This manuscript presents a comprehensive, feature-driven review of twelve prominent IoT Cloud Platforms, including AWS IoT Core, IoT on Google Cloud Platform, and Microsoft Azure IoT Hub among others. We meticulously analyze each platform across nine key features: Security, Scalability and Performance, Interoperability, Data Analytics and AI/ML Integration, Edge Computing Support, Pricing Models and Cost-effectiveness, Developer Tools and SDK Support, Compliance and Standards, and Over-The-Air (OTA) Update Capabilities. For each feature, platforms are quantitatively scored (1-10) based on an in-depth assessment of their capabilities and offerings at the time of research. Recognizing the dynamic nature of this domain, we present our findings in a two-dimensional table to provide a clear comparative overview. Furthermore, to empower users in their decision-making process, we introduce a novel, web-based tool for evaluating IoT Cloud Platforms, called the "IoT Cloud Platforms Selector". This interactive tool allows users to assign personalized weights to each feature, dynamically calculating and displaying weighted scores for each platform, thereby facilitating a tailored selection process. This research provides a valuable resource for researchers, practitioners, and organizations seeking to navigate the complex landscape of IoT Cloud Platforms.
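The selector's core calculation is a weighted sum of per-feature scores. The sketch below shows that arithmetic with made-up scores over a subset of the nine features; the real per-platform scores are in the manuscript's comparison table.

# Illustrative (made-up) per-feature scores on the paper's 1-10 scale.
scores = {
    "AWS IoT Core":     {"security": 9, "scalability": 9, "pricing": 6, "edge": 8},
    "Azure IoT Hub":    {"security": 9, "scalability": 8, "pricing": 7, "edge": 8},
    "Google Cloud IoT": {"security": 8, "scalability": 8, "pricing": 7, "edge": 7},
}

# User-assigned weights, normalised so they sum to 1 (as a selector tool would).
weights = {"security": 0.4, "scalability": 0.3, "pricing": 0.2, "edge": 0.1}

def weighted_score(platform_scores, weights):
    return sum(platform_scores[f] * w for f, w in weights.items())

ranked = sorted(((weighted_score(s, weights), name) for name, s in scores.items()),
                reverse=True)
for total, name in ranked:
    print(f"{name}: {total:.2f}")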
Additional Links: PMID-40871988
Citation:
@article {pmid40871988,
year = {2025},
author = {Panagou, IC and Katsoulis, S and Nannos, E and Zantalis, F and Koulouras, G},
title = {A Comprehensive Evaluation of IoT Cloud Platforms: A Feature-Driven Review with a Decision-Making Tool.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {16},
pages = {},
doi = {10.3390/s25165124},
pmid = {40871988},
issn = {1424-8220},
abstract = {The rapid proliferation of Internet of Things (IoT) devices has led to a growing ecosystem of Cloud Platforms designed to manage, process, and analyze IoT data. Selecting the optimal IoT Cloud Platform is a critical decision for businesses and developers, yet it presents a significant challenge due to the diverse range of features, pricing models, and architectural nuances. This manuscript presents a comprehensive, feature-driven review of twelve prominent IoT Cloud Platforms, including AWS IoT Core, IoT on Google Cloud Platform, and Microsoft Azure IoT Hub among others. We meticulously analyze each platform across nine key features: Security, Scalability and Performance, Interoperability, Data Analytics and AI/ML Integration, Edge Computing Support, Pricing Models and Cost-effectiveness, Developer Tools and SDK Support, Compliance and Standards, and Over-The-Air (OTA) Update Capabilities. For each feature, platforms are quantitatively scored (1-10) based on an in-depth assessment of their capabilities and offerings at the time of research. Recognizing the dynamic nature of this domain, we present our findings in a two-dimensional table to provide a clear comparative overview. Furthermore, to empower users in their decision-making process, we introduce a novel, web-based tool for evaluating IoT Cloud Platforms, called the "IoT Cloud Platforms Selector". This interactive tool allows users to assign personalized weights to each feature, dynamically calculating and displaying weighted scores for each platform, thereby facilitating a tailored selection process. This research provides a valuable resource for researchers, practitioners, and organizations seeking to navigate the complex landscape of IoT Cloud Platforms.},
}
RevDate: 2025-08-28
CmpDate: 2025-08-28
Computational Architectures for Precision Dairy Nutrition Digital Twins: A Technical Review and Implementation Framework.
Sensors (Basel, Switzerland), 25(16): pii:s25164899.
Sensor-enabled digital twins (DTs) are reshaping precision dairy nutrition by seamlessly integrating real-time barn telemetry with advanced biophysical simulations in the cloud. Drawing insights from 122 peer-reviewed studies spanning 2010-2025, this systematic review reveals how DT architectures for dairy cattle are conceptualized, validated, and deployed. We introduce a novel five-dimensional classification framework-spanning application domain, modeling paradigms, computational topology, validation protocols, and implementation maturity-to provide a coherent comparative lens across diverse DT implementations. Hybrid edge-cloud architectures emerge as optimal solutions, with lightweight CNN-LSTM models embedded in collar or rumen-bolus microcontrollers achieving over 90% accuracy in recognizing feeding and rumination behaviors. Simultaneously, remote cloud systems harness mechanistic fermentation simulations and multi-objective genetic algorithms to optimize feed composition, minimize greenhouse gas emissions, and balance amino acid nutrition. Field-tested prototypes indicate significant agronomic benefits, including 15-20% enhancements in feed conversion efficiency and water use reductions of up to 40%. Nevertheless, critical challenges remain: effectively fusing heterogeneous sensor data amid high barn noise, ensuring millisecond-level synchronization across unreliable rural networks, and rigorously verifying AI-generated nutritional recommendations across varying genotypes, lactation phases, and climates. Overcoming these gaps necessitates integrating explainable AI with biologically grounded digestion models, federated learning protocols for data privacy, and standardized PRISMA-based validation approaches. The distilled implementation roadmap offers actionable guidelines for sensor selection, middleware integration, and model lifecycle management, enabling proactive rather than reactive dairy management-an essential leap toward climate-smart, welfare-oriented, and economically resilient dairy farming.
Additional Links: PMID-40871763
Citation:
@article {pmid40871763,
year = {2025},
author = {Rao, S and Neethirajan, S},
title = {Computational Architectures for Precision Dairy Nutrition Digital Twins: A Technical Review and Implementation Framework.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {16},
pages = {},
doi = {10.3390/s25164899},
pmid = {40871763},
issn = {1424-8220},
support = {R37424//NSERC/ ; },
mesh = {Animals ; Cattle ; *Dairying/methods ; Animal Feed ; Female ; Algorithms ; Telemetry ; },
abstract = {Sensor-enabled digital twins (DTs) are reshaping precision dairy nutrition by seamlessly integrating real-time barn telemetry with advanced biophysical simulations in the cloud. Drawing insights from 122 peer-reviewed studies spanning 2010-2025, this systematic review reveals how DT architectures for dairy cattle are conceptualized, validated, and deployed. We introduce a novel five-dimensional classification framework-spanning application domain, modeling paradigms, computational topology, validation protocols, and implementation maturity-to provide a coherent comparative lens across diverse DT implementations. Hybrid edge-cloud architectures emerge as optimal solutions, with lightweight CNN-LSTM models embedded in collar or rumen-bolus microcontrollers achieving over 90% accuracy in recognizing feeding and rumination behaviors. Simultaneously, remote cloud systems harness mechanistic fermentation simulations and multi-objective genetic algorithms to optimize feed composition, minimize greenhouse gas emissions, and balance amino acid nutrition. Field-tested prototypes indicate significant agronomic benefits, including 15-20% enhancements in feed conversion efficiency and water use reductions of up to 40%. Nevertheless, critical challenges remain: effectively fusing heterogeneous sensor data amid high barn noise, ensuring millisecond-level synchronization across unreliable rural networks, and rigorously verifying AI-generated nutritional recommendations across varying genotypes, lactation phases, and climates. Overcoming these gaps necessitates integrating explainable AI with biologically grounded digestion models, federated learning protocols for data privacy, and standardized PRISMA-based validation approaches. The distilled implementation roadmap offers actionable guidelines for sensor selection, middleware integration, and model lifecycle management, enabling proactive rather than reactive dairy management-an essential leap toward climate-smart, welfare-oriented, and economically resilient dairy farming.},
}
MeSH Terms:
Animals
Cattle
*Dairying/methods
Animal Feed
Female
Algorithms
Telemetry
RevDate: 2025-08-28
AI-Powered Adaptive Disability Prediction and Healthcare Analytics Using Smart Technologies.
Diagnostics (Basel, Switzerland), 15(16): pii:diagnostics15162104.
Background: By leveraging advanced wireless technologies, Healthcare Industry 5.0 promotes the continuous monitoring of real-time medical acquisition from the physical environment. These systems help identify early diseases by collecting health records from patients' bodies promptly using biosensors. The dynamic nature of medical devices not only enhances the data analysis in medical services and the prediction of chronic diseases, but also improves remote diagnostics with the latency-aware healthcare system. However, due to scalability and reliability limitations in data processing, most existing healthcare systems face challenges in the timely detection of personalized diseases, leading to inconsistent diagnoses, particularly when continuous monitoring is crucial. Methods: This work proposes an adaptive and secure framework for disability identification using the Internet of Medical Things (IoMT), integrating edge computing and artificial intelligence. To achieve the shortest response time for medical decisions, the proposed framework explores lightweight edge computing processes that collect physiological and behavioral data using biosensors. Furthermore, it offers a trusted mechanism using decentralized strategies to protect big data analytics from malicious activities and increase authentic access to sensitive medical data. Lastly, it provides personalized healthcare interventions while monitoring healthcare applications using realistic health records, thereby enhancing the system's ability to identify diseases associated with chronic conditions. Results: The proposed framework is tested using simulations, and the results indicate the high accuracy of the healthcare system in detecting disabilities at the edges, while enhancing the prompt response of the cloud server and guaranteeing the security of medical data through lightweight encryption methods and federated learning techniques. Conclusions: The proposed framework offers a secure and efficient solution for identifying disabilities in healthcare systems by leveraging IoMT, edge computing, and AI. It addresses critical challenges in real-time disease monitoring, enhancing diagnostic accuracy and ensuring the protection of sensitive medical data.
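One ingredient mentioned in the Results, federated learning, can be illustrated by the standard federated-averaging step: edge sites train locally and the aggregator combines their parameters as a data-size-weighted average, so raw records never leave the site. The numpy sketch below uses illustrative shapes and client counts and is not the authors' framework.

import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    avg = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            avg[i] += layer * (n / total)
    return avg

# Three edge sites sharing a tiny two-layer linear model (illustrative shapes).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 8)), rng.normal(size=(8,))] for _ in range(3)]
sizes = [120, 300, 80]             # number of local health records per site

global_model = federated_average(clients, sizes)
print("aggregated layer shapes:", [w.shape for w in global_model])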
Additional Links: PMID-40870956
Citation:
@article {pmid40870956,
year = {2025},
author = {Alamri, M and Humayun, M and Haseeb, K and Abbas, N and Ramzan, N},
title = {AI-Powered Adaptive Disability Prediction and Healthcare Analytics Using Smart Technologies.},
journal = {Diagnostics (Basel, Switzerland)},
volume = {15},
number = {16},
pages = {},
doi = {10.3390/diagnostics15162104},
pmid = {40870956},
issn = {2075-4418},
abstract = {Background: By leveraging advanced wireless technologies, Healthcare Industry 5.0 promotes the continuous monitoring of real-time medical acquisition from the physical environment. These systems help identify early diseases by collecting health records from patients' bodies promptly using biosensors. The dynamic nature of medical devices not only enhances the data analysis in medical services and the prediction of chronic diseases, but also improves remote diagnostics with the latency-aware healthcare system. However, due to scalability and reliability limitations in data processing, most existing healthcare systems pose research challenges in the timely detection of personalized diseases, leading to inconsistent diagnoses, particularly when continuous monitoring is crucial. Methods: This work propose an adaptive and secure framework for disability identification using the Internet of Medical Things (IoMT), integrating edge computing and artificial intelligence. To achieve the shortest response time for medical decisions, the proposed framework explores lightweight edge computing processes that collect physiological and behavioral data using biosensors. Furthermore, it offers a trusted mechanism using decentralized strategies to protect big data analytics from malicious activities and increase authentic access to sensitive medical data. Lastly, it provides personalized healthcare interventions while monitoring healthcare applications using realistic health records, thereby enhancing the system's ability to identify diseases associated with chronic conditions. Results: The proposed framework is tested using simulations, and the results indicate the high accuracy of the healthcare system in detecting disabilities at the edges, while enhancing the prompt response of the cloud server and guaranteeing the security of medical data through lightweight encryption methods and federated learning techniques. Conclusions: The proposed framework offers a secure and efficient solution for identifying disabilities in healthcare systems by leveraging IoMT, edge computing, and AI. It addresses critical challenges in real-time disease monitoring, enhancing diagnostic accuracy and ensuring the protection of sensitive medical data.},
}
RevDate: 2025-08-28
Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space-Air-Marine Network.
Entropy (Basel, Switzerland), 27(8): pii:e27080803.
This paper investigates the problem of computation offloading and resource allocation in an integrated space-air-sea network based on unmanned aerial vehicle (UAV) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud-edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, and indicate that the proposed MADDPG-based algorithm reduces the total system cost by 15-60% specifically.
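The energy-latency trade-off that the agents optimize can be illustrated with a simple weighted cost for partial task offloading. The Python sketch below is a hypothetical stand-in for the paper's system model; all parameter names, weights, and numbers are assumptions rather than values from the article.
# Toy offloading cost (illustrative only): alpha * device energy (J) + beta * latency (s).
def offload_cost(task_bits, cycles_per_bit, offload_ratio,
                 local_cps, local_power_w,
                 uplink_bps, tx_power_w, edge_cps,
                 alpha=0.5, beta=0.5):
    local_bits = task_bits * (1.0 - offload_ratio)
    remote_bits = task_bits * offload_ratio
    t_local = local_bits * cycles_per_bit / local_cps          # local execution time
    e_local = local_power_w * t_local                          # local execution energy
    t_tx = remote_bits / uplink_bps                            # UAV/LEO uplink time
    t_edge = remote_bits * cycles_per_bit / edge_cps           # edge execution time
    e_tx = tx_power_w * t_tx                                   # transmission energy
    latency = max(t_local, t_tx + t_edge)                      # local and remote parts overlap
    energy = e_local + e_tx                                    # device-side energy only
    return alpha * energy + beta * latency

print(offload_cost(1e6, 500, 0.4, 1e9, 0.8, 2e6, 0.2, 10e9))  # 1 Mbit task, 40% offloaded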
Additional Links: PMID-40870275
Citation:
@article {pmid40870275,
year = {2025},
author = {Gao, H},
title = {Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space-Air-Marine Network.},
journal = {Entropy (Basel, Switzerland)},
volume = {27},
number = {8},
pages = {},
doi = {10.3390/e27080803},
pmid = {40870275},
issn = {1099-4300},
support = {62271303//National Natural Science Foundation of China/ ; },
abstract = {This paper investigates the problem of computation offloading and resource allocation in an integrated space-air-sea network based on unmanned aerial vehicle (UAV) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud-edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, and indicate that the proposed MADDPG-based algorithm reduces the total system cost by 15-60% specifically.},
}
RevDate: 2025-08-27
Integrating Google Maps and Smooth Street View Videos for Route Planning.
Journal of imaging, 11(8):.
This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client-server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.
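One building block such a system needs is a set of evenly spaced sampling locations along the computed route, one per street-view frame. The Python sketch below shows only that resampling step, under the simplifying assumption of planar coordinates in metres; it is not the authors' implementation, and a real system would use geodesic distances and the imagery provider's API.
import math

def resample_route(points, spacing_m):
    """points: list of (x, y) route vertices; returns samples every spacing_m metres."""
    samples = [points[0]]
    carried = 0.0                                   # distance walked since the last sample
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing_m - carried
        while d <= seg:
            t = d / seg
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing_m
        carried = (carried + seg) % spacing_m
    return samples

route = [(0, 0), (100, 0), (100, 50)]               # toy polyline
print(len(resample_route(route, 10)))               # 16 frame locations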
Additional Links: PMID-40863461
Citation:
@article {pmid40863461,
year = {2025},
author = {Massimi, F and Tedeschi, A and Bagadi, K and Benedetto, F},
title = {Integrating Google Maps and Smooth Street View Videos for Route Planning.},
journal = {Journal of imaging},
volume = {11},
number = {8},
pages = {},
pmid = {40863461},
issn = {2313-433X},
abstract = {This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client-server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.},
}
RevDate: 2025-08-27
CmpDate: 2025-08-27
Application of a "nursing education cloud platform"-based combined and phased training model in the education of standardized-training nurses: A quasi-experimental study.
Medicine, 104(34):e44138.
The evolution of nursing education has rendered traditional standardized-training models increasingly inadequate, primarily due to their inflexible curricula, limited personalized instruction, and delayed feedback loops. While stage-based training models offer improved coherence through structured planning, they encounter difficulties in resource integration and real-time interaction. Contemporary advancements in cloud computing and Internet of Things technologies present novel opportunities for educational reform. Nursing Education Cloud Platform (NECP)-based systems have demonstrated efficacy in medical education, particularly in efficient resource management, data-driven decision-making, and the design of adaptable learning pathways. Despite the nascent implementation of cloud platforms in standardized nurse training, the sustained impact on multifaceted competencies, including professional identity and clinical reasoning, warrants further investigation. The primary objective of this investigation was to assess the effectiveness of a NECP-integrated, phased training model in enhancing standardized-training nurses' theoretical comprehension, practical competencies, professional self-perception, and clinical decision-making capabilities, while also examining its potential to refine nursing education methodologies. This quasi-experimental, non-randomized controlled trial evaluated the impact of a NECP-based training program. The study encompassed an experimental group (n = 56, receiving cloud platform-based training from September 2021 to August 2022) and a control group (n = 56, undergoing traditional training from September 2020 to August 2021). Group assignment was determined by the hospital's annual training schedule, thus employing a natural grouping based on the time period. Propensity score matching was utilized to mitigate baseline characteristic imbalances. The intervention's effects were assessed across several domains, including theoretical knowledge, operational skills, professional identity, and clinical reasoning abilities. ANCOVA was employed to account for temporal covariates. The experimental group scored significantly higher than the control group in theoretical knowledge (88.70 ± 5.07 vs 75.55 ± 9.01, P < .05), operational skills (94.27 ± 2.04 vs 90.95 ± 3.69, P < .05), professional identity (73.18 ± 10.18 vs 62.54 ± 15.48, P < .05), and clinical reasoning ability (60.95 ± 8.90 vs 51.09 ± 12.28, P < .05). The integration of the "NECP" with a phased training model demonstrates efficacy in augmenting nurses' competencies. However, the potential for selection bias, inherent in the non-randomized design, warrants careful consideration in the interpretation of these findings. Further investigation, specifically through multicenter longitudinal studies, is recommended to ascertain the generalizability of these results.
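For readers unfamiliar with the ANCOVA design described above, the Python sketch below shows a covariate-adjusted comparison of post-training scores between two groups using statsmodels. The column names and toy scores are hypothetical and do not reproduce the study's data or analysis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["cloud"] * 4 + ["traditional"] * 4,   # hypothetical groups
    "baseline": [70, 72, 68, 75, 71, 69, 74, 70],      # pre-training covariate
    "post":     [88, 90, 85, 92, 76, 74, 79, 75],      # post-training score
})

# ANCOVA-style model: post-score explained by group, adjusting for baseline.
model = smf.ols("post ~ C(group) + baseline", data=df).fit()
print(model.summary().tables[1])                       # adjusted group effect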
Additional Links: PMID-40859495
Citation:
@article {pmid40859495,
year = {2025},
author = {Tang, H and Yuan, Y and Liu, H and Hu, S},
title = {Application of a "nursing education cloud platform"-based combined and phased training model in the education of standardized-training nurses: A quasi-experimental study.},
journal = {Medicine},
volume = {104},
number = {34},
pages = {e44138},
pmid = {40859495},
issn = {1536-5964},
mesh = {Humans ; *Cloud Computing ; Clinical Competence ; Female ; Male ; Adult ; Models, Educational ; *Education, Nursing/methods ; Curriculum ; },
abstract = {The evolution of nursing education has rendered traditional standardized-training models increasingly inadequate, primarily due to their inflexible curricula, limited personalized instruction, and delayed feedback loops. While stage-based training models offer improved coherence through structured planning, they encounter difficulties in resource integration and real-time interaction. Contemporary advancements in cloud computing and Internet of Things technologies present novel opportunities for educational reform. Nursing Education Cloud Platform (NECP)-based systems have demonstrated efficacy in medical education, particularly in efficient resource management, data-driven decision-making, and the design of adaptable learning pathways. Despite the nascent implementation of cloud platforms in standardized nurse training, the sustained impact on multifaceted competencies, including professional identity and clinical reasoning, warrants further investigation. The primary objective of this investigation was to assess the effectiveness of a NECP-integrated, phased training model in enhancing standardized-training nurses' theoretical comprehension, practical competencies, professional self-perception, and clinical decision-making capabilities, while also examining its potential to refine nursing education methodologies. This quasi-experimental, non-randomized controlled trial evaluated the impact of a NECP-based training program. The study encompassed an experimental group (n = 56, receiving cloud platform-based training from September 2021 to August 2022) and a control group (n = 56, undergoing traditional training from September 2020 to August 2021). Group assignment was determined by the hospital's annual training schedule, thus employing a natural grouping based on the time period. Propensity score matching was utilized to mitigate baseline characteristic imbalances. The intervention's effects were assessed across several domains, including theoretical knowledge, operational skills, professional identity, and clinical reasoning abilities. ANCOVA was employed to account for temporal covariates. The experimental group scored significantly higher than the control group in theoretical knowledge (88.70 ± 5.07 vs 75.55 ± 9.01, P < .05), operational skills (94.27 ± 2.04 vs 90.95 ± 3.69, P < .05), professional identity (73.18 ± 10.18 vs 62.54 ± 15.48, P < .05), and clinical reasoning ability (60.95 ± 8.90 vs 51.09 ± 12.28, P < .05). The integration of the "NECP" with a phased training model demonstrates efficacy in augmenting nurses' competencies. However, the potential for selection bias, inherent in the non-randomized design, warrants careful consideration in the interpretation of these findings. Further investigation, specifically through multicenter longitudinal studies, is recommended to ascertain the generalizability of these results.},
}
MeSH Terms:
Humans
*Cloud Computing
Clinical Competence
Female
Male
Adult
Models, Educational
*Education, Nursing/methods
Curriculum
RevDate: 2025-08-26
CmpDate: 2025-08-26
Comparing Multiple Imputation Methods to Address Missing Patient Demographics in Immunization Information Systems: Retrospective Cohort Study.
JMIR public health and surveillance, 11:e73916 pii:v11i1e73916.
BACKGROUND: Immunization Information Systems (IIS) and surveillance data are essential for public health interventions and programming; however, missing data are often a challenge, potentially introducing bias and impacting the accuracy of vaccine coverage assessments, particularly in addressing disparities.
OBJECTIVE: This study aimed to evaluate the performance of 3 multiple imputation methods, Stata's (StataCorp LLC) multiple imputation using chained equations (MICE), scikit-learn's Iterative-Imputer, and Python's miceforest package, in managing missing race and ethnicity data in large-scale surveillance datasets. We compared these methodologies on their ability to preserve demographic distributions and on computational efficiency, and we performed G-tests on contingency tables to obtain likelihood ratio statistics assessing the association between race and ethnicity and flu vaccination status.
METHODS: In this retrospective cohort study, we analyzed 2021-2022 flu vaccination and demographic data from the West Virginia Immunization Information System (N=2,302,036), where race (15%) and ethnicity (34%) were missing. MICE, Iterative Imputer, and miceforest were used to impute missing variables, generating 15 datasets each. Computational efficiency, demographic distribution preservation, and spatial clustering patterns were assessed using G-statistics.
RESULTS: After imputation, an additional 780,339 observations were obtained compared with complete case analysis. All imputation methods exhibited significant spatial clustering for race imputation (G-statistics: MICE=26,452.7, Iterative-Imputer=128,280.3, Miceforest=26,891.5; P<.001), while ethnicity imputation showed variable clustering patterns (G-statistics: MICE=1142.2, Iterative-Imputer=1.7, Miceforest=2185.0; P: MICE<.001, Iterative-Imputer=1.7, Miceforest<.001). MICE and miceforest best preserved the proportional distribution of demographics. Computational efficiency varied, with MICE requiring 14 hours, Iterative Imputer 2 minutes, and miceforest 10 minutes for 15 imputations. Postimputation estimates indicated a 0.87%-18% reduction in stratified flu vaccination coverage rates. Overall estimated flu vaccination rates decreased from 26% to 19% after imputations.
CONCLUSIONS: Both MICE and Miceforest offer flexible and reliable approaches for imputing missing demographic data while mitigating bias compared with Iterative-Imputer. Our results also highlight that the imputation method can profoundly affect research findings. Though MICE and Miceforest had better effect sizes and reliability, MICE was much more computationally and time-expensive, limiting its use in large, surveillance datasets. Miceforest can use cloud-based computing, which further enhances efficiency by offloading resource-intensive tasks, enabling parallel execution, and minimizing processing delays. The significant decrease in vaccination coverage estimates validates how incomplete or missing data can eclipse real disparities. Our findings support regular application of imputation methods in immunization surveillance to improve health equity evaluations and shape targeted public health interventions and programming.
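Of the three methods compared, scikit-learn's Iterative-Imputer is the most compact to demonstrate. The Python sketch below generates several imputed datasets by varying the random seed with posterior sampling; it is illustrative only, and the tiny numeric frame stands in for the categorical, multimillion-record IIS data.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates the estimator)
from sklearn.impute import IterativeImputer

df = pd.DataFrame({"age":            [4, 12, np.nan, 35, 67, np.nan],
                   "race_code":      [1, np.nan, 2, 2, np.nan, 1],
                   "flu_vaccinated": [1, 0, 1, np.nan, 0, 1]})   # toy stand-in for IIS records

imputed_sets = []
for m in range(15):                                  # 15 imputations, as in the study
    imp = IterativeImputer(sample_posterior=True, random_state=m, max_iter=10)
    imputed_sets.append(pd.DataFrame(imp.fit_transform(df), columns=df.columns))

print(imputed_sets[0].round(2))                      # first completed dataset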
Additional Links: PMID-40857554
Citation:
@article {pmid40857554,
year = {2025},
author = {Brown, S and Kudia, O and Kleine, K and Kidd, B and Wines, R and Meckes, N},
title = {Comparing Multiple Imputation Methods to Address Missing Patient Demographics in Immunization Information Systems: Retrospective Cohort Study.},
journal = {JMIR public health and surveillance},
volume = {11},
number = {},
pages = {e73916},
doi = {10.2196/73916},
pmid = {40857554},
issn = {2369-2960},
mesh = {Retrospective Studies ; Humans ; Female ; Male ; *Information Systems/statistics & numerical data ; *Demography/statistics & numerical data ; Child, Preschool ; Cohort Studies ; Adolescent ; Child ; *Immunization/statistics & numerical data ; Adult ; Infant ; Middle Aged ; },
abstract = {BACKGROUND: Immunization Information Systems (IIS) and surveillance data are essential for public health interventions and programming; however, missing data are often a challenge, potentially introducing bias and impacting the accuracy of vaccine coverage assessments, particularly in addressing disparities.
OBJECTIVE: This study aimed to evaluate the performance of 3 multiple imputation methods, Stata's (StataCorp LLC) multiple imputation using chained equations (MICE), scikit-learn's Iterative-Imputer, and Python's miceforest package, in managing missing race and ethnicity data in large-scale surveillance datasets. We compared these methodologies in their ability to preserve demographic distribution, computational efficiency, and performed G-tests on contingency tables to obtain likelihood ratio statistics to assess the association between race and ethnicity and flu vaccination status.
METHODS: In this retrospective cohort study, we analyzed 2021-2022 flu vaccination and demographic data from the West Virginia Immunization Information System (N=2,302,036), where race (15%) and ethnicity (34%) were missing. MICE, Iterative Imputer, and miceforest were used to impute missing variables, generating 15 datasets each. Computational efficiency, demographic distribution preservation, and spatial clustering patterns were assessed using G-statistics.
RESULTS: After imputation, an additional 780,339 observations were obtained compared with complete case analysis. All imputation methods exhibited significant spatial clustering for race imputation (G-statistics: MICE=26,452.7, Iterative-Imputer=128,280.3, Miceforest=26,891.5; P<.001), while ethnicity imputation showed variable clustering patterns (G-statistics: MICE=1142.2, Iterative-Imputer=1.7, Miceforest=2185.0; P: MICE<.001, Iterative-Imputer=1.7, Miceforest<.001). MICE and miceforest best preserved the proportional distribution of demographics. Computational efficiency varied, with MICE requiring 14 hours, Iterative Imputer 2 minutes, and miceforest 10 minutes for 15 imputations. Postimputation estimates indicated a 0.87%-18% reduction in stratified flu vaccination coverage rates. Overall estimated flu vaccination rates decreased from 26% to 19% after imputations.
CONCLUSIONS: Both MICE and Miceforest offer flexible and reliable approaches for imputing missing demographic data while mitigating bias compared with Iterative-Imputer. Our results also highlight that the imputation method can profoundly affect research findings. Though MICE and Miceforest had better effect sizes and reliability, MICE was much more computationally and time-expensive, limiting its use in large, surveillance datasets. Miceforest can use cloud-based computing, which further enhances efficiency by offloading resource-intensive tasks, enabling parallel execution, and minimizing processing delays. The significant decrease in vaccination coverage estimates validates how incomplete or missing data can eclipse real disparities. Our findings support regular application of imputation methods in immunization surveillance to improve health equity evaluations and shape targeted public health interventions and programming.},
}
MeSH Terms:
Retrospective Studies
Humans
Female
Male
*Information Systems/statistics & numerical data
*Demography/statistics & numerical data
Child, Preschool
Cohort Studies
Adolescent
Child
*Immunization/statistics & numerical data
Adult
Infant
Middle Aged
RevDate: 2025-08-26
CmpDate: 2025-08-26
Modular and cloud-based bioinformatics pipelines for high-confidence biomarker detection in cancer immunotherapy clinical trials.
PloS one, 20(8):e0330827 pii:PONE-D-25-08135.
BACKGROUND: The Cancer Immune Monitoring and Analysis Centers - Cancer Immunologic Data Center (CIMAC-CIDC) network aims to improve cancer immunotherapy by providing harmonized molecular assays and standardized bioinformatics analysis.
RESULTS: In response to evolving bioinformatics standards and the migration of the CIDC to the National Cancer Institute (NCI), we undertook the enhancement of the CIDC's extant whole exome sequencing (WES) and RNA sequencing (RNA-Seq) pipelines. Leveraging open-source tools and cloud-based technologies, we implemented modular workflows using Snakemake and Docker for efficient deployment on the Google Cloud Platform (GCP). Benchmarking analyses demonstrate improved reproducibility, precision, and recall across validated truth sets for variant calling, transcript quantification, and fusion detection.
CONCLUSION: This work establishes a scalable framework for harmonized multi-omic analyses, ensuring the continuity and reliability of bioinformatics workflows in multi-site clinical research aimed at advancing cancer biomarker discovery and personalized medicine.
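The modular, containerized pattern described here can be sketched in a few lines of Python: each pipeline stage runs inside a pinned Docker image against a mounted working directory. This illustrates the general idea only, not the CIDC Snakemake workflows; the image and the placeholder commands are assumptions.
import subprocess
from pathlib import Path

def run_step(image, command, workdir):
    """Run one pipeline stage in a container; the host workdir is mounted at /data."""
    workdir = Path(workdir).resolve()
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{workdir}:/data", "-w", "/data",
         image, "bash", "-c", command],
        check=True,
    )

# Hypothetical two-stage fragment; real stages would be alignment, variant calling, etc.
run_step("python:3.12-slim", "echo 'stage 1: align reads (placeholder)'", "./work")
run_step("python:3.12-slim", "echo 'stage 2: call variants (placeholder)'", "./work")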
Additional Links: PMID-40857351
Citation:
@article {pmid40857351,
year = {2025},
author = {Nguyen, C and Nguyen, T and Trivitt, G and Capaldo, B and Yan, C and Chen, Q and Renzette, N and Topaloglu, U and Meerzaman, D},
title = {Modular and cloud-based bioinformatics pipelines for high-confidence biomarker detection in cancer immunotherapy clinical trials.},
journal = {PloS one},
volume = {20},
number = {8},
pages = {e0330827},
doi = {10.1371/journal.pone.0330827},
pmid = {40857351},
issn = {1932-6203},
mesh = {Humans ; *Computational Biology/methods ; *Neoplasms/therapy/genetics/immunology ; *Biomarkers, Tumor/genetics ; *Immunotherapy/methods ; *Cloud Computing ; Clinical Trials as Topic ; Exome Sequencing ; Reproducibility of Results ; },
abstract = {BACKGROUND: The Cancer Immune Monitoring and Analysis Centers - Cancer Immunologic Data Center (CIMAC-CIDC) network aims to improve cancer immunotherapy by providing harmonized molecular assays and standardized bioinformatics analysis.
RESULTS: In response to evolving bioinformatics standards and the migration of the CIDC to the National Cancer Institute (NCI), we undertook the enhancement of the CIDC's extant whole exome sequencing (WES) and RNA sequencing (RNA-Seq) pipelines. Leveraging open-source tools and cloud-based technologies, we implemented modular workflows using Snakemake and Docker for efficient deployment on the Google Cloud Platform (GCP). Benchmarking analyses demonstrate improved reproducibility, precision, and recall across validated truth sets for variant calling, transcript quantification, and fusion detection.
CONCLUSION: This work establishes a scalable framework for harmonized multi-omic analyses, ensuring the continuity and reliability of bioinformatics workflows in multi-site clinical research aimed at advancing cancer biomarker discovery and personalized medicine.},
}
MeSH Terms:
Humans
*Computational Biology/methods
*Neoplasms/therapy/genetics/immunology
*Biomarkers, Tumor/genetics
*Immunotherapy/methods
*Cloud Computing
Clinical Trials as Topic
Exome Sequencing
Reproducibility of Results
RevDate: 2025-08-25
Monitoring LULC dynamics and detecting transformation hotspots in Sylhet, Bangladesh (2000-2023) using Google Earth Engine.
Scientific reports, 15(1):31263.
Sylhet, located in the northeastern part of Bangladesh, is characterized by a unique topography and climatic conditions that make it susceptible to flash floods. The interplay of rapid urbanization and climatic variability has exacerbated these flood risks in recent years. Effective monitoring and planning of land use/land cover (LULC) are crucial strategies for mitigating these hazards. While former studies analyzed LULC in parts of Sylhet using traditional GIS approaches, no comprehensive, district-wide assessment has been carried out using long-term satellite data and cloud computing platforms. This study addresses that gap by applying Google Earth Engine (GEE) for an extensive analysis of LULC changes, transitions, and hot/cold spots across the district. Accordingly, this work investigates the LULC changes in Sylhet district over the past twenty-three years (2000-2023). Using satellite imagery from Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI), the LULC is classified in six selected years (2000, 2005, 2010, 2015, 2020, and 2023). A supervised machine learning algorithm, the Random Forest Classifier, is employed on the cloud computing platform Google Earth Engine to analyze LULC dynamics and detect changes. The Getis-Ord Gi[*] statistical model is applied to identify land transformation hot spot and cold spot areas. The results reveal a significant increase in built-up areas and a corresponding reduction in water bodies. Spatial analysis at the upazila level indicates urban expansion in every upazila, with the most substantial increase observed in Beani Bazar upazila, where urban areas expanded by approximately 1500%. Conversely, Bishwanath upazila experienced the greatest reduction in water bodies, with a decrease of about 90%. Sylhet Sadar upazila showed a 240% increase in urban areas and a 72% decrease in water bodies. According to hotspot analysis, Kanaighat upazila has the most amount of unchanging land at 7%, whereas Balaganj upazila has the largest amount of LULC transformation at 5.5%. Overall, the urban area in the Sylhet district has grown by approximately 300%, while water bodies have diminished by about 77%, reflecting trends of urbanization and river-filling. These findings underscore the necessity of ensuring adequate drainage infrastructure to decrease flash flood hazards in the Sylhet district and offer insightful information to relevant authorities, politicians, and water resource engineers.
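A minimal Google Earth Engine (Python API) sketch of the supervised Random Forest classification workflow described above follows. It assumes an already-authenticated Earth Engine session; the region rectangle, training points, and class codes are rough placeholders rather than the study's actual inputs.
import ee

ee.Initialize()                                           # assumes prior ee.Authenticate()
roi = ee.Geometry.Rectangle([91.6, 24.6, 92.5, 25.2])     # approximate Sylhet bounding box
bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']

image = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
         .filterBounds(roi)
         .filterDate('2023-01-01', '2023-12-31')
         .median()
         .select(bands))

# Placeholder labelled points (0 = built-up, 1 = water); a real study uses many samples.
samples = ee.FeatureCollection([
    ee.Feature(ee.Geometry.Point([91.87, 24.90]), {'lulc': 0}),
    ee.Feature(ee.Geometry.Point([91.70, 24.75]), {'lulc': 1}),
])

training = image.sampleRegions(collection=samples, properties=['lulc'], scale=30)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty='lulc', inputProperties=bands)
classified = image.classify(classifier)
print(classified.bandNames().getInfo())                   # ['classification']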
Additional Links: PMID-40855077
Citation:
@article {pmid40855077,
year = {2025},
author = {Nazmul Haque, SM and Uddin, MJ},
title = {Monitoring LULC dynamics and detecting transformation hotspots in sylhet, Bangladesh (2000-2023) using Google Earth Engine.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {31263},
pmid = {40855077},
issn = {2045-2322},
abstract = {Sylhet, located in the northeastern part of Bangladesh, is characterized by a unique topography and climatic conditions that make it susceptible to flash floods. The interplay of rapid urbanization and climatic variability has exacerbated these flood risks in recent years. Effective monitoring and planning of land use/land cover (LULC) are crucial strategies for mitigating these hazards. While former studies analyzed LULC in parts of Sylhet using traditional GIS approaches, no comprehensive, district-wide assessment has been carried out using long-term satellite data and cloud computing platforms. This study addresses that gap by applying Google Earth Engine (GEE) for an extensive analysis of LULC changes, transitions, and hot/cold spots across the district. Accordingly, this work investigates the LULC changes in Sylhet district over the past twenty-three years (2000-2023). Using satellite imagery from Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI), the LULC is classified in six selected years (2000, 2005, 2010, 2015, 2020, and 2023). A supervised machine learning algorithm, the Random Forest Classifier, is employed on the cloud computing platform Google Earth Engine to analyze LULC dynamics and detect changes. The Getis-Ord Gi[*] statistical model is applied to identify land transformation hot spot and cold spot areas. The results reveal a significant increase in built-up areas and a corresponding reduction in water bodies. Spatial analysis at the upazila level indicates urban expansion in every upazila, with the most substantial increase observed in Beani Bazar upazila, where urban areas expanded by approximately 1500%. Conversely, Bishwanath upazila experienced the greatest reduction in water bodies, with a decrease of about 90%. Sylhet Sadar upazila showed a 240% increase in urban areas and a 72% decrease in water bodies. According to hotspot analysis, Kanaighat upazila has the most amount of unchanging land at 7%, whereas Balaganj upazila has the largest amount of LULC transformation at 5.5%. Overall, the urban area in the Sylhet district has grown by approximately 300%, while water bodies have diminished by about 77%, reflecting trends of urbanization and river-filling. These findings underscore the necessity of ensuring adequate drainage infrastructure to decrease flash flood hazards in the Sylhet district and offer insightful information to relevant authorities, politicians, and water resource engineers.},
}
RevDate: 2025-08-25
On-device AI for climate-resilient farming with intelligent crop yield prediction using lightweight models on smart agricultural devices.
Scientific reports, 15(1):31195.
In recent years, applications of Artificial Intelligence (AI) have proliferated across many domains, and agricultural consumer electronics are no exception. These innovations have significantly enhanced the intelligence of agricultural processes, leading to increased efficiency and sustainability. This study introduces an intelligent crop yield prediction system that uses a Random Forest (RF) classifier to optimize water usage based on environmental factors. By integrating lightweight machine learning with consumer electronics such as sensors embedded in smart display devices, this work aims to improve water management and promote sustainable farming practices. For sustainable agriculture, irrigation water-use efficiency is enhanced by predicting optimal watering schedules, which reduces environmental impact and supports climate-resilient farming. The proposed lightweight model was trained on real-time agricultural data with minimal memory resources and achieved 90.1% accuracy in predicting crop yield suitability for the farmland, outperforming existing methods including an AI-enabled IoT model with mobile sensors and deep learning architectures (89%), LoRa-based systems (87.2%), and adaptive AI with self-learning techniques (88%). Deploying computationally efficient machine learning models such as random forests emphasizes real-time decision-making without depending on cloud computing. The performance and effectiveness of the proposed method are evaluated using prediction accuracy, which assesses how accurately the model predicts irrigation needs from the sensor data.
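The core prediction step can be illustrated with a compact scikit-learn Random Forest on synthetic sensor readings. This is a generic sketch of the technique, not the authors' model; the feature set and labelling rule are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Synthetic readings: temperature (°C), humidity (%), soil moisture (%).
X = rng.uniform([10, 20, 0], [45, 95, 60], size=(500, 3))
y = (X[:, 2] < 25).astype(int)          # toy rule: irrigate when soil moisture is low

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))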
Additional Links: PMID-40854942
Citation:
@article {pmid40854942,
year = {2025},
author = {Dhanaraj, RK and Maragatharajan, M and Sureshkumar, A and Balakannan, SP},
title = {On-device AI for climate-resilient farming with intelligent crop yield prediction using lightweight models on smart agricultural devices.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {31195},
pmid = {40854942},
issn = {2045-2322},
abstract = {In Recent time, with the utilization of Artificial Intelligence (AI), AI applications have proliferated across various domains where agricultural consumer electronics are no exception. These innovations have significantly enhanced the intelligence of agricultural processes, leading to increased efficiency and sustainability. This study introduces an intelligent crop yield prediction system that utilizes Random Forest (RF) classifier to optimize the usage of water based on environmental factors. By integrating lightweight machine learning with consumer electronics such as sensors connected inside the smart display devices, this work is aimed to amplify water management and promote sustainable farming practices. While focusing on the sustainable agriculture, the water usage efficiency in irrigation should be enhanced by predicting optimal watering schedules and it will reduce the environmental impact and support the climate resilient farming. The proposed lightweight model has been trained on real-time agricultural data with minimum memory resource in sustainability prediction and the model has achieved 90.1% accuracy in the detection of crop yield suitable for the farmland as well as outperformed the existing methods including AI-enabled IoT model with mobile sensors and deep learning architectures (89%), LoRa-based systems (87.2%), and adaptive AI with self-learning techniques (88%). The deployment of computationally efficient machine learning models like random forest algorithms will emphasis on real time decision making without depending on the cloud computing. The performance evaluation and effectiveness of the proposed method are estimated using the important parameter called prediction accuracy. The main goal of this parameter is to access how the AI model accurately predicts the irrigation needs based on the sensor data.},
}
RevDate: 2025-08-25
Cloud-Based Control System with Sensing and Actuating Textile-Based IoT Gloves for Telerehabilitation Applications.
Advanced intelligent systems (Weinheim an der Bergstrasse, Germany), 7(8):2400894.
Remote manipulation devices extend human capabilities over vast distances or in inaccessible environments, removing constraints between patients and treatment. The integration of therapeutic and assistive devices with the Internet of Things (IoT) has demonstrated high potential to develop and enhance intelligent rehabilitation systems in the e-health domain. Within such devices, soft robotic products distinguish themselves through their lightweight and adaptable characteristics, facilitating secure collaboration between humans and robots. The objective of this research is to combine a textile-based sensorized glove with an air-driven soft robotic glove, operated wirelessly using the developed control system architecture. The sensing glove equipped with capacitive sensors on each finger captures the movements of the medical staff's hand. Meanwhile, the pneumatic rehabilitation glove designed to aid patients affected by impaired hand function due to stroke, brain injury, or spinal cord injury replicates the movements of the medical personnel. The proposed artificial intelligence-based system detects finger gestures and actuates the pneumatic system, responding within an average response time of 48.4 ms. The evaluation of the system further in terms of accuracy and transmission quality metrics verifies the feasibility of the proposed system integrating textile gloves into IoT infrastructure, enabling remote motion sensing and actuation.
Additional Links: PMID-40852091
Citation:
@article {pmid40852091,
year = {2025},
author = {Ozlem, K and Gumus, C and Yilmaz, AF and Tuncay Atalay, A and Atalay, O and Ince, G},
title = {Cloud-Based Control System with Sensing and Actuating Textile-Based IoT Gloves for Telerehabilitation Applications.},
journal = {Advanced intelligent systems (Weinheim an der Bergstrasse, Germany)},
volume = {7},
number = {8},
pages = {2400894},
pmid = {40852091},
issn = {2640-4567},
abstract = {Remote manipulation devices extend human capabilities over vast distances or in inaccessible environments, removing constraints between patients and treatment. The integration of therapeutic and assistive devices with the Internet of Things (IoT) has demonstrated high potential to develop and enhance intelligent rehabilitation systems in the e-health domain. Within such devices, soft robotic products distinguish themselves through their lightweight and adaptable characteristics, facilitating secure collaboration between humans and robots. The objective of this research is to combine a textile-based sensorized glove with an air-driven soft robotic glove, operated wirelessly using the developed control system architecture. The sensing glove equipped with capacitive sensors on each finger captures the movements of the medical staff's hand. Meanwhile, the pneumatic rehabilitation glove designed to aid patients affected by impaired hand function due to stroke, brain injury, or spinal cord injury replicates the movements of the medical personnel. The proposed artificial intelligence-based system detects finger gestures and actuates the pneumatic system, responding within an average response time of 48.4 ms. The evaluation of the system further in terms of accuracy and transmission quality metrics verifies the feasibility of the proposed system integrating textile gloves into IoT infrastructure, enabling remote motion sensing and actuation.},
}
RevDate: 2025-08-25
Digital twin for personalized medicine development.
Frontiers in digital health, 7:1583466.
Digital Twin (DT) technology is revolutionizing healthcare by enabling real-time monitoring, predictive analytics, and highly personalized medical care. As a key innovation of Industry 4.0, DTs integrate advanced tools like artificial intelligence (AI), the Internet of Things (IoT), and machine learning (ML) to create dynamic, data-driven replicas of patients. These digital replicas allow simulations of disease progression, optimize diagnostics, and personalize treatment plans based on individual genetic and lifestyle profiles. This review explores the evolution, architecture, and enabling technologies of DTs, focusing on their transformative applications in personalized medicine (PM). While the integration of DTs offers immense potential to improve outcomes and efficiency in healthcare, challenges such as data privacy, system interoperability, and ethical concerns must be addressed. The paper concludes by highlighting future directions, where AI, cloud computing, and blockchain are expected to play a pivotal role in overcoming these limitations and advancing precision medicine.
Additional Links: PMID-40851640
Citation:
@article {pmid40851640,
year = {2025},
author = {Saratkar, SY and Langote, M and Kumar, P and Gote, P and Weerarathna, IN and Mishra, GV},
title = {Digital twin for personalized medicine development.},
journal = {Frontiers in digital health},
volume = {7},
number = {},
pages = {1583466},
pmid = {40851640},
issn = {2673-253X},
abstract = {Digital Twin (DT) technology is revolutionizing healthcare by enabling real-time monitoring, predictive analytics, and highly personalized medical care. As a key innovation of Industry 4.0, DTs integrate advanced tools like artificial intelligence (AI), the Internet of Things (IoT), and machine learning (ML) to create dynamic, data-driven replicas of patients. These digital replicas allow simulations of disease progression, optimize diagnostics, and personalize treatment plans based on individual genetic and lifestyle profiles. This review explores the evolution, architecture, and enabling technologies of DTs, focusing on their transformative applications in personalized medicine (PM). While the integration of DTs offers immense potential to improve outcomes and efficiency in healthcare, challenges such as data privacy, system interoperability, and ethical concerns must be addressed. The paper concludes by highlighting future directions, where AI, cloud computing, and blockchain are expected to play a pivotal role in overcoming these limitations and advancing precision medicine.},
}
RevDate: 2025-08-24
CmpDate: 2025-08-24
Handheld NIR spectroscopy for real-time on-site food quality and safety monitoring.
Advances in food and nutrition research, 115:293-389.
This chapter reviews the applications and future directions of portable near-infrared (NIR) spectroscopy in food analytics, with a focus on quality control, safety monitoring, and fraud detection. Portable NIR spectrometers are essential for real-time, non-destructive analysis of food composition, and their use is rapidly expanding across various stages of the food production chain-from agriculture and processing to retail and consumer applications. The functional design of miniaturized NIR spectrometers is examined, linking the technological diversity of these sensors to their application potential in specific roles within the food sector, while discussing challenges related to thermal stability, energy efficiency, and spectral accuracy. Current trends in data analysis, including chemometrics and artificial intelligence, are also highlighted, as the successful application of portable spectroscopy heavily depends on this key aspect of the analytical process. This discussion is based on recent literature, with a focus on the last five years, and addresses the application of portable NIR spectroscopy in food quality assessment and composition analysis, food safety and contaminant detection, and food authentication and fraud prevention. The chapter concludes that portable NIR spectroscopy has significantly enhanced food analytics over the past decade, with ongoing trends likely to lead to even wider adoption in the near future. Future challenges related to ultra-miniaturization and emerging consumer-oriented spectrometers emphasize the need for robust pre-calibrated models and the development of global models for key applications. The integration of NIR spectrometers with cloud computing, IoT, and machine learning is expected to drive advancements in real-time monitoring, predictive modeling, and data processing, fitting the growing demand for improved safety, quality, and fraud detection from the farm to the fork.
Additional Links: PMID-40850704
Citation:
@article {pmid40850704,
year = {2025},
author = {Beć, KB and Grabska, J and Huck, CW},
title = {Handheld NIR spectroscopy for real-time on-site food quality and safety monitoring.},
journal = {Advances in food and nutrition research},
volume = {115},
number = {},
pages = {293-389},
doi = {10.1016/bs.afnr.2025.01.002},
pmid = {40850704},
issn = {1043-4526},
mesh = {Spectroscopy, Near-Infrared/instrumentation/methods ; *Food Safety/methods ; *Food Quality ; *Food Analysis/methods/instrumentation ; Food Contamination/analysis ; Humans ; Quality Control ; },
abstract = {This chapter reviews the applications and future directions of portable near-infrared (NIR) spectroscopy in food analytics, with a focus on quality control, safety monitoring, and fraud detection. Portable NIR spectrometers are essential for real-time, non-destructive analysis of food composition, and their use is rapidly expanding across various stages of the food production chain-from agriculture and processing to retail and consumer applications. The functional design of miniaturized NIR spectrometers is examined, linking the technological diversity of these sensors to their application potential in specific roles within the food sector, while discussing challenges related to thermal stability, energy efficiency, and spectral accuracy. Current trends in data analysis, including chemometrics and artificial intelligence, are also highlighted, as the successful application of portable spectroscopy heavily depends on this key aspect of the analytical process. This discussion is based on recent literature, with a focus on the last five years, and addresses the application of portable NIR spectroscopy in food quality assessment and composition analysis, food safety and contaminant detection, and food authentication and fraud prevention. The chapter concludes that portable NIR spectroscopy has significantly enhanced food analytics over the past decade, with ongoing trends likely to lead to even wider adoption in the near future. Future challenges related to ultra-miniaturization and emerging consumer-oriented spectrometers emphasize the need for robust pre-calibrated models and the development of global models for key applications. The integration of NIR spectrometers with cloud computing, IoT, and machine learning is expected to drive advancements in real-time monitoring, predictive modeling, and data processing, fitting the growing demand for improved safety, quality, and fraud detection from the farm to the fork.},
}
MeSH Terms:
Spectroscopy, Near-Infrared/instrumentation/methods
*Food Safety/methods
*Food Quality
*Food Analysis/methods/instrumentation
Food Contamination/analysis
Humans
Quality Control
RevDate: 2025-08-21
CmpDate: 2025-08-21
A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing.
PloS one, 20(8):e0329669 pii:PONE-D-24-45416.
With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem. To address large-scale task scheduling in a cloud computing environment, this paper proposes a novel cloud task scheduling framework using hierarchical deep reinforcement learning (DRL). The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to a cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. Experiments demonstrate that it balances cost and performance: in low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, it effectively optimizes load balance, cost, and overdue time, achieving a 10% overall improvement. One potential shortcoming of the proposed framework is its complexity and computational overhead: implementing and maintaining a DRL-based scheduler requires significant computational resources and machine learning expertise. In addition, the continuous learning and updating of network parameters might introduce latency, which could impact real-time scheduling efficiency, and the framework's performance depends heavily on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.
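The two-level idea (first pick a VM cluster, then place the task on a VM inside it) can be shown with a toy scheduler. The Python sketch below substitutes a simple epsilon-greedy value table for the paper's deep RL agent; cluster names, loads, and the reward are hypothetical.
import random

random.seed(1)
clusters = {"low_cost": [0.2, 0.5], "high_perf": [0.1, 0.3, 0.6]}   # current VM loads
q = {name: 0.0 for name in clusters}                                # level-1 value estimates
epsilon, lr = 0.2, 0.1

def schedule(task_load):
    # Level 1: epsilon-greedy choice of a VM cluster.
    name = (random.choice(list(clusters)) if random.random() < epsilon
            else max(q, key=q.get))
    vms = clusters[name]
    # Level 2: least-loaded VM inside the chosen cluster.
    vm = min(range(len(vms)), key=lambda i: vms[i])
    vms[vm] += task_load
    reward = 1.0 - vms[vm]                    # toy reward: prefer low post-assignment load
    q[name] += lr * (reward - q[name])
    return name, vm

for load in [0.1, 0.2, 0.15, 0.3]:
    print(schedule(load))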
Additional Links: PMID-40839622
Citation:
@article {pmid40839622,
year = {2025},
author = {Cui, D and Peng, Z and Li, K and Li, Q and He, J and Deng, X},
title = {An novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing.},
journal = {PloS one},
volume = {20},
number = {8},
pages = {e0329669},
doi = {10.1371/journal.pone.0329669},
pmid = {40839622},
issn = {1932-6203},
mesh = {*Cloud Computing ; *Deep Learning ; Algorithms ; Humans ; Reinforcement Machine Learning ; },
abstract = {With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem.To address the problem of large-scale task scheduling in a cloud computing environment, this paper proposes a novel cloud task scheduling framework using hierarchical deep reinforcement learning (DRL) to address the challenges of large-scale task scheduling in cloud computing. The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to the cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environments by continuously learning and updating network parameters. Experiments demonstrate that it skillfully balances cost and performance. In low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, it effectively optimizes load balancing, cost, and overdue time, achieving a 10% overall improvement. The experimental results demonstrate that this approach effectively balances cost and performance, optimizing objectives such as load balance, cost, and overdue time. One potential shortcoming of the proposed hierarchical deep reinforcement learning (DRL) framework for cloud task scheduling is its complexity and computational overhead. Implementing and maintaining a DRL-based scheduler requires significant computational resources and expertise in machine learning. There are still shortcomings in the method used in this study. First, the continuous learning and updating of network parameters might introduce latency, which could impact real-time task scheduling efficiency. Furthermore, the framework's performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.},
}
MeSH Terms:
*Cloud Computing
*Deep Learning
Algorithms
Humans
Reinforcement Machine Learning
RevDate: 2025-08-20
A scalable machine learning strategy for resource allocation in database.
Scientific reports, 15(1):30567.
Modern cloud computing systems require intelligent resource allocation strategies that balance quality-of-service (QoS), operational costs, and energy sustainability. Existing deep Q-learning (DQN) methods suffer from sample inefficiency, centralization bottlenecks, and reactive decision-making during workload spikes. Transformer-based forecasting models such as Temporal Fusion Transformer (TFT) offer improved accuracy but introduce computational overhead, limiting real-time deployment. We propose LSTM-MARL-Ape-X, a novel framework integrating bidirectional Long Short-Term Memory (BiLSTM) for workload forecasting with Multi-Agent Reinforcement Learning (MARL) in a distributed Ape-X architecture. This approach enables proactive, decentralized, and scalable resource management through three innovations: high-accuracy forecasting using BiLSTM with feature-wise attention, variance-regularized credit assignment for stable multi-agent coordination, and faster convergence via adaptive prioritized replay. Experimental validation on real-world traces demonstrates 94.6% SLA compliance, 22% reduction in energy consumption, and linear scalability to over 5,000 nodes with sub-100 ms decision latency. The framework converges 3.2× faster than uniform sampling baselines and outperforms transformer-based models in both accuracy and inference speed. Unlike decoupled prediction-action frameworks, our method provides end-to-end optimization, enabling robust and sustainable cloud orchestration at scale.
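The forecasting half of the pipeline can be illustrated with a minimal bidirectional LSTM in PyTorch that maps a window of past utilization values to a next-step prediction. This is a generic sketch under assumed tensor shapes, not the paper's architecture.
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)      # concatenated forward/backward states

    def forward(self, x):                         # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])           # next-step utilization estimate

model = BiLSTMForecaster()
window = torch.rand(8, 24, 1)                     # 8 series, 24 past steps each
print(model(window).shape)                        # torch.Size([8, 1])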
Additional Links: PMID-40835668
Citation:
@article {pmid40835668,
year = {2025},
author = {Manhary, FN and Mohamed, MH and Farouk, M},
title = {A scalable machine learning strategy for resource allocation in database.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {30567},
pmid = {40835668},
issn = {2045-2322},
abstract = {Modern cloud computing systems require intelligent resource allocation strategies that balance quality-of-service (QoS), operational costs, and energy sustainability. Existing deep Q-learning (DQN) methods suffer from sample inefficiency, centralization bottlenecks, and reactive decision-making during workload spikes. Transformer-based forecasting models such as Temporal Fusion Transformer (TFT) offer improved accuracy but introduce computational overhead, limiting real-time deployment. We propose LSTM-MARL-Ape-X, a novel framework integrating bidirectional Long Short-Term Memory (BiLSTM) for workload forecasting with Multi-Agent Reinforcement Learning (MARL) in a distributed Ape-X architecture. This approach enables proactive, decentralized, and scalable resource management through three innovations: high-accuracy forecasting using BiLSTM with feature-wise attention, variance-regularized credit assignment for stable multi-agent coordination, and faster convergence via adaptive prioritized replay. Experimental validation on real-world traces demonstrates 94.6% SLA compliance, 22% reduction in energy consumption, and linear scalability to over 5,000 nodes with sub-100 ms decision latency. The framework converges 3.2× faster than uniform sampling baselines and outperforms transformer-based models in both accuracy and inference speed. Unlike decoupled prediction-action frameworks, our method provides end-to-end optimization, enabling robust and sustainable cloud orchestration at scale.},
}
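As an orientation aid for the forecasting component described above, the following minimal Python sketch shows a bidirectional LSTM with a simple feature-wise attention gate, assuming PyTorch. The layer sizes, the form of the attention, and the random input are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): a BiLSTM workload forecaster with a
# simple feature-wise attention gate, assuming PyTorch is available.
import torch
import torch.nn as nn

class BiLSTMForecaster(nn.Module):
    def __init__(self, n_features=4, hidden=64, horizon=1):
        super().__init__()
        # Feature-wise attention: learn a softmax weight per input feature.
        self.feature_attn = nn.Linear(n_features, n_features)
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, horizon)

    def forward(self, x):                       # x: (batch, time, features)
        weights = torch.softmax(self.feature_attn(x), dim=-1)
        out, _ = self.lstm(x * weights)         # re-weight features, then BiLSTM
        return self.head(out[:, -1, :])         # forecast from the last time step

# Illustrative usage: predict next-step demand from a 32-step metric window.
model = BiLSTMForecaster()
window = torch.randn(8, 32, 4)                  # 8 nodes, 32 steps, 4 metrics
print(model(window).shape)                      # torch.Size([8, 1])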
RevDate: 2025-08-19
Design and evaluation of next-generation HIV genotyping for detection of resistance mutations to 28 antiretroviral drugs across five major classes including lenacapavir.
Clinical infectious diseases : an official publication of the Infectious Diseases Society of America pii:8237671 [Epub ahead of print].
BACKGROUND: The emergence and spread of HIV drug-resistant strains present a major barrier to effective lifelong Antiretroviral Therapy (ART). The anticipated rise in long-acting subcutaneous lenacapavir (LEN) use, along with the increased risk of transmitted resistance and Pre-Exposure Prophylaxis (PrEP)-associated resistance, underscores the urgent need for advanced genotyping methods to enhance clinical care and prevention strategies.
METHODS: We developed the Portable HIV Genotyping (PHG) platform which combines cost-effective next-generation sequencing with cloud computing to screen for resistance to 28 antiretroviral drugs across five major classes, including LEN. We analyzed three study cohorts and compared our drug resistance findings against standard care testing results and high-fidelity sequencing data obtained through unique molecular identifier (UMI) labeling.
RESULTS: PHG identified two major LEN-resistance mutations in one participant, confirmed by an additional independent sequencing run. Across three study cohorts, PHG consistently detected the same drug resistance mutations as standard care genotyping and high-fidelity UMI-labeling in most tested specimens. PHG's 10% limit of detection minimized false positives and enabled identification of minority variants less than 20% frequency, pointing to underdiagnosis of drug resistance in clinical care. Furthermore, PHG identified linked cross-class resistance mutations, confirmed by UMI-labeling, including linked cross-resistance in a participant who reported use of long-acting cabotegravir (CAB) and rilpivirine (RPV). We also observed multi-year persistence of linked cross-class resistance mutations.
CONCLUSIONS: PHG demonstrates significant improvements over standard care HIV genotyping, offering deeper insights into LEN-resistance, minority variants, and cross-class resistance using a low-cost high-throughput portable sequencing technology and publicly available cloud computing.
Additional Links: PMID-40826811
@article {pmid40826811,
year = {2025},
author = {Park, SY and Takayama, C and Ryu, J and Sattah, M and Badii, Z and Kim, JW and Shafer, RW and Gorbach, PM and Lee, HY},
title = {Design and evaluation of next-generation HIV genotyping for detection of resistance mutations to 28 antiretroviral drugs across five major classes including lenacapavir.},
journal = {Clinical infectious diseases : an official publication of the Infectious Diseases Society of America},
volume = {},
number = {},
pages = {},
doi = {10.1093/cid/ciaf458},
pmid = {40826811},
issn = {1537-6591},
}
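To make the reported 10% limit of detection concrete, the toy Python snippet below applies such a threshold to allele counts; the mutation names and read counts are invented for illustration and do not come from the study.

# Illustrative only: flag drug-resistance mutations whose read support meets a
# 10% limit of detection, as a stand-in for one step of minority-variant calling.
def call_minority_variants(allele_counts, total_reads, lod=0.10):
    """allele_counts: {mutation_name: supporting_reads}; returns calls >= LOD."""
    calls = {}
    for mutation, reads in allele_counts.items():
        freq = reads / total_reads if total_reads else 0.0
        if freq >= lod:
            calls[mutation] = round(freq, 3)
    return calls

# Hypothetical counts at one position (not real data).
print(call_minority_variants({"M66I": 180, "Q67H": 45}, total_reads=1200))
# {'M66I': 0.15} -- Q67H at 3.75% falls below the 10% detection limit.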
RevDate: 2025-08-17
A Blockchain-Based Secure Data Transaction and Privacy Preservation Scheme in IoT System.
Sensors (Basel, Switzerland), 25(15):.
With the explosive growth of Internet of Things (IoT) devices, massive amounts of heterogeneous data are continuously generated. However, IoT data transactions and sharing face multiple challenges such as limited device resources, untrustworthy network environment, highly sensitive user privacy, and serious data silos. How to achieve fine-grained access control and privacy protection for massive devices while ensuring secure and reliable data circulation has become a key issue that needs to be urgently addressed in the current IoT field. To address the above challenges, this paper proposes a blockchain-based data transaction and privacy protection framework. First, the framework builds a multi-layer security architecture that integrates blockchain and IPFS and adapts to the "end-edge-cloud" collaborative characteristics of IoT. Secondly, a data sharing mechanism that takes into account both access control and interest balance is designed. On the one hand, the mechanism uses attribute-based encryption (ABE) technology to achieve dynamic and fine-grained access control for massive heterogeneous IoT devices; on the other hand, it introduces a game theory-driven dynamic pricing model to effectively balance the interests of both data supply and demand. Finally, in response to the needs of confidential analysis of IoT data, a secure computing scheme based on CKKS fully homomorphic encryption is proposed, which supports efficient statistical analysis of encrypted sensor data without leaking privacy. Security analysis and experimental results show that this scheme is secure under standard cryptographic assumptions and can effectively resist common attacks in the IoT environment. Prototype system testing verifies the functional completeness and performance feasibility of the scheme, providing a complete and effective technical solution to address the challenges of data integrity, verifiable transactions, and fine-grained access control, while mitigating the reliance on a trusted central authority in IoT data sharing.
Additional Links: PMID-40808017
@article {pmid40808017,
year = {2025},
author = {Wu, J and Bian, Z and Gao, H and Wang, Y},
title = {A Blockchain-Based Secure Data Transaction and Privacy Preservation Scheme in IoT System.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {15},
pages = {},
pmid = {40808017},
issn = {1424-8220},
support = {Grant 809 No.U24B20146//National Natural Science Foundation of China/ ; No.M21034//Beijing Natural Science/ ; GrantNo.2020YFB1005503//the National Key Research and Development Plan in China/ ; },
}
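For readers unfamiliar with CKKS-style computation on encrypted data, the sketch below shows the general idea of computing a mean without decrypting, assuming the open-source TenSEAL library; the parameters and sensor values are illustrative and unrelated to the authors' scheme.

# Conceptual sketch of CKKS-style encrypted statistics, assuming the open-source
# TenSEAL library (pip install tenseal); parameters are illustrative only.
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

readings = [21.4, 21.9, 22.1, 21.7]            # hypothetical sensor values
enc = ts.ckks_vector(context, readings)        # encrypted on the device side

enc_mean = enc.sum() * (1.0 / len(readings))   # computed without decryption
print(enc_mean.decrypt())                      # approx. [21.775]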
RevDate: 2025-08-18
Extrachromosomal DNA associates with poor survival across a broad spectrum of childhood solid tumors.
medRxiv : the preprint server for health sciences.
Circular extrachromosomal DNA (ecDNA) is a common form of oncogene amplification in aggressive cancers. The frequency and diversity of ecDNA has been catalogued in adult and some childhood cancers; however, its role in most pediatric cancers is not well-understood. To address this gap, we accessed large pediatric cancer genomics data repositories and identified ecDNA from whole genome sequencing data using cloud computing. This retrospective cohort comprises 3,631 solid tumor biopsies from 2,968 patients covering all major childhood solid tumor types. Aggressive tumor types had particularly high incidences of ecDNA. Pediatric patients whose tumors harbored extrachromosomal DNA had significantly poorer five-year overall survival than children whose tumors contained only chromosomal amplifications. We catalogue known and potentially novel oncogenes recurrently amplified on ecDNA and show that ecDNA often evolves during disease progression. These results highlight patient populations that could potentially benefit from future ecDNA-directed therapies. To facilitate discovery, we developed an interactive catalogue of ecDNA in childhood cancer at https://ccdi-ecdna.org/.
Additional Links: PMID-40778132
@article {pmid40778132,
year = {2025},
author = {Chapman, OS and Sridhar, S and Chow, EY and Kenkre, R and Kirkland, J and Dutta, A and Wang, S and Zhang, W and Brown, M and Luebeck, J and Lo, YY and Rodriguez-Fos, E and Henssen, AG and Okonechnikov, K and Ghasemi, DR and Pajtler, KW and Kawauchi, D and Bafna, V and Paul, M and Yip, K and Mesirov, JP and Chavez, L},
title = {Extrachromosomal DNA associates with poor survival across a broad spectrum of childhood solid tumors.},
journal = {medRxiv : the preprint server for health sciences},
volume = {},
number = {},
pages = {},
pmid = {40778132},
support = {U01 CA253547/CA/NCI NIH HHS/United States ; U24 CA264379/CA/NCI NIH HHS/United States ; T15 LM011271/LM/NLM NIH HHS/United States ; P30 CA051008/CA/NCI NIH HHS/United States ; R21 NS120075/NS/NINDS NIH HHS/United States ; F31 CA271777/CA/NCI NIH HHS/United States ; R21 NS130137/NS/NINDS NIH HHS/United States ; P30 CA030199/CA/NCI NIH HHS/United States ; R01 NS132780/NS/NINDS NIH HHS/United States ; },
}
RevDate: 2023-11-10
Rapid Response to Drive COVID-19 Research in a Learning Health Care System: Rationale and Design of the Houston Methodist COVID-19 Surveillance and Outcomes Registry (CURATOR).
JMIR medical informatics, 9(2):e26773.
BACKGROUND: The COVID-19 pandemic has exacerbated the challenges of meaningful health care digitization. The need for rapid yet validated decision-making requires robust data infrastructure. Organizations with a focus on learning health care (LHC) systems tend to adapt better to rapidly evolving data needs. Few studies have demonstrated a successful implementation of data digitization principles in an LHC context across health care systems during the COVID-19 pandemic.
OBJECTIVE: We share our experience and provide a framework for assembling and organizing multidisciplinary resources, structuring and regulating research needs, and developing a single source of truth (SSoT) for COVID-19 research by applying fundamental principles of health care digitization, in the context of LHC systems across a complex health care organization.
METHODS: Houston Methodist (HM) comprises eight tertiary care hospitals and an expansive primary care network across Greater Houston, Texas. During the early phase of the pandemic, institutional leadership envisioned the need to streamline COVID-19 research and established the retrospective research task force (RRTF). We describe an account of the structure, functioning, and productivity of the RRTF. We further elucidate the technical and structural details of a comprehensive data repository-the HM COVID-19 Surveillance and Outcomes Registry (CURATOR). We particularly highlight how CURATOR conforms to standard health care digitization principles in the LHC context.
RESULTS: The HM COVID-19 RRTF comprises expertise in epidemiology, health systems, clinical domains, data sciences, information technology, and research regulation. The RRTF initially convened in March 2020 to prioritize and streamline COVID-19 observational research; to date, it has reviewed over 60 protocols and made recommendations to the institutional review board (IRB). The RRTF also established the charter for CURATOR, which in itself was IRB-approved in April 2020. CURATOR is a relational structured query language database that is directly populated with data from electronic health records, via largely automated extract, transform, and load procedures. The CURATOR design enables longitudinal tracking of COVID-19 cases and controls before and after COVID-19 testing. CURATOR has been set up following the SSoT principle and is harmonized across other COVID-19 data sources. CURATOR eliminates data silos by leveraging unique and disparate big data sources for COVID-19 research and provides a platform to capitalize on institutional investment in cloud computing. It currently hosts deeply phenotyped sociodemographic, clinical, and outcomes data of approximately 200,000 individuals tested for COVID-19. It supports more than 30 IRB-approved protocols across several clinical domains and has generated numerous publications from its core and associated data sources.
CONCLUSIONS: A data-driven decision-making strategy is paramount to the success of health care organizations. Investment in cross-disciplinary expertise, health care technology, and leadership commitment are key ingredients to foster an LHC system. Such systems can mitigate the effects of ongoing and future health care catastrophes by providing timely and validated decision support.
Additional Links: PMID-33544692
@article {pmid33544692,
year = {2021},
author = {Vahidy, F and Jones, SL and Tano, ME and Nicolas, JC and Khan, OA and Meeks, JR and Pan, AP and Menser, T and Sasangohar, F and Naufal, G and Sostman, D and Nasir, K and Kash, BA},
title = {Rapid Response to Drive COVID-19 Research in a Learning Health Care System: Rationale and Design of the Houston Methodist COVID-19 Surveillance and Outcomes Registry (CURATOR).},
journal = {JMIR medical informatics},
volume = {9},
number = {2},
pages = {e26773},
pmid = {33544692},
issn = {2291-9694},
}
RevDate: 2023-11-11
CmpDate: 2016-10-17
Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.
GigaScience, 5:12.
Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.
Additional Links: PMID-26918190
@article {pmid26918190,
year = {2016},
author = {Dinov, ID},
title = {Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.},
journal = {GigaScience},
volume = {5},
number = {},
pages = {12},
pmid = {26918190},
issn = {2047-217X},
support = {P30 AG053760/AG/NIA NIH HHS/United States ; U54 EB020406/EB/NIBIB NIH HHS/United States ; P20 NR015331/NR/NINR NIH HHS/United States ; P30 DK089503/DK/NIDDK NIH HHS/United States ; P50 NS091856/NS/NINDS NIH HHS/United States ; },
mesh = {Computational Biology/*methods ; Delivery of Health Care/*statistics & numerical data ; Humans ; *Models, Theoretical ; Neuroimaging/statistics & numerical data ; Principal Component Analysis ; Reproducibility of Results ; *Software ; },
}
MeSH Terms:
Computational Biology/*methods
Delivery of Health Care/*statistics & numerical data
Humans
*Models, Theoretical
Neuroimaging/statistics & numerical data
Principal Component Analysis
Reproducibility of Results
*Software
RevDate: 2025-08-18
Light use efficiency (LUE) based bimonthly gross primary productivity (GPP) for global grasslands at 30 m spatial resolution (2000-2022).
PeerJ, 13:e19774 pii:19774.
The article describes the production of a high spatial resolution (30 m) bimonthly light use efficiency (LUE) based gross primary productivity (GPP) data set representing grasslands for the period 2000 to 2022. The data set is based on a reconstructed, globally complete and consistent bimonthly Landsat archive (400 TB of data), combined with 1 km MOD11A1 temperature data and 1° CERES Photosynthetically Active Radiation (PAR). First, the LUE model was implemented by taking the biome-specific productivity factor (maximum LUE parameter) as a global constant, producing global bimonthly (uncalibrated) productivity data for the complete land mask. Second, the 30 m bimonthly GPP maps were derived from the global grassland annual predictions by calibrating the values with the maximum LUE factor of 0.86 gCm[-2]d[-1]MJ[-1]. Validation of the produced GPP estimates against 527 eddy covariance flux towers shows an R-square between 0.48 and 0.71 and a root mean square error (RMSE) below ~2.3 gCm[-2]d[-1] for all land cover classes. Using a total of 92 flux towers located in grasslands, validation of the GPP product calibrated for the grassland biome revealed an R-square between 0.51 and 0.70 and an RMSE smaller than ~2 gCm[-2]d[-1]. The final time series of maps (uncalibrated and grassland GPP) are available as bimonthly (daily estimates in units of gCm[-2]d[-1]) and annual (daily average accumulated over 365 days in units of gCm[-2]yr[-1]) Cloud-Optimized GeoTIFFs (~23 TB in size) as open data (CC-BY license). Recommended uses of the data include trend analysis (e.g., to determine where the largest losses in GPP occur, which could indicate potential land degradation), crop yield mapping, and modeling GHG fluxes at finer spatial resolution. Produced maps are available via SpatioTemporal Asset Catalog (http://stac.openlandmap.org) and Google Earth Engine.
Additional Links: PMID-40821997
@article {pmid40821997,
year = {2025},
author = {Isik, MS and Parente, L and Consoli, D and Sloat, L and Mesquita, VV and Ferreira, LG and Sabbatini, S and Stanimirova, R and Teles, NM and Robinson, N and Costa Junior, C and Hengl, T},
title = {Light use efficiency (LUE) based bimonthly gross primary productivity (GPP) for global grasslands at 30 m spatial resolution (2000-2022).},
journal = {PeerJ},
volume = {13},
number = {},
pages = {e19774},
doi = {10.7717/peerj.19774},
pmid = {40821997},
issn = {2167-8359},
}
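A generic light-use-efficiency calculation helps interpret the cited maximum LUE factor of 0.86 gCm[-2]d[-1]MJ[-1]; the formulation and scalar values below are a simplified illustration and may differ from the exact model used in the paper.

# A generic light-use-efficiency calculation (not the exact model in the paper):
# GPP = eps_max * PAR * fAPAR * Ts, with eps_max = 0.86 gC m-2 d-1 MJ-1 as cited.
def gpp_lue(par_mj_m2_d, fapar, temp_scalar, eps_max=0.86):
    """Daily GPP in gC m-2 d-1 from PAR (MJ m-2 d-1) and unitless 0-1 scalars."""
    return eps_max * par_mj_m2_d * fapar * temp_scalar

# Hypothetical grassland pixel: 9 MJ m-2 d-1 of PAR, fAPAR 0.55, mild temperature limit.
print(round(gpp_lue(9.0, 0.55, 0.9), 2))   # ~3.83 gC m-2 d-1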
RevDate: 2025-08-17
Enhancing cloud security and deduplication efficiency with SALIGP and cryptographic authentication.
Scientific reports, 15(1):30112.
Cloud computing enables data storage and application deployment over the internet, offering benefits such as mobility, resource pooling, and scalability. However, it also presents major challenges, particularly in managing shared resources, ensuring data security, and controlling distributed applications in the absence of centralized oversight. One key issue is data duplication, which leads to inefficient storage, increased costs, and potential privacy and security risks. To address these challenges, this study proposes a post-quantum mechanism that enhances both cloud security and deduplication efficiency. The proposed SALIGP method leverages Genetic Programming and a Geometric Approach, integrating Bloom Filters for efficient duplication detection. The Cryptographic Deduplication Authentication Scheme (CDAS) is introduced, which utilizes blockchain technology to securely store and retrieve files, while ensuring that encrypted access is limited to authorized users. This dual-layered approach effectively resolves the issue of redundant data in dynamic, distributed cloud environments. Experimental results demonstrate that the proposed method significantly reduces computation and communication times at various network nodes, particularly in key generation and group operations. Encrypting user data prior to outsourcing ensures enhanced privacy protection during the deduplication process. Overall, the proposed system leads to substantial improvements in cloud data security, reliability, and storage efficiency, offering a scalable and secure framework for modern cloud computing environments.
Additional Links: PMID-40820020
@article {pmid40820020,
year = {2025},
author = {Periasamy, JK and Prabhakar, S and Vanathi, A and Yu, L},
title = {Enhancing cloud security and deduplication efficiency with SALIGP and cryptographic authentication.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {30112},
pmid = {40820020},
issn = {2045-2322},
}
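Because the abstract highlights Bloom filters for duplicate detection, a minimal pure-Python Bloom filter is sketched below for orientation; it is a generic illustration of the data structure, not the SALIGP implementation.

# Minimal Bloom-filter sketch for duplicate detection (illustrative, not SALIGP):
# hash each chunk k times and set bits; a chunk whose bits are all set is a
# probable duplicate (false positives are possible, false negatives are not).
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size, self.k = size_bits, num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, data: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(2, "big") + data).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, data: bytes):
        for p in self._positions(data):
            self.bits[p // 8] |= 1 << (p % 8)

    def probably_contains(self, data: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(data))

bf = BloomFilter()
bf.add(b"chunk-A")
print(bf.probably_contains(b"chunk-A"), bf.probably_contains(b"chunk-B"))  # True False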
RevDate: 2025-08-18
Long-term Land Cover Dataset of the Mongolian Plateau Based on Multi-source Data and Rich Sample Annotations.
Scientific data, 12(1):1434.
The Mongolian Plateau (MP), with its unique geographical landscape and nomadic cultural features, is vital to regional ecological security and sustainable development in North Asia. Existing global land cover products often lack the classification specificity and temporal continuity required for MP-specific studies, particularly for grassland and bare area subtypes. To address this gap, a new land cover classification was designed for the MP, which includes 14 categories: forests, shrubs, meadows, real steppes, dry steppes, desert steppes, wetlands, water, croplands, built-up land, barren land, desert, sand, and ice. Using machine learning and cloud computing, a novel dataset spanning the period 1990-2020 was produced. A Random Forest algorithm was employed to integrate training samples with multisource features for land cover classification, and a two-step Random Forest classification strategy was applied to improve detailed land cover results in transition regions. This process involved accurately annotating 64,345 sample points within a gridded framework. The resulting dataset achieved an overall accuracy of 83.6%. This land cover product and its approach have potential for application in vast arid and semi-arid areas.
Additional Links: PMID-40817336
@article {pmid40817336,
year = {2025},
author = {Wang, J and Li, K and Han, T and Sun, Y and Hong, M and Shao, Y and Sun, Z and Liu, M and Li, F and Su, Y and Jia, Q and Liu, Y and Liu, J and Jiang, J and Ochir, A and Davaasuren, D and Xu, M and Sun, Y and Huang, S and Zou, W and Sun, F},
title = {Long-term Land Cover Dataset of the Mongolian Plateau Based on Multi-source Data and Rich Sample Annotations.},
journal = {Scientific data},
volume = {12},
number = {1},
pages = {1434},
pmid = {40817336},
issn = {2052-4463},
}
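The two-step Random Forest idea can be illustrated with scikit-learn, as in the sketch below; the features, labels, and class codes are synthetic placeholders rather than the authors' training data.

# Illustrative two-step Random Forest idea (not the authors' pipeline), assuming
# scikit-learn: a first model separates broad classes, a second refines the
# grassland subtypes; features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))                    # e.g. spectral/terrain features
broad = rng.integers(0, 3, 600)                  # 0 forest, 1 grassland, 2 other
sub = rng.integers(0, 3, 600)                    # grassland subtype labels

step1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, broad)
grass_mask = broad == 1
step2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(
    X[grass_mask], sub[grass_mask])              # refine only grassland pixels

pixels = rng.normal(size=(5, 6))
pred = step1.predict(pixels)
pred_sub = step2.predict(pixels[pred == 1]) if (pred == 1).any() else []
print(pred, pred_sub)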
RevDate: 2025-08-14
GenMPI: Cluster Scalable Variant Calling for Short/Long Reads Sequencing Data.
IEEE transactions on computational biology and bioinformatics, PP: [Epub ahead of print].
Rapid technological advancements in sequencing technologies allow the production of cost-effective, high-volume sequencing data. Processing this data for real-time clinical diagnosis is potentially time-consuming if done on a single computing node. This work presents a complete variant calling workflow, implemented using the Message Passing Interface (MPI) to leverage the benefits of high-bandwidth interconnects. This solution (GenMPI) is portable and flexible, meaning it can be deployed to any private or public cluster/cloud infrastructure. Any alignment or variant calling application can be used with minimal adaptation. To achieve high performance, compressed input data can be streamed in parallel to alignment applications, while uncompressed data can use internal file-seek functionality to eliminate the bottleneck of streaming input data from a single node. Alignment output can be stored directly in multiple chromosome-specific SAM files or in a single SAM file. After alignment, a distributed queue using MPI RMA (Remote Memory Access) atomic operations is created for sorting, indexing, marking of duplicates (if necessary), and variant calling applications. We ensure the accuracy of variants as compared to the original single-node methods. We also show that for 300x coverage data, alignment scales almost linearly up to 64 nodes (8192 CPU cores). Overall, this work outperforms existing big data based workflows by a factor of two and is almost 20% faster than other MPI-based implementations for alignment, without any extra memory overhead. Sorting, indexing, duplicate removal, and variant calling also scale up to an 8-node cluster. For paired-end short-read (Illumina) data, we integrated the BWA-MEM aligner and three variant callers (GATK HaplotypeCaller, DeepVariant and Octopus), while for long-read data, we integrated the Minimap2 aligner and three different variant callers (DeepVariant, DeepVariant with WhatsHap for phasing (PacBio) and Clair3 (ONT)). All codes and scripts are available at: https://github.com/abs-tudelft/gen-mpi.
Additional Links: PMID-40811182
@article {pmid40811182,
year = {2025},
author = {Ahmad, T and Schuchart, J and Al Ars, Z and Niethammer, C and Gracia, J and Hofstee, HP},
title = {GenMPI: Cluster Scalable Variant Calling for Short/Long Reads Sequencing Data.},
journal = {IEEE transactions on computational biology and bioinformatics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TCBBIO.2025.3595409},
pmid = {40811182},
issn = {2998-4165},
}
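The general scatter/gather pattern behind MPI-based genomics workflows of this kind can be sketched with mpi4py as follows; the chunking scheme and the placeholder alignment step are assumptions for illustration, not the GenMPI code.

# Sketch of the general MPI pattern (not the GenMPI code): rank 0 splits the
# input into chunks and scatters them; each rank aligns its chunk independently.
# Assumes mpi4py; run with e.g. `mpiexec -n 4 python align_chunks.py` (hypothetical file name).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    reads = [f"read_{i}" for i in range(1000)]           # placeholder read IDs
    chunks = [reads[i::size] for i in range(size)]       # round-robin split
else:
    chunks = None

my_reads = comm.scatter(chunks, root=0)                  # one chunk per rank
aligned = [f"{r}:aligned" for r in my_reads]             # stand-in for BWA-MEM/Minimap2
counts = comm.gather(len(aligned), root=0)               # report back to rank 0
if rank == 0:
    print("aligned per rank:", counts)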
RevDate: 2025-08-17
Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence.
Sensors (Basel, Switzerland), 25(15):.
Unmanned platforms such as unmanned aerial vehicles, unmanned ground vehicles, and autonomous underwater vehicles often face challenges of data, device, and model heterogeneity when performing collaborative data processing tasks. Existing research does not simultaneously address issues from these three aspects. To address this issue, this study designs an unmanned platform cluster architecture inspired by the cloud-edge-end model. This architecture integrates federated learning for privacy protection, leverages the advantages of distributed model training, and utilizes edge computing's near-source data processing capabilities. Additionally, this paper proposes a federated edge intelligence method (DSIA-FEI), which comprises two key components. Based on traditional federated learning, a data sharing mechanism is introduced, in which data is extracted from edge-side platforms and placed into a data sharing platform to form a public dataset. At the beginning of model training, random sampling is conducted from the public dataset and distributed to each unmanned platform, so as to mitigate the impact of data distribution heterogeneity and class imbalance during collaborative data processing in unmanned platforms. Moreover, an intelligent model aggregation strategy based on similarity measurement and loss gradient is developed. This strategy maps heterogeneous model parameters to a unified space via hierarchical parameter alignment, and evaluates the similarity between local and global models of edge devices in real-time, along with the loss gradient, to select the optimal model for global aggregation, reducing the influence of device and model heterogeneity on cooperative learning of unmanned platform swarms. This study carried out extensive validation on multiple datasets, and the experimental results showed that the accuracy of the DSIA-FEI proposed in this paper reaches 0.91, 0.91, 0.88, and 0.87 on the FEMNIST, FEAIR, EuroSAT, and RSSCN7 datasets, respectively, which is more than 10% higher than the baseline method. In addition, the number of communication rounds is reduced by more than 40%, which is better than the existing mainstream methods, and the effectiveness of the proposed method is verified.
Additional Links: PMID-40807915
@article {pmid40807915,
year = {2025},
author = {Liu, S and Shan, N and Bao, X and Xu, X},
title = {Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {15},
pages = {},
pmid = {40807915},
issn = {1424-8220},
support = {62102436; 62406337; 2021CFB279; 202250E060; 614221722050603//National Natural Science Foundation of China under; Natural Science Foundation of Hubei Province; National Defence Science and Technology Key Laboratory Foundation Project/ ; },
}
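A toy version of similarity-guided model aggregation conveys the flavor of weighting client updates by their agreement with the global model; the cosine-similarity rule below is a simplified stand-in, not the DSIA-FEI aggregation strategy itself.

# Toy similarity-guided aggregation (not DSIA-FEI): weight each client update by
# its cosine similarity to the current global model, so highly divergent
# (heterogeneous) updates contribute less to the new global model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate(global_w, client_ws):
    sims = np.array([max(cosine(global_w, w), 0.0) for w in client_ws])
    if sims.sum() == 0:
        sims = np.ones_like(sims)
    weights = sims / sims.sum()
    return sum(w * cw for w, cw in zip(weights, client_ws))

global_w = np.ones(4)
clients = [np.array([1.1, 0.9, 1.0, 1.0]),      # well-aligned update
           np.array([-1.0, -1.0, -1.0, -1.0])]  # divergent/heterogeneous update
print(aggregate(global_w, clients))             # dominated by the aligned update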
RevDate: 2025-08-17
An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing.
Sensors (Basel, Switzerland), 25(15):.
Workflow scheduling in cloud computing is attracting increasing attention. Cloud computing can assign tasks to available virtual machine resources in cloud data centers according to scheduling strategies, providing a powerful computing platform for the execution of workflow tasks. However, developing effective workflow scheduling algorithms to find optimal or near-optimal task-to-VM allocation solutions that meet users' specific QoS requirements still remains an open area of research. In this paper, we propose a hybrid QoS-aware workflow scheduling algorithm named HLWOA to address the problem of simultaneously minimizing the completion time and execution cost of workflow scheduling in cloud computing. First, the workflow scheduling problem in cloud computing is modeled as a multi-objective optimization problem. Then, based on the heterogeneous earliest finish time (HEFT) heuristic optimization algorithm, tasks are reverse topologically sorted and assigned to virtual machines with the earliest finish time to construct an initial workflow task scheduling sequence. Furthermore, an improved Whale Optimization Algorithm (WOA) based on Lévy flight is proposed. The output solution of HEFT is used as one of the initial population solutions in WOA to accelerate the convergence speed of the algorithm. Subsequently, a Lévy flight search strategy is introduced in the iterative optimization phase to avoid the algorithm falling into local optimal solutions. The proposed HLWOA is evaluated on the WorkflowSim platform using real-world scientific workflows (Cybershake and Montage) with different task scales (100 and 1000). Experimental results demonstrate that HLWOA outperforms HEFT, HEPGA, and standard WOA in both makespan and cost, with normalized fitness values consistently ranking first.
Additional Links: PMID-40807868
@article {pmid40807868,
year = {2025},
author = {Cui, M and Wang, Y},
title = {An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {15},
pages = {},
pmid = {40807868},
issn = {1424-8220},
}
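The Lévy-flight perturbation commonly added to WOA-style metaheuristics can be written compactly with Mantegna's method, as sketched below; the exponent, scale factor, and position encoding are assumptions for illustration, not the HLWOA parameters.

# Illustrative Levy-flight perturbation (Mantegna's method), as commonly used in
# WOA-style metaheuristics to escape local optima; beta and scale are assumptions.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a candidate task-to-VM mapping encoded as continuous positions.
position = np.random.rand(10)                 # 10 tasks; positions later decoded to VMs
candidate = position + 0.01 * levy_step(10)
print(np.clip(candidate, 0, 1))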
RevDate: 2025-08-17
Low-Latency Edge-Enabled Digital Twin System for Multi-Robot Collision Avoidance and Remote Control.
Sensors (Basel, Switzerland), 25(15):.
This paper proposes a low-latency and scalable architecture for Edge-Enabled Digital Twin networked control systems (E-DTNCS) aimed at multi-robot collision avoidance and remote control in dynamic and latency-sensitive environments. Traditional approaches, which rely on centralized cloud processing or direct sensor-to-controller communication, are inherently limited by excessive network latency, bandwidth bottlenecks, and a lack of predictive decision-making, thus constraining their effectiveness in real-time multi-agent systems. To overcome these limitations, we propose a novel framework that seamlessly integrates edge computing with digital twin (DT) technology. By performing localized preprocessing at the edge, the system extracts semantically rich features from raw sensor data streams, reducing the transmission overhead of the original data. This shift from raw data to feature-based communication significantly alleviates network congestion and enhances system responsiveness. The DT layer leverages these extracted features to maintain high-fidelity synchronization with physical robots and to execute predictive models for proactive collision avoidance. To empirically validate the framework, a real-world testbed was developed, and extensive experiments were conducted with multiple mobile robots. The results revealed a substantial reduction in collision rates when DT was deployed, and further improvements were observed with E-DTNCS integration due to significantly reduced latency. These findings confirm the system's enhanced responsiveness and its effectiveness in handling real-time control tasks. The proposed framework demonstrates the potential of combining edge intelligence with DT-driven control in advancing the reliability, scalability, and real-time performance of multi-robot systems for industrial automation and mission-critical cyber-physical applications.
Additional Links: PMID-40807829
@article {pmid40807829,
year = {2025},
author = {Mtowe, DP and Long, L and Kim, DM},
title = {Low-Latency Edge-Enabled Digital Twin System for Multi-Robot Collision Avoidance and Remote Control.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {15},
pages = {},
pmid = {40807829},
issn = {1424-8220},
support = {2019R1G1A1100699//National Research Foundation of Korea/ ; },
}
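The edge-side idea of transmitting compact features instead of raw sensor streams can be illustrated as follows; the sensor type, feature choices, and sizes are hypothetical and not taken from the paper.

# Toy illustration of the edge-side idea: summarize a raw range-sensor sweep into
# a compact feature vector before transmission; names and sizes are assumptions.
import numpy as np

def extract_features(scan: np.ndarray) -> np.ndarray:
    """Reduce a raw 360-sample scan to a few features the digital twin needs."""
    return np.array([scan.min(),                 # nearest obstacle distance
                     float(np.argmin(scan)),     # its bearing (sample index)
                     scan.mean(),                # average clearance
                     scan.std()])                # clutter level

raw = np.random.uniform(0.2, 5.0, 360)           # metres, hypothetical lidar sweep
features = extract_features(raw)
print(raw.nbytes, "bytes raw ->", features.nbytes, "bytes transmitted")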
RevDate: 2025-08-17
Medical Data over Sound-CardiaWhisper Concept.
Sensors (Basel, Switzerland), 25(15):.
Data over sound (DoS) is an established technique that has experienced a resurgence in recent years, finding applications in areas such as contactless payments, device pairing, authentication, presence detection, toys, and offline data transfer. This study introduces CardiaWhisper, a system that extends the DoS concept to the medical domain by using a medical data-over-sound (MDoS) framework. CardiaWhisper integrates wearable biomedical sensors with home care systems, edge or IoT gateways, and telemedical networks or cloud platforms. Using a transmitter device, vital signs such as ECG (electrocardiogram) signals, PPG (photoplethysmogram) signals, RR (respiratory rate), and ACC (acceleration/movement) are sensed, conditioned, encoded, and acoustically transmitted to a nearby receiver-typically a smartphone, tablet, or other gadget-and can be further relayed to edge and cloud infrastructures. As a case study, this paper presents the real-time transmission and processing of ECG signals. The transmitter integrates an ECG sensing module, an encoder (either a PLL-based FM modulator chip or a microcontroller), and a sound emitter in the form of a standard piezoelectric speaker. The receiver, in the form of a mobile phone, tablet, or desktop computer, captures the acoustic signal via its built-in microphone and executes software routines to decode the data. It then enables a range of control and visualization functions for both local and remote users. Emphasis is placed on describing the system architecture and its key components, as well as the software methodologies used for signal decoding on the receiver side, where several algorithms are implemented using open-source, platform-independent technologies, such as JavaScript, HTML, and CSS. While the main focus is on the transmission of analog data, digital data transmission is also illustrated. The CardiaWhisper system is evaluated across several performance parameters, including functionality, complexity, speed, noise immunity, power consumption, range, and cost-efficiency. Quantitative measurements of the signal-to-noise ratio (SNR) were performed in various realistic indoor scenarios, including different distances, obstacles, and noise environments. Preliminary results are presented, along with a discussion of design challenges, limitations, and feasible applications. Our experience demonstrates that CardiaWhisper provides a low-power, eco-friendly alternative to traditional RF or Bluetooth-based medical wearables in various applications.
Additional Links: PMID-40807741
@article {pmid40807741,
year = {2025},
author = {Stojanović, R and Đurković, J and Vukmirović, M and Babić, B and Miranović, V and Škraba, A},
title = {Medical Data over Sound-CardiaWhisper Concept.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {15},
pages = {},
pmid = {40807741},
issn = {1424-8220},
support = {P5-0018//The Slovenian Research and Innovation Agency/ ; C3330-22-953012//Ministry of Higher Education, Science and Innovation of the Republic of Slovenia/ ; 3330-22-3515//Ministry of Higher Education, Science, and Innovation of the Republic of Slovenia/ ; VI ARCA//European Union Interreg/ ; },
abstract = {Data over sound (DoS) is an established technique that has experienced a resurgence in recent years, finding applications in areas such as contactless payments, device pairing, authentication, presence detection, toys, and offline data transfer. This study introduces CardiaWhisper, a system that extends the DoS concept to the medical domain by using a medical data-over-sound (MDoS) framework. CardiaWhisper integrates wearable biomedical sensors with home care systems, edge or IoT gateways, and telemedical networks or cloud platforms. Using a transmitter device, vital signs such as ECG (electrocardiogram) signals, PPG (photoplethysmogram) signals, RR (respiratory rate), and ACC (acceleration/movement) are sensed, conditioned, encoded, and acoustically transmitted to a nearby receiver-typically a smartphone, tablet, or other gadget-and can be further relayed to edge and cloud infrastructures. As a case study, this paper presents the real-time transmission and processing of ECG signals. The transmitter integrates an ECG sensing module, an encoder (either a PLL-based FM modulator chip or a microcontroller), and a sound emitter in the form of a standard piezoelectric speaker. The receiver, in the form of a mobile phone, tablet, or desktop computer, captures the acoustic signal via its built-in microphone and executes software routines to decode the data. It then enables a range of control and visualization functions for both local and remote users. Emphasis is placed on describing the system architecture and its key components, as well as the software methodologies used for signal decoding on the receiver side, where several algorithms are implemented using open-source, platform-independent technologies, such as JavaScript, HTML, and CSS. While the main focus is on the transmission of analog data, digital data transmission is also illustrated. The CardiaWhisper system is evaluated across several performance parameters, including functionality, complexity, speed, noise immunity, power consumption, range, and cost-efficiency. Quantitative measurements of the signal-to-noise ratio (SNR) were performed in various realistic indoor scenarios, including different distances, obstacles, and noise environments. Preliminary results are presented, along with a discussion of design challenges, limitations, and feasible applications. Our experience demonstrates that CardiaWhisper provides a low-power, eco-friendly alternative to traditional RF or Bluetooth-based medical wearables in various applications.},
}
RevDate: 2025-08-16
Efficient workflow scheduling using an improved multi-objective memetic algorithm in cloud-edge-end collaborative framework.
Scientific reports, 15(1):29754 pii:10.1038/s41598-025-08691-y.
With the rapid advancement of large-scale model technologies, AI agent frameworks built on foundation models have become a central focus of artificial-intelligence research. In cloud-edge-end collaborative computing frameworks, efficient workflow scheduling is essential to reducing both server energy consumption and overall makespan. This paper addresses this challenge by proposing an Improved Multi-Objective Memetic Algorithm (IMOMA) that simultaneously optimizes energy consumption and makespan. First, a multi-objective optimization model incorporating task execution constraints and priority constraints is developed, and complexity analysis confirms its NP-hard nature. Second, the IMOMA algorithm enhances population diversity through dynamic opposition-based learning, introduces local search operators tailored for bi-objective optimization, and maintains Pareto optimal solutions via an elite archive. A dynamic selection mechanism based on operator historical performance and an adaptive local search triggering strategy effectively balance global exploration and local exploitation capabilities. Experimental results on 10 standard datasets demonstrate that IMOMA achieves improvements of 93%, 7%, and 19% in hypervolume and 58%, 1%, and 23% in inverted generational distance compared to MOPSO, NSGA-II, and SPEA-II algorithms. Additionally, ablation experiments reveal the influence mechanisms of scheduling strategies, server configurations, and other constraints on optimization objectives, providing an engineering-oriented solution for real-world cloud-edge-end collaborative scenarios.
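As a rough illustration of the bi-objective (energy, makespan) setting this abstract describes, the Python sketch below filters a set of evaluated schedules down to a Pareto (non-dominated) archive. It is not the IMOMA algorithm itself; the names and numbers are invented.

# Minimal sketch of bi-objective (energy, makespan) Pareto filtering,
# assuming each candidate schedule has already been evaluated.
from typing import List, Tuple

Objectives = Tuple[float, float]  # (energy consumption, makespan), both minimized

def dominates(a: Objectives, b: Objectives) -> bool:
    """True if a is at least as good as b in both objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_archive(candidates: List[Objectives]) -> List[Objectives]:
    """Keep only non-dominated (energy, makespan) points, as an elite archive would."""
    archive = []
    for c in candidates:
        if any(dominates(other, c) for other in candidates if other != c):
            continue  # c is dominated by some other schedule
        archive.append(c)
    return archive

if __name__ == "__main__":
    # Hypothetical evaluated schedules: (energy in J, makespan in s)
    evaluated = [(120.0, 30.0), (100.0, 35.0), (130.0, 28.0), (135.0, 36.0)]
    print(pareto_archive(evaluated))  # the last point is dominated and dropped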
Additional Links: PMID-40804083
@article {pmid40804083,
year = {2025},
author = {Cui, G and Zhang, W and Xu, W and Bao, H},
title = {Efficient workflow scheduling using an improved multi-objective memetic algorithm in cloud-edge-end collaborative framework.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {29754},
doi = {10.1038/s41598-025-08691-y},
pmid = {40804083},
issn = {2045-2322},
support = {No. 2023C01042//Zhejiang Provincial Science and Technology Program/ ; },
abstract = {With the rapid advancement of large-scale model technologies, AI agent frameworks built on foundation models have become a central focus of artificial-intelligence research. In cloud-edge-end collaborative computing frameworks, efficient workflow scheduling is essential to reducing both server energy consumption and overall makespan. This paper addresses this challenge by proposing an Improved Multi-Objective Memetic Algorithm (IMOMA) that simultaneously optimizes energy consumption and makespan. First, a multi-objective optimization model incorporating task execution constraints and priority constraints is developed, and complexity analysis confirms its NP-hard nature. Second, the IMOMA algorithm enhances population diversity through dynamic opposition-based learning, introduces local search operators tailored for bi-objective optimization, and maintains Pareto optimal solutions via an elite archive. A dynamic selection mechanism based on operator historical performance and an adaptive local search triggering strategy effectively balance global exploration and local exploitation capabilities. Experimental results on 10 standard datasets demonstrate that IMOMA achieves improvements of 93%, 7%, and 19% in hypervolume and 58%, 1%, and 23% in inverted generational distance compared to MOPSO, NSGA-II, and SPEA-II algorithms. Additionally, ablation experiments reveal the influence mechanisms of scheduling strategies, server configurations, and other constraints on optimization objectives, providing an engineering-oriented solution for real-world cloud-edge-end collaborative scenarios.},
}
RevDate: 2025-08-16
Intelligent deep learning for human activity recognition in individuals with disabilities using sensor based IoT and edge cloud continuum.
Scientific reports, 15(1):29640.
Aging is associated with a reduction in the capability to perform everyday routine activities and a decline in physical activity, which affects physical and mental health. A human activity recognition (HAR) system can be a valuable tool for elderly individuals or patients, as it monitors their activities and detects any significant changes in behavior or events. When integrated with the Internet of Things (IoT), this system enables individuals to live independently while ensuring their well-being. The IoT-edge-cloud framework enhances this by processing data as close to the source as possible, either on edge devices or directly on the IoT devices themselves. However, the massive number of activity constellations and sensor configurations makes the HAR problem challenging to solve deterministically. HAR involves collecting sensor data to classify diverse human activities and is a rapidly growing field. It presents valuable insights into the health, fitness, and overall wellness of individuals outside of hospital settings. Therefore, machine learning (ML) models are widely used to build HAR systems that discover patterns of human activity from sensor data. In this manuscript, an Intelligent Deep Learning Technique for Human Activity Recognition of Persons with Disabilities using the Sensors Technology (IDLTHAR-PDST) technique is proposed. The purpose of the IDLTHAR-PDST technique is to efficiently recognize and interpret activities by leveraging sensor technology within a smart IoT-Edge-Cloud continuum. Initially, the IDLTHAR-PDST technique utilizes a min-max normalization-based data pre-processing model to optimize sensor data consistency and enhance model performance. For feature subset selection, the enhanced honey badger algorithm (EHBA) model is used to effectively reduce dimensionality while retaining critical activity-related features. Finally, the deep belief network (DBN) model is employed for HAR. To exhibit the improved performance of the proposed IDLTHAR-PDST model, a comprehensive simulation study is accomplished. The performance validation of the IDLTHAR-PDST model portrayed a superior accuracy value of 98.75% over existing techniques.
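The pre-processing step named in this abstract is ordinary min-max normalization. The short Python sketch below rescales a few hypothetical sensor channels to [0, 1] with scikit-learn; it does not reproduce the paper's EHBA feature selection or DBN classifier.

# Minimal sketch: min-max normalization of raw sensor readings, assuming a
# NumPy array of shape (samples, features); all values are hypothetical.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

raw = np.array([
    [0.2, 9.81, 37.1],   # e.g. accel-x, accel-z, skin temperature (made up)
    [0.5, 9.70, 36.9],
    [1.1, 9.90, 37.4],
])

scaler = MinMaxScaler(feature_range=(0, 1))
normalized = scaler.fit_transform(raw)   # each column rescaled to [0, 1]
print(normalized)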
Additional Links: PMID-40804077
@article {pmid40804077,
year = {2025},
author = {Maray, M},
title = {Intelligent deep learning for human activity recognition in individuals with disabilities using sensor based IoT and edge cloud continuum.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {29640},
pmid = {40804077},
issn = {2045-2322},
support = {KSRG-2024-269//King Salman Center for Disability Research/ ; },
abstract = {Aging is associated with a reduction in the capability to perform activities of everyday routine and a decline in physical activity, which affects physical and mental health. A human activity recognition (HAR) system can be a valuable tool for elderly individuals or patients, as it monitors their activities and detects any significant changes in behavior or events. When integrated with the Internet of Things (IoT), this system enables individuals to live independently while ensuring their well-being. The IoT-edge-cloud framework enhances this by processing data as close to the source as possible-either on edge devices or directly on the IoT devices themselves. However, the massive number of activity constellations and sensor configurations make the HAR problem challenging to solve deterministically. HAR involves collecting sensor data to classify diverse human activities and is a rapidly growing field. It presents valuable insights into the health, fitness, and overall wellness of individuals outside of hospital settings. Therefore, the machine learning (ML) model is mostly used for the growth of the HAR system to discover the models of human activity from the sensor data. In this manuscript, an Intelligent Deep Learning Technique for Human Activity Recognition of Persons with Disabilities using the Sensors Technology (IDLTHAR-PDST) technique is proposed. The purpose of the IDLTHAR-PDST technique is to efficiently recognize and interpret activities by leveraging sensor technology within a smart IoT-Edge-Cloud continuum. Initially, the IDLTHAR-PDST technique utilizes min-max normalization-based data pre-processing model to optimize sensor data consistency and enhance model performance. For feature subset selection, the enhanced honey badger algorithm (EHBA) model is used to effectively reduce dimensionality while retaining critical activity-related features. Finally, the deep belief network (DBN) model is employed for HAR. To exhibit the improved performance of the existing IDLTHAR-PDST model, a comprehensive simulation study is accomplished. The performance validation of the IDLTHAR-PDST model portrayed a superior accuracy value of 98.75% over existing techniques.},
}
RevDate: 2025-08-16
Evaluating prompt and data perturbation sensitivity in large language models for radiology reports classification.
JAMIA open, 8(4):ooaf073.
OBJECTIVES: Large language models (LLMs) offer potential in natural language processing tasks in healthcare. Due to the need for high accuracy, understanding their limitations is essential. The purpose of this study was to evaluate the performance of LLMs in classifying radiology reports for the presence of pulmonary embolism (PE) under various conditions, including different prompt designs and data perturbations.
MATERIALS AND METHODS: In this retrospective, institutional review board-approved study, we evaluated three Google LLMs (Gemini-1.5-Pro, Gemini-1.5-Flash-001, and Gemini-1.5-Flash-002) in classifying 11,999 pulmonary CT angiography radiology reports for PE. Ground truth labels were determined by concordance between a computer vision-based PE detection (CVPED) algorithm and multiple LLM runs under various configurations. Discrepancies between algorithms' classifications were aggregated and manually reviewed. We evaluated the effects of prompt design, data perturbations, and repeated analyses across geographic cloud regions. Performance metrics were calculated.
RESULTS: Of 11,999 reports, 1296 (10.8%) were PE-positive. Accuracy across LLMs ranged between 0.953 and 0.996. The highest recall (up to 0.997) was achieved by a prompt modified after a review of the misclassified cases. Few-shot prompting improved recall (up to 0.99), while chain-of-thought prompting generally degraded performance. Gemini-1.5-Flash-002 demonstrated the highest robustness against data perturbations. Geographic cloud region variability was minimal for Gemini-1.5-Pro, while the Flash models showed stable performance.
DISCUSSION AND CONCLUSION: LLMs demonstrated high performance in classifying radiology reports, though results varied with prompt design and data quality. These findings underscore the need for systematic evaluation and validation of LLMs for clinical applications, particularly in high-stakes scenarios.
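For readers unfamiliar with the prompting strategies compared here, the Python sketch below assembles a simple few-shot prompt for binary PE classification. The example reports and labels are fabricated, and the call to a Gemini model is omitted because it depends on the chosen SDK and deployment.

# Minimal sketch of few-shot prompt assembly for binary PE classification.
# The example reports and labels are fabricated for illustration only.
FEW_SHOT_EXAMPLES = [
    ("CTPA: Filling defect in the right lower lobe segmental artery.", "PE-positive"),
    ("CTPA: No evidence of pulmonary embolism. Lungs are clear.", "PE-negative"),
]

def build_prompt(report_text: str) -> str:
    lines = ["Classify each radiology report as PE-positive or PE-negative.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Report: {example}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Report: {report_text}")
    lines.append("Label:")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("CTPA: Subsegmental filling defects in the left upper lobe."))
    # The resulting string would then be sent to the chosen LLM endpoint.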
Additional Links: PMID-40799928
@article {pmid40799928,
year = {2025},
author = {Sorin, V and Collins, JD and Bratt, AK and Kusmirek, JE and Mugu, VK and Kline, TL and Butler, CL and Wood, NG and Cook, CJ and Korfiatis, P},
title = {Evaluating prompt and data perturbation sensitivity in large language models for radiology reports classification.},
journal = {JAMIA open},
volume = {8},
number = {4},
pages = {ooaf073},
pmid = {40799928},
issn = {2574-2531},
abstract = {OBJECTIVES: Large language models (LLMs) offer potential in natural language processing tasks in healthcare. Due to the need for high accuracy, understanding their limitations is essential. The purpose of this study was to evaluate the performance of LLMs in classifying radiology reports for the presence of pulmonary embolism (PE) under various conditions, including different prompt designs and data perturbations.
MATERIALS AND METHODS: In this retrospective, institutional review board approved study, we evaluated 3 Google's LLMs including Gemini-1.5-Pro, Gemini-1.5-Flash-001, and Gemini-1.5-Flash-002, in classifying 11 999 pulmonary CT angiography radiology reports for PE. Ground truth labels were determined by concordance between a computer vision-based PE detection (CVPED) algorithm and multiple LLM runs under various configurations. Discrepancies between algorithms' classifications were aggregated and manually reviewed. We evaluated the effects of prompt design, data perturbations, and repeated analyses across geographic cloud regions. Performance metrics were calculated.
RESULTS: Of 11 999 reports, 1296 (10.8%) were PE-positive. Accuracy across LLMs ranged between 0.953 and 0.996. The highest recall rate for a prompt modified after a review of the misclassified cases (up to 0.997). Few-shot prompting improved recall (up to 0.99), while chain-of-thought generally degraded performance. Gemini-1.5-Flash-002 demonstrated the highest robustness against data perturbations. Geographic cloud region variability was minimal for Gemini-1.5+-Pro, while the Flash models showed stable performance.
DISCUSSION AND CONCLUSION: LLMs demonstrated high performance in classifying radiology reports, though results varied with prompt design and data quality. These findings underscore the need for systematic evaluation and validation of LLMs for clinical applications, particularly in high-stakes scenarios.},
}
RevDate: 2025-08-14
Sustainable E-Health: Energy-Efficient Tiny AI for Epileptic Seizure Detection via EEG.
Biomedical engineering and computational biology, 16:11795972241283101.
Tiny Artificial Intelligence (Tiny AI) is transforming resource-constrained embedded systems, particularly in e-health applications, by introducing a shift in Tiny Machine Learning (TinyML) and its integration with the Internet of Things (IoT). Unlike conventional machine learning (ML), which demands substantial processing power, TinyML strategically delegates processing requirements to the cloud infrastructure, allowing lightweight models to run on embedded devices. This study aimed to (i) develop a TinyML workflow that details the steps for model creation and deployment in resource-constrained environments and (ii) apply the workflow to e-health applications for the real-time detection of epileptic seizures using electroencephalography (EEG) data. The methodology employs a dataset of 4097 EEG recordings per patient, each 23.5 seconds long, from 500 patients, to develop a robust and resilient model. The model was deployed using TinyML on microcontrollers tailored to hardware with limited resources. TensorFlow Lite (TFLite) efficiently runs ML models on small devices, such as wearables. Simulation outcomes demonstrated significant performance, particularly in predicting epileptic seizures, with the ExtraTrees Classifier achieving a notable 99.6% Area Under the Curve (AUC) on the validation set. Because of its superior performance, the ExtraTrees Classifier was selected as the preferred model. For the optimized TinyML model, the accuracy remained practically unchanged, whereas inference time was significantly reduced. Additionally, the converted model had a smaller size of 256 KB, approximately ten times smaller, making it suitable for microcontrollers with a capacity of no more than 1 MB. These findings highlight the potential of TinyML to significantly enhance healthcare applications by enabling real-time, energy-efficient decision-making directly on local devices. This is especially valuable in scenarios with limited computing resources or during emergencies, as it reduces latency, ensures privacy, and operates without reliance on cloud infrastructure. Moreover, by reducing the size of training datasets needed, TinyML helps lower overall costs and minimizes the risk of overfitting, making it an even more cost-effective and reliable solution for healthcare innovations.
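The deployment step described here rests on TensorFlow Lite conversion. The sketch below shows that step in generic form for a small stand-in Keras network with post-training optimization; the study's selected ExtraTrees classifier would require a different export path, and the input size is an assumption.

# Generic sketch of the TinyML deployment step: convert a small Keras model to
# TensorFlow Lite with default post-training optimization. The network below is
# a stand-in, not the study's ExtraTrees classifier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(178,)),          # hypothetical EEG window length
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("seizure_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"TFLite model size: {len(tflite_model) / 1024:.1f} KB")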
Additional Links: PMID-40792199
@article {pmid40792199,
year = {2025},
author = {Hizem, M and Aoueileyine, MO and Belhaouari, SB and El Omri, A and Bouallegue, R},
title = {Sustainable E-Health: Energy-Efficient Tiny AI for Epileptic Seizure Detection via EEG.},
journal = {Biomedical engineering and computational biology},
volume = {16},
number = {},
pages = {11795972241283101},
pmid = {40792199},
issn = {1179-5972},
abstract = {Tiny Artificial Intelligence (Tiny AI) is transforming resource-constrained embedded systems, particularly in e-health applications, by introducing a shift in Tiny Machine Learning (TinyML) and its integration with the Internet of Things (IoT). Unlike conventional machine learning (ML), which demands substantial processing power, TinyML strategically delegates processing requirements to the cloud infrastructure, allowing lightweight models to run on embedded devices. This study aimed to (i) Develop a TinyML workflow that details the steps for model creation and deployment in resource-constrained environments and (ii) apply the workflow to e-health applications for the real-time detection of epileptic seizures using electroencephalography (EEG) data. The methodology employs a dataset of 4097 EEG recordings per patient, each 23.5 seconds long, from 500 patients, to develop a robust and resilient model. The model was deployed using TinyML on microcontrollers tailored to hardware with limited resources. TensorFlow Lite (TFLite) efficiently runs ML models on small devices, such wearables. Simulation outcomes demonstrated significant performance, particularly in predicting epileptic seizures, with the ExtraTrees Classifier achieving a notable 99.6% Area Under the Curve (AUC) on the validation set. Because of its superior performance, the ExtraTrees Classifier was selected as the preferred model. For the optimized TinyML model, the accuracy remained practically unchanged, whereas inference time was significantly reduced. Additionally, the converted model had a smaller size of 256 KB, approximately ten times smaller, making it suitable for microcontrollers with a capacity of no more than 1 MB. These findings highlight the potential of TinyML to significantly enhance healthcare applications by enabling real-time, energy-efficient decision-making directly on local devices. This is especially valuable in scenarios with limited computing resources or during emergencies, as it reduces latency, ensures privacy, and operates without reliance on cloud infrastructure. Moreover, by reducing the size of training datasets needed, TinyML helps lower overall costs and minimizes the risk of overfitting, making it an even more cost-effective and reliable solution for healthcare innovations.},
}
RevDate: 2025-08-12
Teaching Python with team-based learning: using cloud-based notebooks for interactive coding education.
FEBS open bio [Epub ahead of print].
Computer programming and bioinformatics are increasingly essential topics in life sciences research, facilitating the analysis of large and complex 'omics' datasets. However, they remain challenging for students without a background in mathematics or computing. To address challenges in teaching programming within biomedical education, this study integrates team-based learning (TBL) with cloud-hosted interactive Python notebooks, targeting enhanced student engagement, understanding, and collaboration in bioinformatics in two Master's-level classes with 28 biomedical students in total. Four interactive notebooks covering Python basics and practical bioinformatics applications, ranging from data manipulation to multi-omics analysis, were developed. Hosted on GitHub and integrated with Google Colaboratory, these notebooks ensured equal access and eliminated technical barriers for students with varied computing setups. During the TBL session, students were highly engaged with the notebooks, which led to a greater interest in Python and increased confidence in using bioinformatics tools. Feedback highlighted the value of TBL and interactive notebooks in enriching the learning experience, while also identifying a need for further development in bioinformatics research skills. Although more validity evidence is needed in future studies, this blended, cloud-based TBL approach effectively made bioinformatics education more accessible and engaging, suggesting its potential for enhancing computational training across life sciences.
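As an illustration of the kind of exercise such cloud-hosted notebooks can contain (the course notebooks themselves live in the authors' GitHub repository and are not reproduced here), the cell below analyses a tiny, made-up expression table with pandas.

# Illustrative notebook-style cell: a tiny, fabricated expression table analysed
# with pandas, similar in spirit to an introductory bioinformatics exercise.
import pandas as pd

data = pd.DataFrame(
    {"sample_1": [5.2, 0.1, 3.3], "sample_2": [4.8, 0.4, 2.9]},
    index=["GENE_A", "GENE_B", "GENE_C"],
)

data["mean_expression"] = data.mean(axis=1)     # per-gene mean across samples
print(data.sort_values("mean_expression", ascending=False))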
Additional Links: PMID-40790850
@article {pmid40790850,
year = {2025},
author = {Osório, NS and Garma, LD},
title = {Teaching Python with team-based learning: using cloud-based notebooks for interactive coding education.},
journal = {FEBS open bio},
volume = {},
number = {},
pages = {},
doi = {10.1002/2211-5463.70097},
pmid = {40790850},
issn = {2211-5463},
abstract = {Computer programming and bioinformatics are increasingly essential topics in life sciences research, facilitating the analysis of large and complex 'omics' datasets. However, they remain challenging for students without a background in mathematics or computing. To address challenges in teaching programming within biomedical education, this study integrates team-based learning (TBL) with cloud-hosted interactive Python notebooks, targeting enhanced student engagement, understanding, and collaboration in bioinformatics in two Masters level classes with 28 biomedical students in total. Four interactive notebooks covering Python basics and practical bioinformatics applications-ranging from data manipulation to multi-omics analysis-were developed. Hosted on github and integrated with Google Colaboratory, these notebooks ensured equal access and eliminated technical barriers for students with varied computing setups. During the TBL session, students were highly engaged with the notebooks, which led to a greater interest in Python and increased confidence in using bioinformatics tools. Feedback highlighted the value of TBL and interactive notebooks in enriching the learning experience, while also identifying a need for further development in bioinformatics research skills. Although more validity evidence is needed in future studies, this blended, cloud-based TBL approach effectively made bioinformatics education more accessible and engaging, suggesting its potential for enhancing computational training across life sciences.},
}
RevDate: 2025-08-13
CmpDate: 2025-08-11
Deep learning neural network development for the classification of bacteriocin sequences produced by lactic acid bacteria.
F1000Research, 13:981.
BACKGROUND: The rise of antibiotic-resistant bacteria presents a pressing need for exploring new natural compounds with innovative mechanisms to replace existing antibiotics. Bacteriocins offer promising alternatives for developing therapeutic and preventive strategies in livestock, aquaculture, and human health. Specifically, bacteriocins produced by lactic acid bacteria (LAB) are recognized as GRAS (generally recognized as safe) and QPS (qualified presumption of safety). This study aims to develop a deep learning model specifically designed to classify bacteriocins by their LAB origin, using interpretable k-mer features and embedding vectors to enable applications in antimicrobial discovery.
METHODS: We developed a deep learning neural network for binary classification of bacteriocin amino acid sequences (BacLAB vs. Non-BacLAB). Features were extracted using k-mers (k=3,5,7,15,20) and vector embeddings (EV). Ten feature combinations were tested (e.g., EV, EV+5-mers+7-mers). Sequences were filtered by length (50-2000 AA) to ensure uniformity, and class balance was maintained (24,964 BacLAB vs. 25,000 Non-BacLAB). The model was trained on Google Colab, demonstrating computational accessibility without specialized hardware.
RESULTS: The '5-mers+7-mers+EV' group achieved the best performance, with k-fold cross-validation (k=30) showing: 9.90% loss, 90.14% accuracy, 90.30% precision, 90.10% recall and F1 score. Fold 22 stood out with 8.50% loss, 91.47% accuracy, and 91.00% precision, recall, and F1 score. Five sets of 100 LAB-specific k-mers were identified, revealing conserved motifs. Despite high accuracy, sequence length variation (50-2000 AA) may bias k-mer representation, favoring longer sequences. Additionally, experimental validation is required to confirm the biological activity of predicted bacteriocins. These aspects highlight directions for future research.
CONCLUSIONS: The model developed in this study achieved consistent results with those seen in the reviewed literature. It outperformed some studies by 3-10%. Its implementation in resource-limited settings is feasible via cloud platforms like Google Colab. The identified k-mers could guide the design of synthetic antimicrobials, pending further in vitro validation.
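Because the feature scheme above is built on k-mers, the Python sketch below shows one straightforward way to count overlapping k-mers in an amino-acid sequence. The peptide string is invented, and the paper's embedding vectors and network architecture are not reproduced.

# Minimal sketch: overlapping k-mer counting for an amino-acid sequence.
# The peptide is fabricated; the study combined such counts with embeddings.
from collections import Counter

def kmer_counts(sequence: str, k: int) -> Counter:
    """Count overlapping k-mers of length k in an amino-acid string."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

if __name__ == "__main__":
    peptide = "MKKIEKLTEKEMQQVNGGKYYGNGV"   # made-up bacteriocin-like sequence
    for k in (3, 5, 7):
        counts = kmer_counts(peptide, k)
        print(k, counts.most_common(3))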
Additional Links: PMID-40786095
@article {pmid40786095,
year = {2024},
author = {González, LL and Arias-Serrano, I and Villalba-Meneses, F and Navas-Boada, P and Cruz-Varela, J},
title = {Deep learning neural network development for the classification of bacteriocin sequences produced by lactic acid bacteria.},
journal = {F1000Research},
volume = {13},
number = {},
pages = {981},
pmid = {40786095},
issn = {2046-1402},
mesh = {*Bacteriocins/classification/chemistry/biosynthesis ; *Deep Learning ; *Lactobacillales/metabolism ; *Neural Networks, Computer ; Amino Acid Sequence ; },
abstract = {BACKGROUND: The rise of antibiotic-resistant bacteria presents a pressing need for exploring new natural compounds with innovative mechanisms to replace existing antibiotics. Bacteriocins offer promising alternatives for developing therapeutic and preventive strategies in livestock, aquaculture, and human health. Specifically, those produced by LAB are recognized as GRAS and QPS. This study aims to develop a deep learning model specifically designed to classify bacteriocins by their LAB origin, using interpretable k-mer features and embedding vectors to enable applications in antimicrobial discover.
METHODS: We developed a deep learning neural network for binary classification of bacteriocin amino acid sequences (BacLAB vs. Non-BacLAB). Features were extracted using k-mers (k=3,5,7,15,20) and vector embeddings (EV). Ten feature combinations were tested (e.g., EV, EV+5-mers+7-mers). Sequences were filtered by length (50-2000 AA) to ensure uniformity, and class balance was maintained (24,964 BacLAB vs. 25,000 Non-BacLAB). The model was trained on Google Colab, demonstrating computational accessibility without specialized hardware.
RESULTS: The '5-mers+7-mers+EV' group achieved the best performance, with k-fold cross-validation (k=30) showing: 9.90% loss, 90.14% accuracy, 90.30% precision, 90.10% recall and F1 score. Folder 22 stood out with 8.50% loss, 91.47% accuracy, and 91.00% precision, recall, and F1 score. Five sets of 100 LAB-specific k-mers were identified, revealing conserved motifs. Despite high accuracy, sequence length variation (50-2000 AA) may bias k-mer representation, favoring longer sequences. Additionally, experimental validation is required to confirm the biological activity of predicted bacteriocins. These aspects highlight directions for future research.
CONCLUSIONS: The model developed in this study achieved consistent results with those seen in the reviewed literature. It outperformed some studies by 3-10%. Its implementation in resource-limited settings is feasible via cloud platforms like Google Colab. The identified k-mers could guide the design of synthetic antimicrobials, pending further in vitro validation.},
}
MeSH Terms:
*Bacteriocins/classification/chemistry/biosynthesis
*Deep Learning
*Lactobacillales/metabolism
*Neural Networks, Computer
Amino Acid Sequence
RevDate: 2025-08-13
Vehicle-to-everything decision optimization and cloud control based on deep reinforcement learning.
Scientific reports, 15(1):29160.
To address the challenges of decision optimization and road segment hazard assessment within complex traffic environments, and to enhance the safety and responsiveness of autonomous driving, a Vehicle-to-Everything (V2X) decision framework is proposed. This framework is structured into three modules: vehicle perception, decision-making, and execution. The vehicle perception module integrates sensor fusion techniques to capture real-time environmental data, employing deep neural networks to extract essential information. In the decision-making module, deep reinforcement learning algorithms are applied to optimize decision processes by maximizing expected rewards. Meanwhile, the road segment hazard classification module, utilizing both historical traffic data and real-time perception information, adopts a hazard evaluation model to classify road conditions automatically, providing real-time feedback to guide vehicle decision-making. Furthermore, an autonomous driving cloud control platform is designed, augmenting decision-making capabilities through centralized computing resources, enabling large-scale data analysis, and facilitating collaborative optimization. Experimental evaluations conducted within simulation environments and utilizing the KITTI dataset demonstrate that the proposed V2X decision optimization method substantially outperforms conventional decision algorithms. Vehicle decision accuracy increased by 9.0 percentage points, rising from 89.2% to 98.2%. Additionally, the response time of the cloud control system decreased from 178 ms to 127 ms, marking a reduction of 28.7%, which significantly enhances decision efficiency and real-time performance. The introduction of the road segment hazard classification model also results in a hazard assessment accuracy of 99.5%, maintaining over 95% accuracy even in high-density traffic and complex road conditions, thus illustrating strong adaptability. The results highlight the effectiveness of the proposed V2X decision optimization framework and cloud control platform in enhancing the decision quality and safety of autonomous driving systems.
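The decision module described here maximizes expected reward with deep reinforcement learning. As a deliberately simplified, hypothetical illustration of that principle (not the paper's network), the sketch below applies one tabular Q-learning update for a discretized driving state.

# Highly simplified, tabular stand-in for the reward-maximization idea behind
# the paper's deep RL module; states, actions, and rewards are invented.
from collections import defaultdict

ACTIONS = ["keep_lane", "slow_down", "change_lane"]
ALPHA, GAMMA = 0.1, 0.9           # learning rate, discount factor

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def q_update(state, action, reward, next_state):
    """One Q-learning step: move Q(s,a) toward reward + gamma * max_a' Q(s',a')."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

# Hypothetical transition: on a hazardous segment, slowing down avoided a collision.
q_update(state="hazard_high", action="slow_down", reward=+1.0, next_state="hazard_low")
print(dict(q_table["hazard_high"]))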
Additional Links: PMID-40783576
@article {pmid40783576,
year = {2025},
author = {Gao, Z and Liu, D and Zheng, C},
title = {Vehicle-to-everything decision optimization and cloud control based on deep reinforcement learning.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {29160},
pmid = {40783576},
issn = {2045-2322},
abstract = {To address the challenges of decision optimization and road segment hazard assessment within complex traffic environments, and to enhance the safety and responsiveness of autonomous driving, a Vehicle-to-Everything (V2X) decision framework is proposed. This framework is structured into three modules: vehicle perception, decision-making, and execution. The vehicle perception module integrates sensor fusion techniques to capture real-time environmental data, employing deep neural networks to extract essential information. In the decision-making module, deep reinforcement learning algorithms are applied to optimize decision processes by maximizing expected rewards. Meanwhile, the road segment hazard classification module, utilizing both historical traffic data and real-time perception information, adopts a hazard evaluation model to classify road conditions automatically, providing real-time feedback to guide vehicle decision-making. Furthermore, an autonomous driving cloud control platform is designed, augmenting decision-making capabilities through centralized computing resources, enabling large-scale data analysis, and facilitating collaborative optimization. Experimental evaluations conducted within simulation environments and utilizing the KITTI dataset demonstrate that the proposed V2X decision optimization method substantially outperforms conventional decision algorithms. Vehicle decision accuracy increased by 9.0%, rising from 89.2 to 98.2%. Additionally, the response time of the cloud control system decreased from 178 ms to 127 ms, marking a reduction of 28.7%, which significantly enhances decision efficiency and real-time performance. The introduction of the road segment hazard classification model also results in a hazard assessment accuracy of 99.5%, maintaining over 95% accuracy even in high-density traffic and complex road conditions, thus illustrating strong adaptability. The results highlight the effectiveness of the proposed V2X decision optimization framework and cloud control platform in enhancing the decision quality and safety of autonomous driving systems.},
}
RevDate: 2025-08-13
A service-oriented microservice framework for differential privacy-based protection in industrial IoT smart applications.
Scientific reports, 15(1):29230.
The rapid advancement of key technologies such as Artificial Intelligence (AI), the Internet of Things (IoT), and edge-cloud computing has significantly accelerated the transformation toward smart industries across various domains, including finance, manufacturing, and healthcare. Edge and cloud computing offer low-cost, scalable, and on-demand computational resources, enabling service providers to deliver intelligent data analytics and real-time insights to end-users. However, despite their potential, the practical adoption of these technologies faces critical challenges, particularly concerning data privacy and security. AI models, especially in distributed environments, may inadvertently retain and leak sensitive training data, exposing users to privacy risks in the event of malicious attacks. To address these challenges, this study proposes a privacy-preserving, service-oriented microservice architecture tailored for intelligent Industrial IoT (IIoT) applications. The architecture integrates Differential Privacy (DP) mechanisms into the machine learning pipeline to safeguard sensitive information. It supports both centralised and distributed deployments, promoting flexible, scalable, and secure analytics. We developed and evaluated differentially private models, including Radial Basis Function Networks (RBFNs), across a range of privacy budgets (ɛ), using both real-world and synthetic IoT datasets. Experimental evaluations using RBFNs demonstrate that the framework maintains high predictive accuracy (up to 96.72%) with acceptable privacy guarantees for budgets [Formula: see text]. Furthermore, the microservice-based deployment achieves an average latency reduction of 28.4% compared to monolithic baselines. These results confirm the effectiveness and practicality of the proposed architecture in delivering privacy-preserving, efficient, and scalable intelligence for IIoT environments. Additionally, the microservice-based design enhanced computational efficiency and reduced latency through dynamic service orchestration. This research demonstrates the feasibility of deploying robust, privacy-conscious AI services in IIoT environments, paving the way for secure, intelligent, and scalable industrial systems.
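Differential privacy of the kind applied above is commonly implemented by adding calibrated noise. The sketch below shows the classic Laplace mechanism for a count query of sensitivity 1, with the privacy budget epsilon as a parameter; it is a generic illustration, not the paper's RBFN training pipeline.

# Generic sketch of the Laplace mechanism for a count query with sensitivity 1.
# Smaller epsilon means stronger privacy and noisier answers.
import numpy as np

rng = np.random.default_rng()

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a differentially private count using Laplace noise of scale sensitivity/epsilon."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

for eps in (0.1, 1.0, 5.0):
    print(f"epsilon={eps}: noisy count = {dp_count(1000, eps):.1f}")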
Additional Links: PMID-40783426
@article {pmid40783426,
year = {2025},
author = {Murala, DK and Prasada Rao, KV and Vuyyuru, VA and Assefa, BG},
title = {A service-oriented microservice framework for differential privacy-based protection in industrial IoT smart applications.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {29230},
pmid = {40783426},
issn = {2045-2322},
abstract = {The rapid advancement of key technologies such as Artificial Intelligence (AI), the Internet of Things (IoT), and edge-cloud computing has significantly accelerated the transformation toward smart industries across various domains, including finance, manufacturing, and healthcare. Edge and cloud computing offer low-cost, scalable, and on-demand computational resources, enabling service providers to deliver intelligent data analytics and real-time insights to end-users. However, despite their potential, the practical adoption of these technologies faces critical challenges, particularly concerning data privacy and security. AI models, especially in distributed environments, may inadvertently retain and leak sensitive training data, exposing users to privacy risks in the event of malicious attacks. To address these challenges, this study proposes a privacy-preserving, service-oriented microservice architecture tailored for intelligent Industrial IoT (IIoT) applications. The architecture integrates Differential Privacy (DP) mechanisms into the machine learning pipeline to safeguard sensitive information. It supports both centralised and distributed deployments, promoting flexible, scalable, and secure analytics. We developed and evaluated differentially private models, including Radial Basis Function Networks (RBFNs), across a range of privacy budgets (ɛ), using both real-world and synthetic IoT datasets. Experimental evaluations using RBFNs demonstrate that the framework maintains high predictive accuracy (up to 96.72%) with acceptable privacy guarantees for budgets [Formula: see text]. Furthermore, the microservice-based deployment achieves an average latency reduction of 28.4% compared to monolithic baselines. These results confirm the effectiveness and practicality of the proposed architecture in delivering privacy-preserving, efficient, and scalable intelligence for IIoT environments. Additionally, the microservice-based design enhanced computational efficiency and reduced latency through dynamic service orchestration. This research demonstrates the feasibility of deploying robust, privacy-conscious AI services in IIoT environments, paving the way for secure, intelligent, and scalable industrial systems.},
}
RevDate: 2025-08-12
CmpDate: 2025-08-09
Developing real-time IoT-based public safety alert and emergency response systems.
Scientific reports, 15(1):29056.
This paper presents the design and evaluation of a real-time IoT-based emergency response and public safety alert system tailored for rapid detection, classification, and dissemination of alerts during critical incidents. The proposed architecture combines a distributed network of heterogeneous sensors (e.g., gas, flame, vibration, and biometric), edge computing nodes (Raspberry Pi, ESP32), and cloud platforms (AWS IoT, Firebase) to ensure low-latency and high-availability operations. Communication is facilitated using secure MQTT over TLS, with fallback to LoRa for rural or low-connectivity environments. A prototype was implemented and tested across four emergency scenarios (fire, traffic accident, gas leak, and medical distress) within a smart city simulation testbed. The system achieved consistent alert latency under 450 ms, detection accuracy exceeding 95%, and scalability supporting over 12,000 concurrent devices. A comprehensive comparison against seven state-of-the-art systems confirmed superior performance in latency, reliability (99.1% alert success), and uptime (99.8%). These results underscore the system's potential for deployment in urban, industrial, and infrastructure-vulnerable environments, with future work aimed at incorporating AI-driven prediction and federated learning for cloudless operation.
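The alert path above publishes sensor events over MQTT secured with TLS. A minimal publisher sketch using the paho-mqtt client is shown below; the broker host, topic, and payload are placeholders, and certificate handling would differ in a real deployment.

# Minimal sketch of publishing an alert over MQTT with TLS using paho-mqtt.
# Broker address, topic, and payload are placeholders, not the paper's setup.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.org"   # hypothetical broker
BROKER_PORT = 8883                   # conventional MQTT-over-TLS port
TOPIC = "city/zone42/alerts"         # hypothetical topic

client = mqtt.Client()               # on paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2 first
client.tls_set()                     # use system CA certificates for TLS
client.connect(BROKER_HOST, BROKER_PORT)

payload = json.dumps({"type": "gas_leak", "severity": "high", "node": "esp32-17"})
client.publish(TOPIC, payload, qos=1)
client.disconnect()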
Additional Links: PMID-40781521
@article {pmid40781521,
year = {2025},
author = {Zhang, H and Zhang, R and Sun, J},
title = {Developing real-time IoT-based public safety alert and emergency response systems.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {29056},
pmid = {40781521},
issn = {2045-2322},
abstract = {This paper presents the design and evaluation of a real-time IoT-based emergency response and public safety alert system tailored for rapid detection, classification, and dissemination of alerts during critical incidents. The proposed architecture combines a distributed network of heterogeneous sensors (e.g., gas, flame, vibration, and biometric), edge computing nodes (Raspberry Pi, ESP32), and cloud platforms (AWS IoT, Firebase) to ensure low-latency and high-availability operations. Communication is facilitated using secure MQTT over TLS, with fallback to LoRa for rural or low-connectivity environments. A prototype was implemented and tested across four emergency scenarios fire, traffic accident, gas leak, and medical distress within a smart city simulation testbed. The system achieved such as consistent alert latency under 450 ms, detection accuracy exceeding 95%, and scalability supporting over 12,000 concurrent devices. A comprehensive comparison against seven state-of-the-art systems confirmed superior performance in latency, reliability (99.1% alert success), and uptime (99.8%). These results underscore the system's potential for deployment in urban, industrial, and infrastructure-vulnerable environments, with future work aimed at incorporating AI-driven prediction and federated learning for cloudless operation.},
}
RevDate: 2025-08-12
Smart fiber with overprinted patterns to function as chip-like multi-threshold logic switch circuit.
Nature communications, 16(1):7314.
There is a growing demand for precise health management, capable of differentially caring for every inch of skin as an on-body network. To this end, each network node executes not only multi-physiological sensing but also in-situ logic computing, saving cloud computing power for massive data analysis. Herein, we present a smart fiber with multilayers of overprinted patterns, composed of many small units, each 0.3 mm long, that function as a one-dimensional (1D) array of chip-like multi-threshold logic-switch circuits. Via soft contact of curved surfaces between fiber and ink-droplet, an overprinting method is developed for stacking different layers of patterns with a line width of 75 μm in a staggered way, enabling batch production of circuit units along one long fiber. A smart fiber with a high density of >3000 circuit units per meter can be woven with fiber-type sensors to construct a textile-type body-covering network, where each node serves as a computing terminal.
Additional Links: PMID-40781088
@article {pmid40781088,
year = {2025},
author = {Wei, X and Li, R and Xiang, S and Qin, L and Luo, X and Xue, J and Fan, X},
title = {Smart fiber with overprinted patterns to function as chip-like multi-threshold logic switch circuit.},
journal = {Nature communications},
volume = {16},
number = {1},
pages = {7314},
pmid = {40781088},
issn = {2041-1723},
abstract = {There is a growing demand for precise health management, capable of differentially caring every inch of skin as an on-body network. For which, each network node executes not only multi-physiological sensing, but also in-situ logic computing to save cloud computing power for massive data analysis. Herein, we present a smart fiber with multilayers of overprinted patterns, composed of many small units with 0.3 mm long to function as a one-dimension (1D) array of chip-like multi-threshold logic-switch circuit. Via soft contact of curved surfaces between fiber and ink-droplet, an overprinting method is developed for stacking different layers of patterns with a line width of 75 μm in a staggered way, enabling batch production of circuit units along one long fiber. A smart fiber with high density of >3000 circuit units per meter can be woven with fiber-type sensors to construct a textile-type body-covering network, where each node serves as a computing terminal.},
}
RevDate: 2025-08-11
CmpDate: 2025-08-08
Bioconductor's Computational Ecosystem for Genomic Data Science in Cancer.
Methods in molecular biology (Clifton, N.J.), 2932:1-46.
The Bioconductor project enters its third decade with over two thousand packages for genomic data science, over 100,000 annotation and experiment resources, and a global system for convenient distribution to researchers. Over 60,000 PubMed Central citations and terabytes of content shipped per month attest to the impact of the project on cancer genomic data science. This report provides an overview of cancer genomics resources in Bioconductor. After an overview of Bioconductor project principles, we address exploration of institutionally curated cancer genomics data such as TCGA. We then review genomic annotation and ontology resources relevant to cancer and then briefly survey analytical workflows addressing specific topics in cancer genomics. Concluding sections cover how new software and data resources are brought into the ecosystem and how the project is tackling needs for training of the research workforce. Bioconductor's strategies for supporting methods developers and researchers in cancer genomics are evolving along with experimental and computational technologies. All the tools described in this report are backed by regularly maintained learning resources that can be used locally or in cloud computing environments.
Additional Links: PMID-40779102
@article {pmid40779102,
year = {2025},
author = {Ramos, M and Shepherd, L and Sheffield, NC and Mahmoud, A and Pagès, H and Wokaty, A and Righelli, D and Risso, D and Davis, S and Oh, S and Waldron, L and Morgan, M and Carey, V},
title = {Bioconductor's Computational Ecosystem for Genomic Data Science in Cancer.},
journal = {Methods in molecular biology (Clifton, N.J.)},
volume = {2932},
number = {},
pages = {1-46},
pmid = {40779102},
issn = {1940-6029},
mesh = {*Neoplasms/genetics ; *Genomics/methods ; Humans ; *Software ; *Computational Biology/methods ; Databases, Genetic ; *Data Science/methods ; },
abstract = {The Bioconductor project enters its third decade with over two thousand packages for genomic data science, over 100,000 annotation and experiment resources, and a global system for convenient distribution to researchers. Over 60,000 PubMed Central citations and terabytes of content shipped per month attest to the impact of the project on cancer genomic data science. This report provides an overview of cancer genomics resources in Bioconductor. After an overview of Bioconductor project principles, we address exploration of institutionally curated cancer genomics data such as TCGA. We then review genomic annotation and ontology resources relevant to cancer and then briefly survey analytical workflows addressing specific topics in cancer genomics. Concluding sections cover how new software and data resources are brought into the ecosystem and how the project is tackling needs for training of the research workforce. Bioconductor's strategies for supporting methods developers and researchers in cancer genomics are evolving along with experimental and computational technologies. All the tools described in this report are backed by regularly maintained learning resources that can be used locally or in cloud computing environments.},
}
MeSH Terms:
*Neoplasms/genetics
*Genomics/methods
Humans
*Software
*Computational Biology/methods
Databases, Genetic
*Data Science/methods
RevDate: 2025-08-10
CmpDate: 2025-08-08
A Review of emergency medical services for stroke.
African health sciences, 24(3):382-392.
In the past decade, Emergency Medical Services have been shaped by innovations in technology; the 911 telephone system and two-way radio improved notification, scheduling, and response processes. The past twenty years have also witnessed unparalleled change in computing frameworks, and mobile, social, cloud computing, and big data technologies now affect society as a whole. Over the last ten years, major technological and strategic improvements have occurred that will shape the concepts and communication methods of Emergency Medical Services in the future. Emergency Medical Services can help ensure that various conditions are treated correctly. For example, early recognition of stroke by Emergency Medical Service personnel is an important consideration for patients with stroke. Prehospital stroke screening tools that have been preliminarily evaluated for sensitivity and specificity are needed to improve stroke detection rates by Emergency Medical Service staff. This is an opportune time for Emergency Medical Services to play a key role in achieving and extending this vision. The motivation behind this article is to review these developments and highlight opportunities for Emergency Medical Service personnel to improve care.
Additional Links: PMID-40777969
@article {pmid40777969,
year = {2024},
author = {Wu, Y and Li, K and Tang, L and Li, G and Huang, D and Yang, Y and Song, S and Peng, L},
title = {A Review of emergency medical services for stroke.},
journal = {African health sciences},
volume = {24},
number = {3},
pages = {382-392},
pmid = {40777969},
issn = {1729-0503},
mesh = {Humans ; *Emergency Medical Services/organization & administration ; *Stroke/diagnosis/therapy ; },
abstract = {In the past decade, Emergency Medical Services have been associated with innovations in technology; the 911 telephone system and two-way radio have developed the notification, scheduling, and response processes. The recent twenty years have witnessed the unparalleled innovation changes of the computer framework. These new frameworks in mobile, social, cloud computing or big data concentrations essentially affect the entire society. In the last ten years, major innovation and strategic improvements have occurred, which will affect the concepts and communication methods of Emergency Medical Service in the future. Emergency Medical Service can treat various diseases in the correct way. For example, Emergency Medical Service personnel's early recognition of stroke performance is an important ideal consideration for patients with stroke patients. Pre-stroke screening tools that have been preliminarily evaluated for sensitivity and specificity are necessary to improve detection rates for the pre-court stroke by Emergency Medical Service experts. This is an excellent time for Emergency Medical Service to play a key role in achieving and transcending vision. The motivation behind this article is to provide extensive investigations and unique opportunities for Emergency Medical Service personnel groups to solve how to improve.},
}
MeSH Terms:
Humans
*Emergency Medical Services/organization & administration
*Stroke/diagnosis/therapy
RevDate: 2025-08-08
CmpDate: 2025-08-08
Decoding Sepsis: A Technical Blueprint for an Algorithm-Driven System Architecture.
Studies in health technology and informatics, 329:1970-1971.
This paper presents a scalable, serverless machine learning operations (ML Ops) architecture for near real-time sepsis detection in Emergency Department (ED) waiting rooms. Built on the Amazon Web Services (AWS) cloud environment, the system processes HL7 messages via MuleSoft, using Lambda for data handling and SageMaker for model deployment. Data is stored in Aurora PostgreSQL and visualized in on-premise Tableau™. With 99.7% of HL7 messages successfully processed, the system shows strong performance, though occasional downtime, code set mismatches, and peak execution times reveal areas for optimization.
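To make the serverless flow above concrete, the sketch below shows the general shape of an AWS Lambda handler that extracts two fields from a pipe-delimited HL7 v2 message. The event key, segment positions, and return format are assumptions for illustration, not the authors' implementation.

# Illustrative AWS Lambda handler that pulls two fields out of a pipe-delimited
# HL7 v2 message. The event shape and segment positions are assumptions; the
# real system routes messages via MuleSoft and stores results in Aurora PostgreSQL.
import json

def lambda_handler(event, context):
    raw_message = event.get("hl7", "")                 # assumed event key
    segments = {line.split("|")[0]: line.split("|")
                for line in raw_message.split("\r") if line}

    pid = segments.get("PID", [])
    obx = segments.get("OBX", [])

    record = {
        "patient_id": pid[3] if len(pid) > 3 else None,    # PID-3: patient identifier
        "observation": obx[5] if len(obx) > 5 else None,   # OBX-5: observation value
    }
    # A real deployment would score this record (e.g., via SageMaker) and persist it.
    return {"statusCode": 200, "body": json.dumps(record)}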
Additional Links: PMID-40776321
@article {pmid40776321,
year = {2025},
author = {Safi, A and Shaikh, M and Hoang, MT and Shetty, A and Kabil, G and Wang, AP},
title = {Decoding Sepsis: A Technical Blueprint for an Algorithm-Driven System Architecture.},
journal = {Studies in health technology and informatics},
volume = {329},
number = {},
pages = {1970-1971},
doi = {10.3233/SHTI251304},
pmid = {40776321},
issn = {1879-8365},
mesh = {*Sepsis/diagnosis ; Humans ; *Algorithms ; *Machine Learning ; Emergency Service, Hospital/organization & administration ; Cloud Computing ; },
abstract = {This paper presents a scalable, serverless machine learning operations (ML Ops) architecture for near real-time sepsis detection in Emergency Department (ED) waiting rooms. Built on Amazon Web Services (AWS) cloud environment, the system processes HL7 messages via MuleSoft, using Lambda for data handling, and SageMaker for model deployment. Data is stored in Aurora PostgreSQL and visualized in on-premise Tableau™. With 99.7% of HL7 messages successfully processed, the system shows strong performance, though occasional downtime, code set mismatches, and peak execution times reveal areas for optimization.},
}
MeSH Terms:
*Sepsis/diagnosis
Humans
*Algorithms
*Machine Learning
Emergency Service, Hospital/organization & administration
Cloud Computing
RevDate: 2025-08-08
CmpDate: 2025-08-08
Integrating Scalable Analytical Tools and Data Warehouses on Private Cloud.
Studies in health technology and informatics, 329:1584-1585.
This study addresses the need for efficient and scalable data warehouse solutions by integrating on-premises environments with private cloud-based infrastructures. Kubernetes was employed to dynamically generate secure virtual machines, offering users independent environments for data analysis. Performance testing demonstrated fast query speeds, with 240,000 records extracted from a 301 GB dataset in 12.4 seconds. Security measures, using a VPN connection between hospital networks and Google Cloud, allowed the safe use of Google's APIs. This scalable infrastructure can accommodate diverse analytical needs.
Additional Links: PMID-40776130
Citation:
@article {pmid40776130,
year = {2025},
author = {Kishimoto, K and Sugiyama, O and Iwao, T and Yakami, M and Nambu, M and Yutani, A and Fukuyama, K and Saito, K and Kuroda, T},
title = {Integrating Scalable Analytical Tools and Data Warehouses on Private Cloud.},
journal = {Studies in health technology and informatics},
volume = {329},
number = {},
pages = {1584-1585},
doi = {10.3233/SHTI251113},
pmid = {40776130},
issn = {1879-8365},
mesh = {*Cloud Computing ; *Data Warehousing/methods ; *Computer Security ; *Information Storage and Retrieval/methods ; *Electronic Health Records/organization & administration ; Systems Integration ; },
abstract = {This study addresses the need for efficient and scalable data warehouse solutions by integrating on-premises environments with private cloud-based infrastructures. Kubernetes was employed to dynamically generate secure virtual machines, offering users independent environments for data analysis. Performance testing demonstrated query speeds, with 240,000 records extracted from a 301GB dataset in 12.4 seconds. Security measures, using a VPN connection between hospital networks and Google Cloud, allowed the safe use of Google's APIs. This scalable infrastructure can accommodate diverse analytical needs.},
}
MeSH Terms:
*Cloud Computing
*Data Warehousing/methods
*Computer Security
*Information Storage and Retrieval/methods
*Electronic Health Records/organization & administration
Systems Integration
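As an illustrative aside (not from the paper), the per-user analysis environments described above can be approximated with the official Kubernetes Python client by launching one isolated pod per user. The image, namespace, and resource limits below are assumptions, not the authors' configuration.

from kubernetes import client, config

def launch_analysis_pod(user: str, image: str = "jupyter/datascience-notebook"):
    config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=f"analysis-{user}", labels={"owner": user}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="workspace",
                    image=image,
                    resources=client.V1ResourceRequirements(
                        limits={"cpu": "4", "memory": "16Gi"}
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="analytics", body=pod)

launch_analysis_pod("alice")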
RevDate: 2025-08-08
CmpDate: 2025-08-08
Building a Learning Health System-Focused Trusted Research Environment for Mental Health.
Studies in health technology and informatics, 329:174-178.
Trusted Research Environments (TREs) are increasingly used as platforms for secure health data research, but they can also be used for implementing research findings or for action-research (researchers supporting health professionals to solve problems with advanced data analytics). Most TREs have been designed to support analysis of well-structured and coded data, however, with much clinical data recorded as unstructured notes, especially in mental health care, there needs to be a greater variety of tools and data management services available for safe research that includes natural language processing and anonymisation of data sources. The Mental Health Research for Innovation Centre (M-RIC), co-hosted by the University of Liverpool and Mersey Care NHS Foundation Trust, has implemented a novel TRE design that incorporates modern data engineering concepts to improve how researchers access a wider variety of linked data and machine learning tools, to be able to both undertake research and then deploy these tools directly into mental health care.
Additional Links: PMID-40775842
Citation:
@article {pmid40775842,
year = {2025},
author = {Leeming, G and Hughes, J and Joyce, D and Buchan, I},
title = {Building a Learning Health System-Focused Trusted Research Environment for Mental Health.},
journal = {Studies in health technology and informatics},
volume = {329},
number = {},
pages = {174-178},
doi = {10.3233/SHTI250824},
pmid = {40775842},
issn = {1879-8365},
mesh = {Humans ; *Learning Health System/organization & administration ; Machine Learning ; *Mental Health Services/organization & administration ; *Mental Health ; Natural Language Processing ; },
abstract = {Trusted Research Environments (TREs) are increasingly used as platforms for secure health data research, but they can also be used for implementing research findings or for action-research (researchers supporting health professionals to solve problems with advanced data analytics). Most TREs have been designed to support analysis of well-structured and coded data, however, with much clinical data recorded as unstructured notes, especially in mental health care, there needs to be a greater variety of tools and data management services available for safe research that includes natural language processing and anonymisation of data sources. The Mental Health Research for Innovation Centre (M-RIC), co-hosted by the University of Liverpool and Mersey Care NHS Foundation Trust, has implemented a novel TRE design that incorporates modern data engineering concepts to improve how researchers access a wider variety of linked data and machine learning tools, to be able to both undertake research and then deploy these tools directly into mental health care.},
}
MeSH Terms:
Humans
*Learning Health System/organization & administration
Machine Learning
*Mental Health Services/organization & administration
*Mental Health
Natural Language Processing
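For readers unfamiliar with the anonymisation of unstructured notes mentioned in this entry, a toy de-identification pass using spaCy named-entity recognition is sketched below. Real TRE pipelines are far more rigorous; the model name and entity labels are assumptions, and this only illustrates the general pattern.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

REDACT = {"PERSON", "GPE", "DATE", "ORG"}

def redact(note: str) -> str:
    doc = nlp(note)
    out = note
    # Replace entities from the end of the string so earlier character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ in REDACT:
            out = out[:ent.start_char] + f"[{ent.label_}]" + out[ent.end_char:]
    return out

print(redact("John Smith was reviewed in Liverpool on 3 March 2024."))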
RevDate: 2025-08-16
Enhancing Gen3 for clinical trial time series analytics and data discovery: a data commons framework for NIH clinical trials.
Frontiers in digital health, 7:1570009.
This work presents a framework for enhancing Gen3, an open-source data commons platform, with temporal visualization capabilities for clinical trial research. We describe the technical implementation of cloud-native architecture and integrated visualization tools that enable standardized analytics for longitudinal clinical trial data while adhering to FAIR principles. The enhancement includes Kubernetes-based container orchestration, Kibana-based temporal analytics, and automated ETL pipelines for data harmonization. Technical validation demonstrates reliable handling of varied time-based data structures, while maintaining temporal precision and measurement context. The framework's implementation in NIH HEAL Initiative networks studying chronic pain and substance use disorders showcases its utility for real-time monitoring of longitudinal outcomes across multiple trials. This adaptation provides a model for research networks seeking to enhance their data commons capabilities while ensuring findable, accessible, interoperable, and reusable clinical trial data.
Additional Links: PMID-40771358
Citation:
@article {pmid40771358,
year = {2025},
author = {Adams, MCB and Griffin, C and Adams, H and Bryant, S and Hurley, RW and Topaloglu, U},
title = {Enhancing Gen3 for clinical trial time series analytics and data discovery: a data commons framework for NIH clinical trials.},
journal = {Frontiers in digital health},
volume = {7},
number = {},
pages = {1570009},
pmid = {40771358},
issn = {2673-253X},
support = {R24 DA055306/DA/NIDA NIH HHS/United States ; R24 DA058606/DA/NIDA NIH HHS/United States ; R25 DA061740/DA/NIDA NIH HHS/United States ; U24 DA057612/DA/NIDA NIH HHS/United States ; },
abstract = {This work presents a framework for enhancing Gen3, an open-source data commons platform, with temporal visualization capabilities for clinical trial research. We describe the technical implementation of cloud-native architecture and integrated visualization tools that enable standardized analytics for longitudinal clinical trial data while adhering to FAIR principles. The enhancement includes Kubernetes-based container orchestration, Kibana-based temporal analytics, and automated ETL pipelines for data harmonization. Technical validation demonstrates reliable handling of varied time-based data structures, while maintaining temporal precision and measurement context. The framework's implementation in NIH HEAL Initiative networks studying chronic pain and substance use disorders showcases its utility for real-time monitoring of longitudinal outcomes across multiple trials. This adaptation provides a model for research networks seeking to enhance their data commons capabilities while ensuring findable, accessible, interoperable, and reusable clinical trial data.},
}
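As a hedged illustration of the ETL-to-dashboard step described in this entry (not the project's actual pipeline), the sketch below flattens longitudinal trial records into per-observation documents and posts them to an Elasticsearch index of the kind Kibana visualizes. The endpoint URL, index name, and record schema are assumptions.

import json
import requests

ES_URL = "http://localhost:9200"      # hypothetical Elasticsearch endpoint
INDEX = "trial-observations"

def to_documents(subject):
    """Yield one time-stamped document per longitudinal measurement."""
    for visit in subject["visits"]:
        for name, value in visit["measures"].items():
            yield {
                "subject_id": subject["id"],
                "visit_date": visit["date"],   # ISO 8601 keeps time filtering straightforward
                "measure": name,
                "value": value,
            }

subject = {"id": "S001", "visits": [{"date": "2024-01-15", "measures": {"pain_score": 6}}]}
for doc in to_documents(subject):
    requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=10).raise_for_status()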
RevDate: 2025-08-07
CmpDate: 2025-08-04
Internet of things enabled deep learning monitoring system for realtime performance metrics and athlete feedback in college sports.
Scientific reports, 15(1):28405.
This study presents an Internet of Things (IoT)-enabled Deep Learning Monitoring (IoT-E-DLM) model for real-time Athletic Performance (AP) tracking and feedback in collegiate sports. The proposed work integrates advanced wearable sensor technologies with a hybrid neural network combining Temporal Convolutional Networks and Bidirectional Long Short-Term Memory (TCN + BiLSTM) with attention mechanisms. It is designed to overcome key challenges in processing heterogeneous, high-frequency sensor data and delivering low-latency, sport-specific feedback. The system deploys edge computing for real-time local processing and a cloud setup for high-complexity analytics, achieving a balance between responsiveness and accuracy. The system was extensively evaluated with 147 student-athletes across numerous sports, including track and field, basketball, soccer, and swimming, over 12 months at Shangqiu University. The proposed model achieved a prediction accuracy of 93.45% with an average processing latency of 12.34 ms, outperforming conventional and state-of-the-art approaches. The system also demonstrated efficient resource usage (CPU: 68.34%, GPU: 72.56%), high data capture reliability (98.37%), and precise temporal synchronization. These results confirm the model's effectiveness in enabling real-time performance monitoring and feedback delivery, establishing a robust groundwork for future developments in Artificial Intelligence (AI)-driven sports analytics.
Additional Links: PMID-40759726
Citation:
@article {pmid40759726,
year = {2025},
author = {Hu, Y and Li, Y and Cui, B and Su, H and Zhu, P},
title = {Internet of things enabled deep learning monitoring system for realtime performance metrics and athlete feedback in college sports.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {28405},
pmid = {40759726},
issn = {2045-2322},
mesh = {Humans ; *Deep Learning ; *Athletic Performance ; *Athletes ; *Internet of Things ; Universities ; Neural Networks, Computer ; Wearable Electronic Devices ; Male ; *Sports ; Feedback ; Young Adult ; },
abstract = {This study presents an Internet of Things (IoT)-enabled Deep Learning Monitoring (IoT-E-DLM) model for real-time Athletic Performance (AP) tracking and feedback in collegiate sports. The proposed work integrates advanced wearable sensor technologies with a hybrid neural network combining Temporal Convolutional Networks and Bidirectional Long Short-Term Memory (TCN + BiLSTM) with attention mechanisms. It is designed to overcome key challenges in processing heterogeneous, high-frequency sensor data and delivering low-latency, sport-specific feedback. The system deploys edge computing for real-time local processing and a cloud setup for high-complexity analytics, achieving a balance between responsiveness and accuracy. The system was extensively evaluated with 147 student-athletes across numerous sports, including track and field, basketball, soccer, and swimming, over 12 months at Shangqiu University. The proposed model achieved a prediction accuracy of 93.45% with an average processing latency of 12.34 ms, outperforming conventional and state-of-the-art approaches. The system also demonstrated efficient resource usage (CPU: 68.34%, GPU: 72.56%), high data capture reliability (98.37%), and precise temporal synchronization. These results confirm the model's effectiveness in enabling real-time performance monitoring and feedback delivery, establishing a robust groundwork for future developments in Artificial Intelligence (AI)-driven sports analytics.},
}
MeSH Terms:
Humans
*Deep Learning
*Athletic Performance
*Athletes
*Internet of Things
Universities
Neural Networks, Computer
Wearable Electronic Devices
Male
*Sports
Feedback
Young Adult
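To give a rough sense of the TCN + BiLSTM + attention hybrid named in this entry, the Keras sketch below builds a compact stand-in model. Layer sizes, window length, and the number of sensor channels are assumptions, not the authors' settings.

import tensorflow as tf
from tensorflow.keras import layers

WINDOW, CHANNELS = 128, 9   # e.g., 128 samples of 9-axis IMU data (assumed)

inputs = layers.Input(shape=(WINDOW, CHANNELS))
# Dilated causal convolutions approximate a small temporal convolutional network.
x = layers.Conv1D(64, 3, padding="causal", dilation_rate=1, activation="relu")(inputs)
x = layers.Conv1D(64, 3, padding="causal", dilation_rate=2, activation="relu")(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# Self-attention over the BiLSTM outputs.
x = layers.Attention()([x, x])
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1)(x)            # e.g., a single performance metric

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()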
RevDate: 2025-08-03
Q2C: A software for managing mass spectrometry facilities.
Journal of proteomics, 321:105511 pii:S1874-3919(25)00138-1 [Epub ahead of print].
We present Q2C, an open-source software designed to streamline mass spectrometer queue management and assess performance based on quality control metrics. Q2C provides a fast and user-friendly interface to visualize projects queues, manage analysis schedules and keep track of samples that were already processed. Our software includes analytical tools to ensure equipment calibration and provides comprehensive log documentation for machine maintenance, enhancing operational efficiency and reliability. Additionally, Q2C integrates with Google™ Cloud, allowing users to access and manage the software from different locations while keeping all data synchronized and seamlessly integrated across the system. For multi-user environments, Q2C implements a write-locking mechanism that checks for concurrent operations before saving data. When conflicts are detected, subsequent write requests are automatically queued to prevent data corruption, while the interface continuously refreshes to display the most current information from the cloud storage. Finally, Q2C, a demonstration video, and a user tutorial are freely available for academic use at https://github.com/diogobor/Q2C. Data are available from the ProteomeXchange consortium (identifier PXD055186). SIGNIFICANCE: Q2C addresses a critical gap in mass spectrometry facility management by unifying sample queue management with instrument performance monitoring. It ensures optimal instrument utilization, reduces turnaround times, and enhances data quality by dynamically prioritizing and routing samples based on analysis type and urgency. Unlike existing tools, Q2C integrates queue control and QC in a single platform, maximizing operational efficiency and reliability.
Additional Links: PMID-40752643
Citation:
@article {pmid40752643,
year = {2025},
author = {Lima, DB and Ruwolt, M and Santos, MDM and Pu, K and Liu, F and Carvalho, PC},
title = {Q2C: A software for managing mass spectrometry facilities.},
journal = {Journal of proteomics},
volume = {321},
number = {},
pages = {105511},
doi = {10.1016/j.jprot.2025.105511},
pmid = {40752643},
issn = {1876-7737},
abstract = {We present Q2C, an open-source software designed to streamline mass spectrometer queue management and assess performance based on quality control metrics. Q2C provides a fast and user-friendly interface to visualize projects queues, manage analysis schedules and keep track of samples that were already processed. Our software includes analytical tools to ensure equipment calibration and provides comprehensive log documentation for machine maintenance, enhancing operational efficiency and reliability. Additionally, Q2C integrates with Google™ Cloud, allowing users to access and manage the software from different locations while keeping all data synchronized and seamlessly integrated across the system. For multi-user environments, Q2C implements a write-locking mechanism that checks for concurrent operations before saving data. When conflicts are detected, subsequent write requests are automatically queued to prevent data corruption, while the interface continuously refreshes to display the most current information from the cloud storage. Finally, Q2C, a demonstration video, and a user tutorial are freely available for academic use at https://github.com/diogobor/Q2C. Data are available from the ProteomeXchange consortium (identifier PXD055186). SIGNIFICANCE: Q2C addresses a critical gap in mass spectrometry facility management by unifying sample queue management with instrument performance monitoring. It ensures optimal instrument utilization, reduces turnaround times, and enhances data quality by dynamically prioritizing and routing samples based on analysis type and urgency. Unlike existing tools, Q2C integrates queue control and QC in a single platform, maximizing operational efficiency and reliability.},
}
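One simple way to serialize concurrent saves, in the spirit of the write-locking and queuing behaviour this abstract describes, is sketched below; it is not Q2C's implementation and omits the cloud synchronization entirely.

import queue
import threading

write_queue: "queue.Queue[dict]" = queue.Queue()

def writer(save_fn):
    """Single background writer: drains queued saves one at a time."""
    while True:
        record = write_queue.get()
        try:
            save_fn(record)          # e.g., upload to cloud storage
        finally:
            write_queue.task_done()

def request_save(record: dict):
    # Callers never write directly; conflicting requests simply wait their turn in the queue.
    write_queue.put(record)

threading.Thread(target=writer, args=(print,), daemon=True).start()
request_save({"queue_position": 1, "sample": "QC_run_42"})
write_queue.join()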
RevDate: 2025-08-06
Multihop cost awareness task migration with networking load balance technology for vehicular edge computing.
Scientific reports, 15(1):28126.
6G technology aims to revolutionize the mobile communication industry by revamping the role of vehicular wireless connections. Its network architecture will evolve towards multi-access edge computing (MEC) distributing cloud applications to support inter-vehicle applications such as cooperative driving. As the number of tasks offloaded to MEC servers increases, local MEC servers associated with vehicles may encounter insufficient computing resources for task offloading. This issue can be mitigated if neighboring servers can collaboratively provide computing capabilities to the local server for task migration. This paper investigates dynamic resource allocation and task migration mechanisms for cooperative vehicular edge computing (VEC) servers to expand computing capabilities of local server. Then, the multihop cost awareness task migration (MCATM) mechanism is proposed in this paper, which ensures that tasks can be migrated to the most suitable VEC server when the local server is overloaded. The MCATM mechanism begins by addressing whether the nearest VEC server can handle the computational tasks. We subsequently address the issue of duplicate selection to choose an appropriate VEC server for task migration among n-hop neighboring servers. Next, we focus on finding efficient transmission paths between the local and destination VEC servers to facilitate seamless task migration. The MCATM includes (i) the weight variable analytic hierarchy process (WVAHP) to select a suitable server among multihop cooperative VEC servers for task migration, and (ii) the pre-allocation with cost balance (PACB) path selection algorithm. The simulation results demonstrate that the MCATM enables the migration of computational tasks to appropriate neighboring VEC servers with the aim of increasing the task migration success rate while balancing network traffic and computing server capabilities.
Additional Links: PMID-40750821
Citation:
@article {pmid40750821,
year = {2025},
author = {Lin, SY and Wang, JQ and Peng, SM and Yang, MH and Jia, S},
title = {Multihop cost awareness task migration with networking load balance technology for vehicular edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {28126},
pmid = {40750821},
issn = {2045-2322},
support = {KJRC2023029//Weifang University of Science and Technology/ ; 2023CXGC010111//Natural Science Foundation of Shandong Province/ ; },
abstract = {6G technology aims to revolutionize the mobile communication industry by revamping the role of vehicular wireless connections. Its network architecture will evolve towards multi-access edge computing (MEC) distributing cloud applications to support inter-vehicle applications such as cooperative driving. As the number of tasks offloaded to MEC servers increases, local MEC servers associated with vehicles may encounter insufficient computing resources for task offloading. This issue can be mitigated if neighboring servers can collaboratively provide computing capabilities to the local server for task migration. This paper investigates dynamic resource allocation and task migration mechanisms for cooperative vehicular edge computing (VEC) servers to expand computing capabilities of local server. Then, the multihop cost awareness task migration (MCATM) mechanism is proposed in this paper, which ensures that tasks can be migrated to the most suitable VEC server when the local server is overloaded. The MCATM mechanism begins by addressing whether the nearest VEC server can handle the computational tasks. We subsequently address the issue of duplicate selection to choose an appropriate VEC server for task migration among n-hop neighboring servers. Next, we focus on finding efficient transmission paths between the local and destination VEC servers to facilitate seamless task migration. The MCATM includes (i) the weight variable analytic hierarchy process (WVAHP) to select a suitable server among multihop cooperative VEC servers for task migration, and (ii) the pre-allocation with cost balance (PACB) path selection algorithm. The simulation results demonstrate that the MCATM enables the migration of computational tasks to appropriate neighboring VEC servers with the aim of increasing the task migration success rate while balancing network traffic and computing server capabilities.},
}
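As a simplified stand-in for the WVAHP server-selection step described above (not the authors' algorithm), the sketch below scores neighbouring edge servers with a weighted combination of hop count, residual CPU, and link load and migrates to the best candidate. Weights, criteria, and the toy candidate list are assumptions.

candidates = [
    {"name": "vec-1", "hops": 1, "free_cpu": 0.20, "link_load": 0.70},
    {"name": "vec-2", "hops": 2, "free_cpu": 0.65, "link_load": 0.30},
    {"name": "vec-3", "hops": 3, "free_cpu": 0.90, "link_load": 0.20},
]
weights = {"hops": -0.3, "free_cpu": 0.5, "link_load": -0.2}  # negative weight: lower is better

def score(server):
    # Normalise hop count over the candidate set before weighting.
    max_hops = max(c["hops"] for c in candidates)
    return (weights["hops"] * server["hops"] / max_hops
            + weights["free_cpu"] * server["free_cpu"]
            + weights["link_load"] * server["link_load"])

best = max(candidates, key=score)
print("migrate task to", best["name"])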
RevDate: 2025-08-03
Accelerating structural dynamics through integrated research informatics.
Structural dynamics (Melville, N.Y.), 12(4):041101.
Structural dynamics research requires robust computational methods, reliable software, accessible data, and scalable infrastructure. Managing these components is complex and directly affects reproducibility and efficiency. The SBGrid Consortium addresses these challenges through a three-pillar approach that encompasses Software, Data, and Infrastructure, designed to foster a consistent and rigorous computational environment. At the core is the SBGrid software collection (>620 curated applications), supported by the Capsules Software Execution Environment, which ensures conflict-free, version-controlled execution. The SBGrid Data Bank supports open science by enabling the publication of primary experimental data. SBCloud, a fully managed cloud computing platform, provides scalable, on-demand infrastructure optimized for structural biology workloads. Together, they reduce computational friction, enabling researchers to focus on interpreting time-resolved data, modeling structural transitions, and managing large simulation datasets for advancing structural dynamics. This integrated platform delivers a reliable and accessible foundation for computationally intensive research across diverse scientific fields sharing common computational methods.
Additional Links: PMID-40747000
Citation:
@article {pmid40747000,
year = {2025},
author = {Eisenbraun, B and Ho, A and Meyer, PA and Sliz, P},
title = {Accelerating structural dynamics through integrated research informatics.},
journal = {Structural dynamics (Melville, N.Y.)},
volume = {12},
number = {4},
pages = {041101},
pmid = {40747000},
issn = {2329-7778},
abstract = {Structural dynamics research requires robust computational methods, reliable software, accessible data, and scalable infrastructure. Managing these components is complex and directly affects reproducibility and efficiency. The SBGrid Consortium addresses these challenges through a three-pillar approach that encompasses Software, Data, and Infrastructure, designed to foster a consistent and rigorous computational environment. At the core is the SBGrid software collection (>620 curated applications), supported by the Capsules Software Execution Environment, which ensures conflict-free, version-controlled execution. The SBGrid Data Bank supports open science by enabling the publication of primary experimental data. SBCloud, a fully managed cloud computing platform, provides scalable, on-demand infrastructure optimized for structural biology workloads. Together, they reduce computational friction, enabling researchers to focus on interpreting time-resolved data, modeling structural transitions, and managing large simulation datasets for advancing structural dynamics. This integrated platform delivers a reliable and accessible foundation for computationally intensive research across diverse scientific fields sharing common computational methods.},
}
RevDate: 2025-08-03
Whole Slide Imaging (WSI) in Pathology: Emerging Trends and Future Applications in Clinical Diagnostics, Medical Education, and Pathology.
Iranian journal of pathology, 20(3):257-265.
BACKGROUND & OBJECTIVE: Whole Slide Imaging (WSI) has emerged as a transformative technology in the fields of clinical diagnostics, medical education, and pathology research. By digitizing entire glass slides into high-resolution images, WSI enables advanced remote collaboration, the integration of artificial intelligence (AI) into diagnostic workflows, and facilitates large-scale data sharing for multi-center research.
METHODS: This paper explores the growing applications of WSI, focusing on its impact on diagnostics through telepathology, AI-powered diagnoses and precision medicine, and educational advancements. In this report, we will highlight the profound impact of WSI and address the challenges that must be overcome to enable its broader adoption.
RESULTS & CONCLUSION: Despite its many advantages, challenges such as infrastructure limitations and regulatory issues need to be addressed for broader adoption. The future of WSI lies in its ability to integrate with cloud-based platforms and big data analytics, continuing to drive the digital transformation of pathology.
Additional Links: PMID-40746923
Citation:
@article {pmid40746923,
year = {2025},
author = {Masjoodi, S and Anbardar, MH and Shokripour, M and Omidifar, N},
title = {Whole Slide Imaging (WSI) in Pathology: Emerging Trends and Future Applications in Clinical Diagnostics, Medical Education, and Pathology.},
journal = {Iranian journal of pathology},
volume = {20},
number = {3},
pages = {257-265},
pmid = {40746923},
issn = {1735-5303},
abstract = {BACKGROUND & OBJECTIVE: Whole Slide Imaging (WSI) has emerged as a transformative technology in the fields of clinical diagnostics, medical education, and pathology research. By digitizing entire glass slides into high-resolution images, WSI enables advanced remote collaboration, the integration of artificial intelligence (AI) into diagnostic workflows, and facilitates large-scale data sharing for multi-center research.
METHODS: This paper explores the growing applications of WSI, focusing on its impact on diagnostics through telepathology, AI-powered diagnoses and precision medicine, and educational advancements. In this report, we will highlight the profound impact of WSI and address the challenges that must be overcome to enable its broader adoption.
RESULTS & CONCLUSION: Despite its many advantages, challenges such as infrastructure limitations and regulatory issues need to be addressed for broader adoption. The future of WSI lies in its ability to integrate with cloud-based platforms and big data analytics, continuing to drive the digital transformation of pathology.},
}
RevDate: 2025-08-18
CmpDate: 2025-07-31
Automating Colon Polyp Classification in Digital Pathology by Evaluation of a "Machine Learning as a Service" AI Model: Algorithm Development and Validation Study.
JMIR formative research, 9:e67457.
BACKGROUND: Artificial intelligence (AI) models are increasingly being developed to improve the efficiency of pathological diagnoses. Rapid technological advancements are leading to more widespread availability of AI models that can be used by domain-specific experts (ie, pathologists and medical imaging professionals). This study presents an innovative AI model for the classification of colon polyps, developed using AutoML algorithms that are readily available from cloud-based machine learning platforms. Our aim was to explore if such AutoML algorithms could generate robust machine learning models that are directly applicable to the field of digital pathology.
OBJECTIVE: The objective of this study was to evaluate the effectiveness of AutoML algorithms in generating robust machine learning models for the classification of colon polyps and to assess their potential applicability in digital pathology.
METHODS: Whole-slide images from both public and institutional databases were used to develop a training set for 3 classifications of common entities found in colon polyps: hyperplastic polyps, tubular adenomas, and normal colon. The AI model was developed using an AutoML algorithm from Google's VertexAI platform. A test subset of the data was withheld to assess model accuracy, sensitivity, and specificity.
RESULTS: The AI model displayed a high accuracy rate, identifying tubular adenoma and hyperplastic polyps with 100% success and normal colon with 97% success. Sensitivity and specificity error rates were very low.
CONCLUSIONS: This study demonstrates how accessible AutoML algorithms can readily be used in digital pathology to develop diagnostic AI models using whole-slide images. Such models could be used by pathologists to improve diagnostic efficiency.
Additional Links: PMID-40743515
Citation:
@article {pmid40743515,
year = {2025},
author = {Beyer, D and Delancey, E and McLeod, L},
title = {Automating Colon Polyp Classification in Digital Pathology by Evaluation of a "Machine Learning as a Service" AI Model: Algorithm Development and Validation Study.},
journal = {JMIR formative research},
volume = {9},
number = {},
pages = {e67457},
pmid = {40743515},
issn = {2561-326X},
mesh = {Humans ; *Colonic Polyps/classification/pathology/diagnosis ; *Machine Learning ; *Algorithms ; Artificial Intelligence ; Sensitivity and Specificity ; },
abstract = {BACKGROUND: Artificial intelligence (AI) models are increasingly being developed to improve the efficiency of pathological diagnoses. Rapid technological advancements are leading to more widespread availability of AI models that can be used by domain-specific experts (ie, pathologists and medical imaging professionals). This study presents an innovative AI model for the classification of colon polyps, developed using AutoML algorithms that are readily available from cloud-based machine learning platforms. Our aim was to explore if such AutoML algorithms could generate robust machine learning models that are directly applicable to the field of digital pathology.
OBJECTIVE: The objective of this study was to evaluate the effectiveness of AutoML algorithms in generating robust machine learning models for the classification of colon polyps and to assess their potential applicability in digital pathology.
METHODS: Whole-slide images from both public and institutional databases were used to develop a training set for 3 classifications of common entities found in colon polyps: hyperplastic polyps, tubular adenomas, and normal colon. The AI model was developed using an AutoML algorithm from Google's VertexAI platform. A test subset of the data was withheld to assess model accuracy, sensitivity, and specificity.
RESULTS: The AI model displayed a high accuracy rate, identifying tubular adenoma and hyperplastic polyps with 100% success and normal colon with 97% success. Sensitivity and specificity error rates were very low.
CONCLUSIONS: This study demonstrates how accessible AutoML algorithms can readily be used in digital pathology to develop diagnostic AI models using whole-slide images. Such models could be used by pathologists to improve diagnostic efficiency.},
}
MeSH Terms:
Humans
*Colonic Polyps/classification/pathology/diagnosis
*Machine Learning
*Algorithms
Artificial Intelligence
Sensitivity and Specificity
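For readers curious how data reach a cloud AutoML image classifier like the one in this study, the sketch below writes an import manifest with an explicit 8:1:1 split. The CSV layout (ML_USE, gs:// URI, label) follows Vertex AI's documented image-classification import format at the time of writing, and the bucket paths and labels are hypothetical; verify against current documentation before use.

import csv
import random

images = [(f"gs://my-bucket/polyps/img_{i:03d}.png",
           random.choice(["hyperplastic", "tubular_adenoma", "normal"]))
          for i in range(400)]          # hypothetical GCS paths and labels
random.shuffle(images)

def split_tag(i, n):
    if i < 0.8 * n:
        return "TRAINING"
    if i < 0.9 * n:
        return "VALIDATION"
    return "TEST"

with open("import_manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i, (uri, label) in enumerate(images):
        writer.writerow([split_tag(i, len(images)), uri, label])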
RevDate: 2025-08-02
Breaking barriers: broadening neuroscience education via cloud platforms and course-based undergraduate research.
Frontiers in neuroinformatics, 19:1608900.
This study demonstrates the effectiveness of integrating cloud computing platforms with Course-based Undergraduate Research Experiences (CUREs) to broaden access to neuroscience education. Over four consecutive spring semesters (2021-2024), a total of 42 undergraduate students at Lawrence Technological University participated in computational neuroscience CUREs using brainlife.io, a cloud-computing platform. Students conducted anatomical and functional brain imaging analyses on openly available datasets, testing original hypotheses about brain structure variations. The program evolved from initial data processing to hypothesis-driven research exploring the influence of age, gender, and pathology on brain structures. By combining open science and big data within a user-friendly cloud environment, the CURE model provided hands-on, problem-based learning to students with limited prior knowledge. This approach addressed key limitations of traditional undergraduate research experiences, including scalability, early exposure, and inclusivity. Students consistently worked with MRI datasets, focusing on volumetric analysis of brain structures, and developed scientific communication skills by presenting findings at annual research days. The success of this program demonstrates its potential to democratize neuroscience education, enabling advanced research without extensive laboratory facilities or prior experience, and promoting original undergraduate research using real-world datasets.
Additional Links: PMID-40740546
Citation:
@article {pmid40740546,
year = {2025},
author = {Delogu, F and Aspinall, C and Ray, K and Heinsfeld, AS and Victory, C and Pestilli, F},
title = {Breaking barriers: broadening neuroscience education via cloud platforms and course-based undergraduate research.},
journal = {Frontiers in neuroinformatics},
volume = {19},
number = {},
pages = {1608900},
pmid = {40740546},
issn = {1662-5196},
abstract = {This study demonstrates the effectiveness of integrating cloud computing platforms with Course-based Undergraduate Research Experiences (CUREs) to broaden access to neuroscience education. Over four consecutive spring semesters (2021-2024), a total of 42 undergraduate students at Lawrence Technological University participated in computational neuroscience CUREs using brainlife.io, a cloud-computing platform. Students conducted anatomical and functional brain imaging analyses on openly available datasets, testing original hypotheses about brain structure variations. The program evolved from initial data processing to hypothesis-driven research exploring the influence of age, gender, and pathology on brain structures. By combining open science and big data within a user-friendly cloud environment, the CURE model provided hands-on, problem-based learning to students with limited prior knowledge. This approach addressed key limitations of traditional undergraduate research experiences, including scalability, early exposure, and inclusivity. Students consistently worked with MRI datasets, focusing on volumetric analysis of brain structures, and developed scientific communication skills by presenting findings at annual research days. The success of this program demonstrates its potential to democratize neuroscience education, enabling advanced research without extensive laboratory facilities or prior experience, and promoting original undergraduate research using real-world datasets.},
}
RevDate: 2025-08-01
Indoor Localization Using Multi-Bluetooth Beacon Deployment in a Sparse Edge Computing Environment.
Digital twins and applications, 2(1):.
Bluetooth low energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pi devices, has been released under an open-source license, enabling broader application and further research.
Additional Links: PMID-40735132
Citation:
@article {pmid40735132,
year = {2025},
author = {Saghafi, S and Kiarashi, Y and Rodriguez, AD and Levey, AI and Kwon, H and Clifford, GD},
title = {Indoor Localization Using Multi-Bluetooth Beacon Deployment in a Sparse Edge Computing Environment.},
journal = {Digital twins and applications},
volume = {2},
number = {1},
pages = {},
pmid = {40735132},
issn = {2995-2182},
support = {R21 DC021029/DC/NIDCD NIH HHS/United States ; },
abstract = {Bluetooth low energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pi devices, has been released under an open-source license, enabling broader application and further research.},
}
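To illustrate the multi-beacon idea in this entry, here is a minimal Python sketch (not the released framework): RSSI is converted to distance with a log-distance path-loss model, and position is estimated from three beacons by linearised least squares. The reference power and path-loss exponent are assumed calibration values.

import numpy as np

TX_POWER = -59.0   # RSSI at 1 m (assumed calibration value)
N_EXP = 2.0        # path-loss exponent (free space ~2; indoors often 2-4)

def rssi_to_distance(rssi):
    return 10 ** ((TX_POWER - rssi) / (10 * N_EXP))

def trilaterate(beacons, distances):
    """Linearised least-squares position fix from beacon coordinates and ranged distances."""
    (x1, y1), d1 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi = [-65.0, -72.0, -70.0]
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssi]))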
RevDate: 2025-08-02
CmpDate: 2025-07-30
IoMT Architecture for Fully Automated Point-of-Care Molecular Diagnostic Device.
Sensors (Basel, Switzerland), 25(14):.
The Internet of Medical Things (IoMT) is revolutionizing healthcare by integrating smart diagnostic devices with cloud computing and real-time data analytics. The emergence of infectious diseases, including COVID-19, underscores the need for rapid and decentralized diagnostics to facilitate early intervention. Traditional centralized laboratory testing introduces delays, limiting timely medical responses. While point-of-care molecular diagnostic (POC-MD) systems offer an alternative, challenges remain in cost, accessibility, and network inefficiencies. This study proposes an IoMT-based architecture for fully automated POC-MD devices, leveraging WebSockets for optimized communication, enhancing microfluidic cartridge efficiency, and integrating a hardware-based emulator for real-time validation. The system incorporates DNA extraction and real-time polymerase chain reaction functionalities into modular, networked components, improving flexibility and scalability. Although the system itself has not yet undergone clinical validation, it builds upon the core cartridge and detection architecture of a previously validated cartridge-based platform for Chlamydia trachomatis and Neisseria gonorrhoeae (CT/NG). These pathogens were selected due to their global prevalence, high asymptomatic transmission rates, and clinical importance in reproductive health. In a previous clinical study involving 510 patient specimens, the system demonstrated high concordance with a commercial assay with limits of detection below 10 copies/μL, supporting the feasibility of this architecture for point-of-care molecular diagnostics. By addressing existing limitations, this system establishes a new standard for next-generation diagnostics, ensuring rapid, reliable, and accessible disease detection.
Additional Links: PMID-40732552
Citation:
@article {pmid40732552,
year = {2025},
author = {Kim, MG and Kil, BH and Ryu, MH and Kim, JD},
title = {IoMT Architecture for Fully Automated Point-of-Care Molecular Diagnostic Device.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {14},
pages = {},
pmid = {40732552},
issn = {1424-8220},
mesh = {Humans ; *COVID-19/diagnosis ; *Point-of-Care Systems ; SARS-CoV-2/isolation & purification ; Chlamydia trachomatis/genetics/isolation & purification ; *Molecular Diagnostic Techniques/instrumentation/methods ; *Internet of Things ; Neisseria gonorrhoeae/genetics/isolation & purification ; Point-of-Care Testing ; Cloud Computing ; },
abstract = {The Internet of Medical Things (IoMT) is revolutionizing healthcare by integrating smart diagnostic devices with cloud computing and real-time data analytics. The emergence of infectious diseases, including COVID-19, underscores the need for rapid and decentralized diagnostics to facilitate early intervention. Traditional centralized laboratory testing introduces delays, limiting timely medical responses. While point-of-care molecular diagnostic (POC-MD) systems offer an alternative, challenges remain in cost, accessibility, and network inefficiencies. This study proposes an IoMT-based architecture for fully automated POC-MD devices, leveraging WebSockets for optimized communication, enhancing microfluidic cartridge efficiency, and integrating a hardware-based emulator for real-time validation. The system incorporates DNA extraction and real-time polymerase chain reaction functionalities into modular, networked components, improving flexibility and scalability. Although the system itself has not yet undergone clinical validation, it builds upon the core cartridge and detection architecture of a previously validated cartridge-based platform for Chlamydia trachomatis and Neisseria gonorrhoeae (CT/NG). These pathogens were selected due to their global prevalence, high asymptomatic transmission rates, and clinical importance in reproductive health. In a previous clinical study involving 510 patient specimens, the system demonstrated high concordance with a commercial assay with limits of detection below 10 copies/μL, supporting the feasibility of this architecture for point-of-care molecular diagnostics. By addressing existing limitations, this system establishes a new standard for next-generation diagnostics, ensuring rapid, reliable, and accessible disease detection.},
}
MeSH Terms:
Humans
*COVID-19/diagnosis
*Point-of-Care Systems
SARS-CoV-2/isolation & purification
Chlamydia trachomatis/genetics/isolation & purification
*Molecular Diagnostic Techniques/instrumentation/methods
*Internet of Things
Neisseria gonorrhoeae/genetics/isolation & purification
Point-of-Care Testing
Cloud Computing
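The WebSocket-based communication highlighted in this entry can be illustrated with a minimal client exchange using the Python websockets library; the URI and message schema below are assumptions, not the device's actual protocol.

import asyncio
import json

import websockets  # pip install websockets

async def report_cycle(uri="ws://controller.local:8765/pcr"):
    async with websockets.connect(uri) as ws:
        # Push one real-time PCR fluorescence reading and wait for the controller's acknowledgement.
        await ws.send(json.dumps({"module": "pcr", "cycle": 12, "fluorescence": 1532.4}))
        ack = json.loads(await ws.recv())
        print("controller response:", ack)

asyncio.run(report_cycle())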
RevDate: 2025-08-02
DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data.
Sensors (Basel, Switzerland), 25(14):.
This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity in accordance with terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it highly adaptable to the growing trend of large-scale 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s using the original FPS method to approximately 71.64 s at a 12.5% sampling rate, achieving an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices, which makes DFPS an effective, lightweight tool for providing high-quality input data for subsequent algorithms, including point cloud registration and semantic segmentation.
Additional Links: PMID-40732410
Citation:
@article {pmid40732410,
year = {2025},
author = {Dong, J and Tian, M and Yu, J and Li, G and Wang, Y and Su, Y},
title = {DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {14},
pages = {},
pmid = {40732410},
issn = {1424-8220},
support = {No. 42106180//National Natural Science Foundation of China/ ; },
abstract = {This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity in accordance with terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it highly adaptable to the growing trend of large-scale 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s using the original FPS method to approximately 71.64 s at a 12.5% sampling rate, achieving an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices, which makes DFPS an effective, lightweight tool for providing high-quality input data for subsequent algorithms, including point cloud registration and semantic segmentation.},
}
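DFPS as described above is adaptive and multi-level; as a much simpler illustration of the underlying grid-partitioning idea (one representative point per occupied cell), here is a plain single-level voxel-grid downsample in NumPy. It is not the authors' algorithm.

import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) array; returns one point per occupied voxel."""
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Map each unique voxel to the index of the first point that landed in it.
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points[np.sort(keep)]

pts = np.random.rand(1_000_000, 3) * 100.0
print(voxel_downsample(pts, voxel_size=2.5).shape)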
RevDate: 2025-08-01
CmpDate: 2025-07-30
High-resolution phenomics dataset collected on a field-grown, EMS-mutagenized sorghum population evaluated in hot, arid conditions.
BMC research notes, 18(1):332.
OBJECTIVES: The University of Arizona Field Scanner (FS) is capable of generating massive amounts of data from a variety of instruments at high spatial and temporal resolution. The accompanying field infrastructure beneath the system offers capacity for controlled irrigation regimes in a hot, arid environment. Approximately 194 terabytes of raw and processed phenotypic image data were generated over two growing seasons (2020 and 2022) on a population of 434 sequence-indexed, EMS-mutagenized sorghum lines in the genetic background BTx623; the population was grown under well-watered and water-limited conditions. Collectively, these data enable links between genotype and dynamic, drought-responsive phenotypes, which can accelerate crop improvement efforts. However, analysis of these data can be challenging for researchers without background knowledge of the system and preliminary processing.
DATA DESCRIPTION: This dataset contains formatted tabular data generated from sensing system outputs suitable for a wide range of end-users and includes plant-level bounding areas, temperatures, and point cloud characteristics, as well as plot-level photosynthetic parameters and accompanying weather data. The dataset includes approximately 422 megabytes of tabular data totaling 1,903,412 unique unfiltered rows of FS data, 526,917 cleaned rows of FS data, and 285 rows of weather data from the two field seasons.
Additional Links: PMID-40731363
Citation:
@article {pmid40731363,
year = {2025},
author = {Demieville, J and Dilkes, B and Eveland, AL and Pauli, D},
title = {High-resolution phenomics dataset collected on a field-grown, EMS-mutagenized sorghum population evaluated in hot, arid conditions.},
journal = {BMC research notes},
volume = {18},
number = {1},
pages = {332},
pmid = {40731363},
issn = {1756-0500},
support = {DE-AR0001101//United States Department of Energy Advanced Projects Research Agency-Energy/ ; DE-AR0001101//United States Department of Energy Advanced Projects Research Agency-Energy/ ; DE-SC0020401//United States Department of Energy Biological and Environmental Research Program/ ; DE-SC0020401//United States Department of Energy Biological and Environmental Research Program/ ; DE-SC0020401//United States Department of Energy Biological and Environmental Research Program/ ; DE-SC0020401//United States Department of Energy Biological and Environmental Research Program/ ; 2021-51181-35903//United States Department of Agriculture National Institute of Food and Agriculture Specialty Crops Research Initiative/ ; 2021-51181-35903//United States Department of Agriculture National Institute of Food and Agriculture Specialty Crops Research Initiative/ ; DBI 2417511//National Science Foundation Division of Biological Infrastructure/ ; DBI 2417511//National Science Foundation Division of Biological Infrastructure/ ; IOS 2102120//National Science Foundation Division of Integrative Organismal Systems/ ; },
mesh = {*Sorghum/genetics/growth & development ; *Phenomics/methods ; Phenotype ; Droughts ; Arizona ; Seasons ; Genotype ; Hot Temperature ; },
abstract = {OBJECTIVES: The University of Arizona Field Scanner (FS) is capable of generating massive amounts of data from a variety of instruments at high spatial and temporal resolution. The accompanying field infrastructure beneath the system offers capacity for controlled irrigation regimes in a hot, arid environment. Approximately 194 terabytes of raw and processed phenotypic image data were generated over two growing seasons (2020 and 2022) on a population of 434 sequence-indexed, EMS-mutagenized sorghum lines in the genetic background BTx623; the population was grown under well-watered and water-limited conditions. Collectively, these data enable links between genotype and dynamic, drought-responsive phenotypes, which can accelerate crop improvement efforts. However, analysis of these data can be challenging for researchers without background knowledge of the system and preliminary processing.
DATA DESCRIPTION: This dataset contains formatted tabular data generated from sensing system outputs suitable for a wide range of end-users and includes plant-level bounding areas, temperatures, and point cloud characteristics, as well as plot-level photosynthetic parameters and accompanying weather data. The dataset includes approximately 422 megabytes of tabular data totaling 1,903,412 unique unfiltered rows of FS data, 526,917 cleaned rows of FS data, and 285 rows of weather data from the two field seasons.},
}
MeSH Terms:
*Sorghum/genetics/growth & development
*Phenomics/methods
Phenotype
Droughts
Arizona
Seasons
Genotype
Hot Temperature
RevDate: 2025-07-31
Machine Learning-based Complementary Artificial Intelligence Model for Dermoscopic Diagnosis of Pigmented Skin Lesions in Resource-limited Settings.
Plastic and reconstructive surgery. Global open, 13(7):e7004.
BACKGROUND: Rapid advancements in big data and machine learning have expanded their application in healthcare, introducing sophisticated diagnostics to settings with limited medical resources. Notably, free artificial intelligence (AI) services that require no programming skills are now accessible to healthcare professionals, allowing those in underresourced areas to leverage AI technology. This study aimed to evaluate the potential of these accessible services for diagnosing pigmented skin tumors, underscoring the democratization of advanced medical technologies.
METHODS: In this experimental diagnostic study, we collected 400 dermoscopic images (100 per tumor type) labeled through supervised learning from pathologically confirmed cases. The images were split into training, validation, and testing datasets (8:1:1 ratio) and uploaded to Vertex AI for model training. Supervised learning was performed using the Google Cloud Platform, Vertex AI, based on pathological diagnoses. The model's performance was assessed using confusion matrices and precision-recall curves.
RESULTS: The AI model achieved an average recall rate of 86.3%, precision rate of 87.3%, accuracy of 86.3%, and F1 score of 0.87. Misclassification rates were less than 20% for each category. Accuracy was 80% for malignant melanoma and 100% for both basal cell carcinoma and seborrheic keratosis. Testing on separate cases yielded an accuracy of approximately 70%.
CONCLUSIONS: The metrics obtained in this study suggest that the model can reliably assist in the diagnostic process, even for practitioners without prior AI expertise. The study demonstrated that free AI tools can accurately classify pigmented skin lesions with minimal expertise, potentially providing high-precision diagnostic support in settings lacking dermatologists.
Additional Links: PMID-40727626
Citation:
@article {pmid40727626,
year = {2025},
author = {Kaneko, R and Akaishi, S and Ogawa, R and Kuwahara, H},
title = {Machine Learning-based Complementary Artificial Intelligence Model for Dermoscopic Diagnosis of Pigmented Skin Lesions in Resource-limited Settings.},
journal = {Plastic and reconstructive surgery. Global open},
volume = {13},
number = {7},
pages = {e7004},
pmid = {40727626},
issn = {2169-7574},
abstract = {BACKGROUND: Rapid advancements in big data and machine learning have expanded their application in healthcare, introducing sophisticated diagnostics to settings with limited medical resources. Notably, free artificial intelligence (AI) services that require no programming skills are now accessible to healthcare professionals, allowing those in underresourced areas to leverage AI technology. This study aimed to evaluate the potential of these accessible services for diagnosing pigmented skin tumors, underscoring the democratization of advanced medical technologies.
METHODS: In this experimental diagnostic study, we collected 400 dermoscopic images (100 per tumor type) labeled through supervised learning from pathologically confirmed cases. The images were split into training, validation, and testing datasets (8:1:1 ratio) and uploaded to Vertex AI for model training. Supervised learning was performed using the Google Cloud Platform, Vertex AI, based on pathological diagnoses. The model's performance was assessed using confusion matrices and precision-recall curves.
RESULTS: The AI model achieved an average recall rate of 86.3%, precision rate of 87.3%, accuracy of 86.3%, and F1 score of 0.87. Misclassification rates were less than 20% for each category. Accuracy was 80% for malignant melanoma and 100% for both basal cell carcinoma and seborrheic keratosis. Testing on separate cases yielded an accuracy of approximately 70%.
CONCLUSIONS: The metrics obtained in this study suggest that the model can reliably assist in the diagnostic process, even for practitioners without prior AI expertise. The study demonstrated that free AI tools can accurately classify pigmented skin lesions with minimal expertise, potentially providing high-precision diagnostic support in settings lacking dermatologists.},
}
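The metrics reported in this study (per-class precision, recall, and F1) follow directly from a confusion matrix; the short sketch below shows the arithmetic. The class names and counts are made up for illustration and are not the study's data.

import numpy as np

classes = ["melanoma", "bcc", "seb_keratosis", "nevus"]   # hypothetical class set
# rows = true class, columns = predicted class (hypothetical counts)
cm = np.array([
    [8, 1, 0, 1],
    [0, 10, 0, 0],
    [0, 0, 10, 0],
    [1, 0, 1, 8],
])

for i, name in enumerate(classes):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()
    recall = tp / cm[i, :].sum()
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")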
RevDate: 2025-07-31
Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing.
Entropy (Basel, Switzerland), 27(7):.
Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) within the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and lattice dimension of n = 128, the computation times for core algorithms such as TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency.
Additional Links: PMID-40724469
@article {pmid40724469,
year = {2025},
author = {Zhao, M and Chen, H},
title = {Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing.},
journal = {Entropy (Basel, Switzerland)},
volume = {27},
number = {7},
pages = {},
pmid = {40724469},
issn = {1099-4300},
support = {3282024048//Fundamental Research Funds for the Central Universities/ ; },
abstract = {Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially designed leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) assumptions within the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and lattice dimension of n = 128, the computation times for core algorithms such as TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency.},
}
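The scheme above is lattice-based, but the general shape of a provable-data-possession protocol (TagGen / GenProof / CheckProof) can be conveyed with a deliberately simplified, non-lattice sketch in which the verifier keeps one HMAC tag per block. This toy version illustrates only the challenge-response flow; it has none of the paper's identity-based keys, designated-verifier restriction, or post-quantum security.

# Toy PDP flow (illustration only; NOT the paper's lattice construction).
# The data owner tags fixed-size blocks, the cloud answers a random
# challenge over a subset of blocks, and the verifier checks the proof
# without holding the full file. A real PDP returns a short aggregated
# proof (e.g. via homomorphic tags) instead of the raw blocks.
import hashlib, hmac, os, random

BLOCK = 4096

def tag_gen(key: bytes, data: bytes):
    """Split data into blocks and compute one HMAC tag per block."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, str(i).encode() + b"|" + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]
    return blocks, tags

def gen_proof(blocks, challenge):
    """Cloud side: return the challenged blocks as the 'proof'."""
    return {i: blocks[i] for i in challenge}

def check_proof(key: bytes, tags, proof):
    """Verifier side: recompute the tags for the returned blocks."""
    return all(
        hmac.compare_digest(
            hmac.new(key, str(i).encode() + b"|" + blk, hashlib.sha256).digest(),
            tags[i])
        for i, blk in proof.items())

key = os.urandom(32)
data = os.urandom(10 * BLOCK)          # stand-in for the outsourced file
blocks, tags = tag_gen(key, data)      # owner keeps key + tags, uploads blocks
challenge = random.sample(range(len(blocks)), 3)
proof = gen_proof(blocks, challenge)   # computed by the cloud
print("integrity verified:", check_proof(key, tags, proof))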
RevDate: 2025-07-31
Simon's Algorithm in the NISQ Cloud.
Entropy (Basel, Switzerland), 27(7):.
Simon's algorithm was one of the first to demonstrate a genuine quantum advantage in solving a problem. The algorithm, however, assumes access to fault-tolerant qubits. In our work, we use Simon's algorithm to benchmark the error rates of devices currently available in the "quantum cloud". As a main result, we objectively compare the different physical platforms made available by IBM and IonQ. Our study highlights the importance of understanding the device architectures and topologies when transpiling quantum algorithms onto hardware. For instance, we demonstrate that two-qubit operations on spatially separated qubits on superconducting chips should be avoided.
Additional Links: PMID-40724375
@article {pmid40724375,
year = {2025},
author = {Robertson, R and Doucet, E and Spicer, E and Deffner, S},
title = {Simon's Algorithm in the NISQ Cloud.},
journal = {Entropy (Basel, Switzerland)},
volume = {27},
number = {7},
pages = {},
pmid = {40724375},
issn = {1099-4300},
support = {62422//John Templeton Foundation/ ; },
abstract = {Simon's algorithm was one of the first to demonstrate a genuine quantum advantage in solving a problem. The algorithm, however, assumes access to fault-tolerant qubits. In our work, we use Simon's algorithm to benchmark the error rates of devices currently available in the "quantum cloud". As a main result, we objectively compare the different physical platforms made available by IBM and IonQ. Our study highlights the importance of understanding the device architectures and topologies when transpiling quantum algorithms onto hardware. For instance, we demonstrate that two-qubit operations on spatially separated qubits on superconducting chips should be avoided.},
}
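As a concrete example of the kind of circuit being transpiled onto cloud hardware, below is a textbook Simon's-algorithm instance for the two-bit secret s = 11, written with Qiskit and run on a local Aer simulator (qiskit and qiskit-aer are assumed to be installed). Every measured bitstring z should satisfy z·s = 0 (mod 2), so only 00 and 11 appear; this is not the authors' benchmarking code.

# Simon's algorithm for the 2-bit secret s = 11 (illustrative sketch,
# not the authors' benchmarking code). Requires qiskit and qiskit-aer.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(4, 2)          # qubits 0-1: input x, qubits 2-3: output register
qc.h([0, 1])                       # superposition over all inputs

# Oracle for f(x) = f(x XOR 11): copy x into the output register,
# then XOR the output with s = 11 whenever x0 = 1.
qc.cx(0, 2)
qc.cx(1, 3)
qc.cx(0, 2)
qc.cx(0, 3)

qc.h([0, 1])                       # interfere; surviving outcomes satisfy z.s = 0 (mod 2)
qc.measure([0, 1], [0, 1])

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)                      # expect only '00' and '11'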
RevDate: 2025-08-01
Exploring the Influence of Human-Computer Interaction Experience on Tourist Loyalty in the Context of Smart Tourism: A Case Study of Suzhou Museum.
Behavioral sciences (Basel, Switzerland), 15(7):.
As digital technology evolves rapidly, smart tourism has become a significant trend in the modernization of the industry, relying on advanced tools like big data and cloud computing to improve travelers' experiences. Despite the growing use of human-computer interaction in museums, there remains a lack of in-depth academic investigation into its impact on visitors' behavioral intentions regarding museum engagement. This paper employs Cognitive Appraisal Theory, considers human-computer interaction experience as the independent variable, and introduces destination image and satisfaction as mediators to examine their impact on destination loyalty. Based on a survey of 537 participants, the research shows that human-computer interaction experience has a significant positive impact on destination image, satisfaction, and loyalty. Destination image and satisfaction play a partial and sequential mediating role in this relationship. This paper explores the influence mechanism of human-computer interaction experience on destination loyalty and proposes practical interactive solutions for museums, aiming to offer insights for smart tourism research and practice.
Additional Links: PMID-40723733
@article {pmid40723733,
year = {2025},
author = {Xue, K and Jin, X and Li, Y},
title = {Exploring the Influence of Human-Computer Interaction Experience on Tourist Loyalty in the Context of Smart Tourism: A Case Study of Suzhou Museum.},
journal = {Behavioral sciences (Basel, Switzerland)},
volume = {15},
number = {7},
pages = {},
pmid = {40723733},
issn = {2076-328X},
abstract = {As digital technology evolves rapidly, smart tourism has become a significant trend in the modernization of the industry, relying on advanced tools like big data and cloud computing to improve travelers' experiences. Despite the growing use of human-computer interaction in museums, there remains a lack of in-depth academic investigation into its impact on visitors' behavioral intentions regarding museum engagement. This paper employs Cognitive Appraisal Theory, considers human-computer interaction experience as the independent variable, and introduces destination image and satisfaction as mediators to examine their impact on destination loyalty. Based on a survey of 537 participants, the research shows that human-computer interaction experience has a significant positive impact on destination image, satisfaction, and loyalty. Destination image and satisfaction play a partial and sequential mediating role in this relationship. This paper explores the influence mechanism of human-computer interaction experience on destination loyalty and proposes practical interactive solutions for museums, aiming to offer insights for smart tourism research and practice.},
}
RevDate: 2025-07-31
A compact public key encryption with equality test for lattice in cloud computing.
Scientific reports, 15(1):27426 pii:10.1038/s41598-025-12018-2.
The rapid proliferation of cloud computing enables users to access computing resources and storage space over the internet, but it also presents challenges in terms of security and privacy. Ensuring the security and availability of data has become a focal point of current research when utilizing cloud computing for resource sharing, data storage, and querying. Public key encryption with equality test (PKEET) can perform an equality test on ciphertexts without decrypting them, even when those ciphertexts are encrypted under different public keys. This offers a practical approach to partitioning or searching encrypted information directly. In order to deal with the threat raised by the rapid development of quantum computing, researchers have proposed post-quantum cryptography to guarantee the security of cloud services. However, it is challenging to implement these techniques efficiently. In this paper, a compact PKEET scheme is proposed. The new scheme does not encrypt the plaintext's hash value directly but instead embeds it into the test trapdoor. We also demonstrated that our new construction is one-way secure under the quantum security model. With these results, our scheme withstands chosen-ciphertext attacks as long as the learning with errors (LWE) assumption holds. Furthermore, we evaluated the new scheme's performance and found that it requires only about half the storage space of previous schemes, and computing costs in the encryption and decryption stages are reduced by nearly half. In a nutshell, the new PKEET scheme is less costly, more compact, and applicable to cloud computing scenarios in a post-quantum environment.
Additional Links: PMID-40721628
@article {pmid40721628,
year = {2025},
author = {He, J and Ye, Q and Yang, Z and Wang, S and Wang, J},
title = {A compact public key encryption with equality test for lattice in cloud computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {27426},
doi = {10.1038/s41598-025-12018-2},
pmid = {40721628},
issn = {2045-2322},
support = {62202490//the National Natural Science Foundation of China/ ; 62102440//the National Natural Science Foundation of China/ ; },
abstract = {The rapid proliferation of cloud computing enables users to access computing resources and storage space over the internet, but it also presents challenges in terms of security and privacy. Ensuring the security and availability of data has become a focal point of current research when utilizing cloud computing for resource sharing, data storage, and querying. Public key encryption with equality test (PKEET) can perform an equality test on ciphertexts without decrypting them, even when those ciphertexts are encrypted under different public keys. This offers a practical approach to partitioning or searching encrypted information directly. In order to deal with the threat raised by the rapid development of quantum computing, researchers have proposed post-quantum cryptography to guarantee the security of cloud services. However, it is challenging to implement these techniques efficiently. In this paper, a compact PKEET scheme is proposed. The new scheme does not encrypt the plaintext's hash value directly but instead embeds it into the test trapdoor. We also demonstrated that our new construction is one-way secure under the quantum security model. With these results, our scheme withstands chosen-ciphertext attacks as long as the learning with errors (LWE) assumption holds. Furthermore, we evaluated the new scheme's performance and found that it requires only about half the storage space of previous schemes, and computing costs in the encryption and decryption stages are reduced by nearly half. In a nutshell, the new PKEET scheme is less costly, more compact, and applicable to cloud computing scenarios in a post-quantum environment.},
}
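At a very simplified level, the equality-test idea can be illustrated by attaching a hash-derived tag to an ordinary public-key ciphertext, so that two ciphertexts under different keys can be compared without decryption. The sketch below uses RSA-OAEP from the cryptography package and exposes the tag to everyone; the paper's lattice construction instead embeds the hash in a test trapdoor so that only a designated tester can compare, which this toy version does not capture.

# Toy public-key-encryption-with-equality-test sketch (NOT the paper's
# lattice scheme): each ciphertext = (RSA-OAEP encryption, SHA-256 tag).
# Anyone holding two ciphertexts can test plaintext equality via the tags;
# the real scheme gates this test behind a trapdoor.
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

def keygen():
    sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return sk, sk.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt(pk, message: bytes):
    return pk.encrypt(message, OAEP), hashlib.sha256(message).digest()

def equality_test(ct_a, ct_b):
    # Compare only the tags; no secret key is needed.
    return ct_a[1] == ct_b[1]

sk1, pk1 = keygen()
sk2, pk2 = keygen()
c1 = encrypt(pk1, b"same plaintext")
c2 = encrypt(pk2, b"same plaintext")   # encrypted under a different public key
c3 = encrypt(pk2, b"other plaintext")
print(equality_test(c1, c2), equality_test(c1, c3))  # True False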
RevDate: 2025-07-31
Reproducibility assessment of magnetic resonance spectroscopy of pregenual anterior cingulate cortex across sessions and vendors via the cloud computing platform CloudBrain-MRS.
NeuroImage, 318:121400 pii:S1053-8119(25)00403-3 [Epub ahead of print].
Proton magnetic resonance spectroscopy (¹H-MRS) has potential in clinical diagnosis and understanding the mechanism of illnesses. However, its application is limited by the lack of standardization in data acquisition and processing across time points and between different magnetic resonance imaging (MRI) system vendors. This study examines whether metabolite concentrations obtained from different sessions, scanner models, and vendors can be reliably reproduced and combined for diagnostic analysis-an important consideration for rare disease research. Participants underwent magnetic resonance scanning once on two separate days within one week (one session per day, each including two ¹H-MRS scans without subject movement) on each machine. Absolute metabolite concentrations were analyzed for within- and between-session reliability using the coefficient of variation (CV), intraclass correlation coefficient (ICC), and Bland-Altman (BA) plots, and for reproducibility across the machines using the Pearson correlation coefficient. For both within- and between-session comparisons, most CV values (whether computed over all first scans of a session, all second scans, or each full session) were below 20 %, and most ICCs ranged from moderate (0.4≤ICC<0.59) to excellent (ICC≥0.75), indicating high reliability. In most BA plots the line of equality lay within the 95 % confidence interval of the bias (mean difference), so differences across scanning sessions were negligible. The majority of Pearson correlation coefficients approached 1 with statistical significance (P < 0.001), showing high reproducibility across the three scanners. Additionally, intra-vendor reproducibility was greater than inter-vendor reproducibility.
Additional Links: PMID-40716656
@article {pmid40716656,
year = {2025},
author = {Chen, R and Lin, M and Chen, J and Lin, L and Wang, J and Li, X and Wang, J and Huang, X and Qian, L and Liu, S and Long, Y and Guo, D and Qu, X and Han, H},
title = {Reproducibility assessment of magnetic resonance spectroscopy of pregenual anterior cingulate cortex across sessions and vendors via the cloud computing platform CloudBrain-MRS.},
journal = {NeuroImage},
volume = {318},
number = {},
pages = {121400},
doi = {10.1016/j.neuroimage.2025.121400},
pmid = {40716656},
issn = {1095-9572},
abstract = {Proton magnetic resonance spectroscopy (¹H-MRS) has potential in clinical diagnosis and understanding the mechanism of illnesses. However, its application is limited by the lack of standardization in data acquisition and processing across time points and between different magnetic resonance imaging (MRI) system vendors. This study examines whether metabolite concentrations obtained from different sessions, scanner models, and vendors can be reliably reproduced and combined for diagnostic analysis-an important consideration for rare disease research. Participants underwent magnetic resonance scanning once on two separate days within one week (one session per day, each including two ¹H-MRS scans without subject movement) on each machine. Absolute metabolite concentrations were analyzed for within- and between-session reliability using the coefficient of variation (CV), intraclass correlation coefficient (ICC), and Bland-Altman (BA) plots, and for reproducibility across the machines using the Pearson correlation coefficient. For both within- and between-session comparisons, most CV values (whether computed over all first scans of a session, all second scans, or each full session) were below 20 %, and most ICCs ranged from moderate (0.4≤ICC<0.59) to excellent (ICC≥0.75), indicating high reliability. In most BA plots the line of equality lay within the 95 % confidence interval of the bias (mean difference), so differences across scanning sessions were negligible. The majority of Pearson correlation coefficients approached 1 with statistical significance (P < 0.001), showing high reproducibility across the three scanners. Additionally, intra-vendor reproducibility was greater than inter-vendor reproducibility.},
}
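For readers who want to reproduce the reliability statistics above on their own paired scan data, here is a short sketch computing a within-subject coefficient of variation, the Bland-Altman bias with 95% limits of agreement, and the Pearson correlation using NumPy and SciPy. The concentration arrays are invented, not the study's data, and the ICC (also reported by the authors) is omitted for brevity.

# Reliability metrics on hypothetical paired metabolite concentrations
# (not the study's data): within-subject CV, Bland-Altman bias/limits,
# and Pearson correlation between two sessions.
import numpy as np
from scipy import stats

scan1 = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.8])   # session 1 (a.u.)
scan2 = np.array([10.0, 10.1, 10.9, 10.7, 9.6, 11.0])  # session 2 (a.u.)

# Within-subject coefficient of variation (percent) from duplicate scans.
pair_mean = (scan1 + scan2) / 2
pair_sd = np.abs(scan1 - scan2) / np.sqrt(2)
cv_percent = 100 * np.mean(pair_sd / pair_mean)

# Bland-Altman: bias (mean difference) and 95% limits of agreement.
diff = scan1 - scan2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

r, p = stats.pearsonr(scan1, scan2)

print(f"CV = {cv_percent:.1f}%  bias = {bias:.2f}  LoA = +/-{loa:.2f}  r = {r:.2f} (p = {p:.3f})")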
RevDate: 2025-07-31
Machine learning based multi-stage intrusion detection system and feature selection ensemble security in cloud assisted vehicular ad hoc networks.
Scientific reports, 15(1):27058.
The development of intelligent transportation systems relies heavily on Cloud-assisted Vehicular Ad Hoc Networks (VANETs); hence, these networks must be protected. VANETs are particularly susceptible to a broad range of attacks because of their extreme dynamism and decentralization. Connected vehicles' safety and efficiency could be compromised if these security threats materialize, leading to disastrous road accidents. Solving these issues will require an advanced Intrusion Detection System (IDS) with real-time threat recognition and neutralization capabilities. A new method for improving VANET security, a multi-stage Lightweight Intrusion Detection System Using Random Forest Algorithms (MLIDS-RFA), focuses on feature selection and ensemble models based on machine learning (ML). A multi-step approach is employed by the proposed system, with each stage dedicated to accurately detecting specific types of attacks. Regarding feature selection, MLIDS-RFA uses machine-learning approaches to enhance the detection process. The outcome is reduced processing overhead and shorter response times. The detection abilities of ensemble models are enhanced by integrating the strengths of the Random Forest algorithm (RFA), which safeguards against complex threats. The practicality of the proposed technology is demonstrated by conducting thorough simulation analyses. This research demonstrates that the system can reduce false positives while maintaining high detection rates. This research ensures next-generation transport networks' secure and reliable functioning and paves the way for VANET protection upgrades. MLIDS-RFA has improved detection accuracy (96.2%) and computing efficiency (94.8%) for dynamic VANET management. It operates well with large networks (97.8%) and adapts well to network changes (93.8%). The comprehensive methodology ensures high detection performance (95.9%) and VANET security by balancing accuracy, efficiency, and scalability.
Additional Links: PMID-40715607
@article {pmid40715607,
year = {2025},
author = {Christy, C and Nirmala, A and Teena, AMO and Amali, AI},
title = {Machine learning based multi-stage intrusion detection system and feature selection ensemble security in cloud assisted vehicular ad hoc networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {27058},
pmid = {40715607},
issn = {2045-2322},
abstract = {The development of intelligent transportation systems relies heavily on Cloud-assisted Vehicular Ad Hoc Networks (VANETs); hence, these networks must be protected. VANETs are particularly susceptible to a broad range of attacks because of their extreme dynamism and decentralization. Connected vehicles' safety and efficiency could be compromised if these security threats materialize, leading to disastrous road accidents. Solving these issues will require an advanced Intrusion Detection System (IDS) with real-time threat recognition and neutralization capabilities. A new method for improving VANET security, a multi-stage Lightweight Intrusion Detection System Using Random Forest Algorithms (MLIDS-RFA), focuses on feature selection and ensemble models based on machine learning (ML). A multi-step approach is employed by the proposed system, with each stage dedicated to accurately detecting specific types of attacks. Regarding feature selection, MLIDS-RFA uses machine-learning approaches to enhance the detection process. The outcome is reduced processing overhead and shorter response times. The detection abilities of ensemble models are enhanced by integrating the strengths of the Random Forest algorithm (RFA), which safeguards against complex threats. The practicality of the proposed technology is demonstrated by conducting thorough simulation analyses. This research demonstrates that the system can reduce false positives while maintaining high detection rates. This research ensures next-generation transport networks' secure and reliable functioning and paves the way for VANET protection upgrades. MLIDS-RFA has improved detection accuracy (96.2%) and computing efficiency (94.8%) for dynamic VANET management. It operates well with large networks (97.8%) and adapts well to network changes (93.8%). The comprehensive methodology ensures high detection performance (95.9%) and VANET security by balancing accuracy, efficiency, and scalability.},
}
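The authors' code is not published, but the core recipe (machine-learning-driven feature selection feeding a Random Forest ensemble) can be sketched with scikit-learn on synthetic, imbalanced traffic-like data; the dataset, feature counts, and thresholds below are placeholders rather than the paper's configuration.

# Sketch of the feature-selection + Random Forest recipe on synthetic data
# (placeholder for real VANET traffic features; not the authors' MLIDS-RFA code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           weights=[0.9, 0.1], random_state=0)  # imbalanced "attack" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    # Stage 1: keep only features a preliminary forest ranks as important.
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0),
                               threshold="median")),
    # Stage 2: final Random Forest detector on the reduced feature set.
    ("rf", RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                  random_state=0)),
])
pipe.fit(X_tr, y_tr)
print(classification_report(y_te, pipe.predict(X_te), digits=3))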
RevDate: 2025-07-31
Enhancing reliability and security in cloud-based telesurgery systems leveraging swarm-evoked distributed federated learning framework to mitigate multiple attacks.
Scientific reports, 15(1):27226.
Advances in robotic surgery are being driven by the convergence of technologies such as artificial intelligence (AI), 5G/6G wireless communication, the Internet of Things (IoT), and edge computing, enhancing clinical precision, speed, and real-time decision-making. However, the practical deployment of telesurgery and tele-mentoring remains constrained due to increasing cybersecurity threats, posing significant challenges to patient safety and system reliability. To address these issues, a distributed framework based on federated learning is proposed, integrating Optimized Gated Transformer Networks (OGTN) with layered chaotic encryption schemes to mitigate multiple unknown cyberattacks while preserving data privacy and integrity. The framework was implemented using TensorFlow Federated Learning Libraries (FLL) and evaluated on the UNSW-NB15 dataset. Performance was assessed using metrics including precision, accuracy, F1-score, recall, and security strength, and compared with existing approaches. In addition, structured and unstructured security assessments, including evaluations based on National Institute of Standards and Technology (NIST) recommendations, were performed to validate robustness. The proposed framework demonstrated superior performance in terms of diagnostic accuracy and cybersecurity resilience relative to conventional models. These results suggest that the framework is a viable candidate for integration into teleoperated healthcare systems, offering improved security and operational efficiency in robotic surgery applications.
Additional Links: PMID-40715332
@article {pmid40715332,
year = {2025},
author = {Punitha, S and Preetha, KS},
title = {Enhancing reliability and security in cloud-based telesurgery systems leveraging swarm-evoked distributed federated learning framework to mitigate multiple attacks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {27226},
pmid = {40715332},
issn = {2045-2322},
abstract = {Advances in robotic surgery are being driven by the convergence of technologies such as artificial intelligence (AI), 5G/6G wireless communication, the Internet of Things (IoT), and edge computing, enhancing clinical precision, speed, and real-time decision-making. However, the practical deployment of telesurgery and tele-mentoring remains constrained due to increasing cybersecurity threats, posing significant challenges to patient safety and system reliability. To address these issues, a distributed framework based on federated learning is proposed, integrating Optimized Gated Transformer Networks (OGTN) with layered chaotic encryption schemes to mitigate multiple unknown cyberattacks while preserving data privacy and integrity. The framework was implemented using TensorFlow Federated Learning Libraries (FLL) and evaluated on the UNSW-NB15 dataset. Performance was assessed using metrics including precision, accuracy, F1-score, recall, and security strength, and compared with existing approaches. In addition, structured and unstructured security assessments, including evaluations based on National Institute of Standards and Technology (NIST) recommendations, were performed to validate robustness. The proposed framework demonstrated superior performance in terms of diagnostic accuracy and cybersecurity resilience relative to conventional models. These results suggest that the framework is a viable candidate for integration into teleoperated healthcare systems, offering improved security and operational efficiency in robotic surgery applications.},
}
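The framework couples TensorFlow Federated with transformer models and chaotic encryption; as a much smaller illustration of the federated-averaging step at its core, the NumPy sketch below aggregates client model weights in proportion to their local sample counts. It is conceptual only and reflects neither the OGTN architecture nor the encryption layers.

# Federated averaging (FedAvg) aggregation sketch in NumPy; conceptual only,
# not the paper's TensorFlow Federated / OGTN implementation.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists, by local sample count."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

rng = np.random.default_rng(0)
# Three hypothetical sites, each holding a 2-layer model (weights + bias) trained locally.
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [1200, 800, 400]                     # local dataset sizes

global_model = fed_avg(clients, sizes)
print([layer.shape for layer in global_model])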
RevDate: 2025-08-07
Implementing a training resource for large-scale genomic data analysis in the All of Us Researcher Workbench.
American journal of human genetics [Epub ahead of print].
A lack of representation in genomic research and limited access to computational training create barriers for many researchers seeking to analyze large-scale genetic datasets. The All of Us Research Program provides an unprecedented opportunity to address these gaps by offering genomic data from a broad range of participants, but its impact depends on equipping researchers with the necessary skills to use it effectively. The All of Us Biomedical Researcher (BR) Scholars Program at Baylor College of Medicine aims to break down these barriers by providing early-career researchers with hands-on training in computational genomics through the All of Us Evenings with Genetics Research Program. The year-long program begins with the faculty summit, an in-person computational boot camp that introduces scholars to foundational skills for using the All of Us dataset via a cloud-based research environment. The genomics tutorials focus on genome-wide association studies (GWASs), utilizing Jupyter Notebooks and the Hail computing framework to provide an accessible and scalable approach to large-scale data analysis. Scholars engage in hands-on exercises covering data preparation, quality control, association testing, and result interpretation. By the end of the summit, participants will have successfully conducted a GWAS, visualized key findings, and gained confidence in computational resource management. This initiative expands access to genomic research by equipping early-career researchers from a variety of backgrounds with the tools and knowledge to analyze All of Us data. By lowering barriers to entry and promoting the study of representative populations, the program fosters innovation in precision medicine and advances equity in genomic research.
Additional Links: PMID-40701146
@article {pmid40701146,
year = {2025},
author = {Baker, J and Stricker, E and Coleman, J and Ketkar, S and Tan, T and Butler, AM and Williams, L and Hammonds-Odie, L and Murray, D and Lee, B and Worley, KC and Atkinson, EG},
title = {Implementing a training resource for large-scale genomic data analysis in the All of Us Researcher Workbench.},
journal = {American journal of human genetics},
volume = {},
number = {},
pages = {},
pmid = {40701146},
issn = {1537-6605},
support = {OT2 OD026556/OD/NIH HHS/United States ; U2C OD023196/OD/NIH HHS/United States ; OT2 OD025315/OD/NIH HHS/United States ; OT2 OD026551/OD/NIH HHS/United States ; U24 OD023121/OD/NIH HHS/United States ; OT2 OD026552/OD/NIH HHS/United States ; OT2 OD026549/OD/NIH HHS/United States ; OT2 OD025337/OD/NIH HHS/United States ; OT2 OD026555/OD/NIH HHS/United States ; OT2 OD026550/OD/NIH HHS/United States ; OT2 OD026553/OD/NIH HHS/United States ; OT2 OD023205/OD/NIH HHS/United States ; OT2 OD025276/OD/NIH HHS/United States ; OT2 OD026557/OD/NIH HHS/United States ; OT2 OD026554/OD/NIH HHS/United States ; U24 OD023163/OD/NIH HHS/United States ; OT2 OD023206/OD/NIH HHS/United States ; U24 OD023176/OD/NIH HHS/United States ; OT2 OD026548/OD/NIH HHS/United States ; OT2 OD025277/OD/NIH HHS/United States ; OT2 OD031932/OD/NIH HHS/United States ; },
abstract = {A lack of representation in genomic research and limited access to computational training create barriers for many researchers seeking to analyze large-scale genetic datasets. The All of Us Research Program provides an unprecedented opportunity to address these gaps by offering genomic data from a broad range of participants, but its impact depends on equipping researchers with the necessary skills to use it effectively. The All of Us Biomedical Researcher (BR) Scholars Program at Baylor College of Medicine aims to break down these barriers by providing early-career researchers with hands-on training in computational genomics through the All of Us Evenings with Genetics Research Program. The year-long program begins with the faculty summit, an in-person computational boot camp that introduces scholars to foundational skills for using the All of Us dataset via a cloud-based research environment. The genomics tutorials focus on genome-wide association studies (GWASs), utilizing Jupyter Notebooks and the Hail computing framework to provide an accessible and scalable approach to large-scale data analysis. Scholars engage in hands-on exercises covering data preparation, quality control, association testing, and result interpretation. By the end of the summit, participants will have successfully conducted a GWAS, visualized key findings, and gained confidence in computational resource management. This initiative expands access to genomic research by equipping early-career researchers from a variety of backgrounds with the tools and knowledge to analyze All of Us data. By lowering barriers to entry and promoting the study of representative populations, the program fosters innovation in precision medicine and advances equity in genomic research.},
}
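The boot camp's GWAS tutorials run Hail inside Jupyter notebooks; a stripped-down version of that workflow looks roughly like the sketch below, which assumes a Hail MatrixTable with a numeric phenotype and principal-component covariates already annotated. Paths and field names are placeholders, not the actual All of Us Researcher Workbench resources.

# Skeleton of a Hail GWAS inside a Jupyter notebook; paths and field names
# are placeholders, not the actual All of Us Researcher Workbench resources.
import hail as hl

hl.init()
mt = hl.read_matrix_table("gs://my-bucket/cohort.mt")     # genotypes + annotations

# Basic variant QC, then keep common, well-called variants.
mt = hl.variant_qc(mt)
mt = mt.filter_rows((mt.variant_qc.AF[1] > 0.01) & (mt.variant_qc.call_rate > 0.95))

# Linear regression of a quantitative phenotype on genotype dosage,
# adjusting for an intercept and the first two principal components.
gwas = hl.linear_regression_rows(
    y=mt.pheno.height,
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0, mt.pca.pc1, mt.pca.pc2],
)
gwas.order_by(gwas.p_value).show(10)                       # top associations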
RevDate: 2025-07-24
Multidisciplinary Evaluation of an AI-Based Pneumothorax Detection Model: Clinical Comparison with Physicians in Edge and Cloud Environments.
Journal of multidisciplinary healthcare, 18:4099-4111.
BACKGROUND: Accurate and timely detection of pneumothorax on chest radiographs is critical in emergency and critical care settings. While subtle cases remain challenging for clinicians, artificial intelligence (AI) offers promise as a diagnostic aid. This retrospective diagnostic accuracy study evaluates a deep learning model developed using Google Cloud Vertex AI for pneumothorax detection on chest X-rays.
METHODS: A total of 152 anonymized frontal chest radiographs (76 pneumothorax, 76 normal), confirmed by computed tomography (CT), were collected from a single center between 2023 and 2024. The median patient age was 50 years (range: 18-95), with 67.1% male. The AI model was trained using AutoML Vision and evaluated in both cloud and edge deployment environments. Diagnostic accuracy metrics-including sensitivity, specificity, and F1 score-were compared with those of 15 physicians from four specialties (general practice, emergency medicine, thoracic surgery, radiology), stratified by experience level. Subgroup analysis focused on minimal pneumothorax cases. Confidence intervals were calculated using the Wilson method.
RESULTS: In cloud deployment, the AI model achieved an overall diagnostic accuracy of 0.95 (95% CI: 0.83, 0.99), sensitivity of 1.00 (95% CI: 0.83, 1.00), specificity of 0.89 (95% CI: 0.69, 0.97), and F1 score of 0.95 (95% CI: 0.86, 1.00). Comparable performance was observed in edge mode. The model outperformed junior clinicians and matched or exceeded senior physicians, particularly in detecting minimal pneumothoraces, where AI sensitivity reached 0.93 (95% CI: 0.79, 0.97) compared to 0.55 (95% CI: 0.38, 0.69) - 0.84 (95% CI: 0.69, 0.92) among human readers.
CONCLUSION: The Google Cloud Vertex AI model demonstrates high diagnostic performance for pneumothorax detection, including subtle cases. Its consistent accuracy across edge and cloud settings supports its integration as a second reader or triage tool in diverse clinical workflows, especially in acute care or resource-limited environments.
Additional Links: PMID-40693169
@article {pmid40693169,
year = {2025},
author = {Dal, I and Kaya, HB},
title = {Multidisciplinary Evaluation of an AI-Based Pneumothorax Detection Model: Clinical Comparison with Physicians in Edge and Cloud Environments.},
journal = {Journal of multidisciplinary healthcare},
volume = {18},
number = {},
pages = {4099-4111},
pmid = {40693169},
issn = {1178-2390},
abstract = {BACKGROUND: Accurate and timely detection of pneumothorax on chest radiographs is critical in emergency and critical care settings. While subtle cases remain challenging for clinicians, artificial intelligence (AI) offers promise as a diagnostic aid. This retrospective diagnostic accuracy study evaluates a deep learning model developed using Google Cloud Vertex AI for pneumothorax detection on chest X-rays.
METHODS: A total of 152 anonymized frontal chest radiographs (76 pneumothorax, 76 normal), confirmed by computed tomography (CT), were collected from a single center between 2023 and 2024. The median patient age was 50 years (range: 18-95), with 67.1% male. The AI model was trained using AutoML Vision and evaluated in both cloud and edge deployment environments. Diagnostic accuracy metrics-including sensitivity, specificity, and F1 score-were compared with those of 15 physicians from four specialties (general practice, emergency medicine, thoracic surgery, radiology), stratified by experience level. Subgroup analysis focused on minimal pneumothorax cases. Confidence intervals were calculated using the Wilson method.
RESULTS: In cloud deployment, the AI model achieved an overall diagnostic accuracy of 0.95 (95% CI: 0.83, 0.99), sensitivity of 1.00 (95% CI: 0.83, 1.00), specificity of 0.89 (95% CI: 0.69, 0.97), and F1 score of 0.95 (95% CI: 0.86, 1.00). Comparable performance was observed in edge mode. The model outperformed junior clinicians and matched or exceeded senior physicians, particularly in detecting minimal pneumothoraces, where AI sensitivity reached 0.93 (95% CI: 0.79, 0.97) compared to 0.55 (95% CI: 0.38, 0.69) - 0.84 (95% CI: 0.69, 0.92) among human readers.
CONCLUSION: The Google Cloud Vertex AI model demonstrates high diagnostic performance for pneumothorax detection, including subtle cases. Its consistent accuracy across edge and cloud settings supports its integration as a second reader or triage tool in diverse clinical workflows, especially in acute care or resource-limited environments.},
}
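Because the study reports Wilson confidence intervals for its proportions, here is a brief sketch of computing such intervals with statsmodels; the counts are hypothetical and are not the study's raw data.

# Wilson score intervals for sensitivity/specificity with statsmodels;
# counts are hypothetical, not the study's raw data.
from statsmodels.stats.proportion import proportion_confint

def wilson(successes, n):
    lo, hi = proportion_confint(successes, n, alpha=0.05, method="wilson")
    return successes / n, lo, hi

sens, s_lo, s_hi = wilson(20, 20)   # hypothetical: all 20 positives detected
spec, p_lo, p_hi = wilson(45, 50)   # hypothetical: 45 of 50 negatives ruled out
print(f"sensitivity {sens:.2f} (95% CI {s_lo:.2f}-{s_hi:.2f})")
print(f"specificity {spec:.2f} (95% CI {p_lo:.2f}-{p_hi:.2f})")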
RevDate: 2025-07-21
Pediatrics 4.0: the Transformative Impacts of the Latest Industrial Revolution on Pediatrics.
Health care analysis : HCA : journal of health philosophy and policy [Epub ahead of print].
Industry 4.0 represents the latest phase of industrial evolution, characterized by the seamless integration of cyber-physical systems, the Internet of Things, big data analytics, artificial intelligence, advanced robotics, and cloud computing, enabling smart, adaptive, and interconnected processes where physical, digital, and biological realms converge. In parallel, healthcare has progressed from the traditional, physician-centered model of Healthcare 1.0 by introducing medical devices and digitized records to Healthcare 4.0, which leverages Industry 4.0 technologies to create personalized, data-driven, and patient-centric systems. In this context, we hereby introduce Pediatrics 4.0 as a new paradigm that adapts these innovations to children's unique developmental, physiological, and ethical considerations and aims to improve diagnostic precision, treatment personalization, and continuous monitoring in pediatric populations. Key applications include AI-driven diagnostic and predictive analytics, IoT-enabled remote monitoring, big data-powered epidemiological insights, robotic assistance in surgery and rehabilitation, and 3D printing for patient-specific devices and pharmaceuticals. However, realizing Pediatrics 4.0 requires addressing significant challenges-data privacy and security, algorithmic bias, interoperability and standardization, equitable access, regulatory alignment, the ethical complexities of consent, and long-term technology exposure. Future research should focus on explainable AI, pediatric-specific device design, robust data governance frameworks, dynamic ethical and legal guidelines, interdisciplinary collaboration, and workforce training to ensure these transformative technologies translate into safer, more effective, and more equitable child healthcare.
Additional Links: PMID-40690134
@article {pmid40690134,
year = {2025},
author = {Onur, D and Özbakır, Ç},
title = {Pediatrics 4.0: the Transformative Impacts of the Latest Industrial Revolution on Pediatrics.},
journal = {Health care analysis : HCA : journal of health philosophy and policy},
volume = {},
number = {},
pages = {},
pmid = {40690134},
issn = {1573-3394},
abstract = {Industry 4.0 represents the latest phase of industrial evolution, characterized by the seamless integration of cyber-physical systems, the Internet of Things, big data analytics, artificial intelligence, advanced robotics, and cloud computing, enabling smart, adaptive, and interconnected processes where physical, digital, and biological realms converge. In parallel, healthcare has progressed from the traditional, physician-centered model of Healthcare 1.0 by introducing medical devices and digitized records to Healthcare 4.0, which leverages Industry 4.0 technologies to create personalized, data-driven, and patient-centric systems. In this context, we hereby introduce Pediatrics 4.0 as a new paradigm that adapts these innovations to children's unique developmental, physiological, and ethical considerations and aims to improve diagnostic precision, treatment personalization, and continuous monitoring in pediatric populations. Key applications include AI-driven diagnostic and predictive analytics, IoT-enabled remote monitoring, big data-powered epidemiological insights, robotic assistance in surgery and rehabilitation, and 3D printing for patient-specific devices and pharmaceuticals. However, realizing Pediatrics 4.0 requires addressing significant challenges-data privacy and security, algorithmic bias, interoperability and standardization, equitable access, regulatory alignment, the ethical complexities of consent, and long-term technology exposure. Future research should focus on explainable AI, pediatric-specific device design, robust data governance frameworks, dynamic ethical and legal guidelines, interdisciplinary collaboration, and workforce training to ensure these transformative technologies translate into safer, more effective, and more equitable child healthcare.},
}
RevDate: 2025-07-21
IoT-enabled medical advances shaping the future of orthopaedic surgery and rehabilitation.
Journal of clinical orthopaedics and trauma, 68:103113.
The Internet of Things (IoT) connects smart devices to enable automation and data exchange. IoT is rapidly transforming the healthcare industry. Understanding of the framework and challenges of IoT is essential for effective implementation. This review explores the advances in IoT technology in orthopaedic surgery and rehabilitation. A comprehensive literature search was conducted by the author using databases such as PubMed, Scopus, and Google Scholar. Relevant peer-reviewed articles published between 2010 and 2024 were preferred based on their focus on IoT applications in orthopaedic surgery, rehabilitation, and assistive technologies. Keywords including "Internet of Things," "orthopaedic rehabilitation," "wearable sensors," and "smart health monitoring" were used. Studies were analysed to identify current trends, clinical relevance, and future opportunities in IoT-driven orthopaedic care. The reviewed studies demonstrate that IoT technologies, such as wearable motion sensors, smart implants, real-time rehabilitation platforms, and AI-powered analytics, have significantly improved orthopaedic surgical outcomes and patient recovery. These systems enable continuous monitoring, early complication detection, and adaptive rehabilitation. However, challenges persist in data security, device interoperability, user compliance, and standardisation across platforms. IoT holds great promise in enhancing orthopaedic surgery and rehabilitation by enabling real-time monitoring and personalised care. Moving forward, clinical validation, user-friendly designs, and strong data security will be key to its successful integration in routine practice.
Additional Links: PMID-40687746
@article {pmid40687746,
year = {2025},
author = {Parashar, B and Malviya, R and Sridhar, SB and Wadhwa, T and Shareef, J},
title = {IoT-enabled medical advances shaping the future of orthopaedic surgery and rehabilitation.},
journal = {Journal of clinical orthopaedics and trauma},
volume = {68},
number = {},
pages = {103113},
pmid = {40687746},
issn = {0976-5662},
abstract = {The Internet of Things (IoT) connects smart devices to enable automation and data exchange. IoT is rapidly transforming the healthcare industry. Understanding of the framework and challenges of IoT is essential for effective implementation. This review explores the advances in IoT technology in orthopaedic surgery and rehabilitation. A comprehensive literature search was conducted by the author using databases such as PubMed, Scopus, and Google Scholar. Relevant peer-reviewed articles published between 2010 and 2024 were preferred based on their focus on IoT applications in orthopaedic surgery, rehabilitation, and assistive technologies. Keywords including "Internet of Things," "orthopaedic rehabilitation," "wearable sensors," and "smart health monitoring" were used. Studies were analysed to identify current trends, clinical relevance, and future opportunities in IoT-driven orthopaedic care. The reviewed studies demonstrate that IoT technologies, such as wearable motion sensors, smart implants, real-time rehabilitation platforms, and AI-powered analytics, have significantly improved orthopaedic surgical outcomes and patient recovery. These systems enable continuous monitoring, early complication detection, and adaptive rehabilitation. However, challenges persist in data security, device interoperability, user compliance, and standardisation across platforms. IoT holds great promise in enhancing orthopaedic surgery and rehabilitation by enabling real-time monitoring and personalised care. Moving forward, clinical validation, user-friendly designs, and strong data security will be key to its successful integration in routine practice.},
}
RevDate: 2025-07-21
Cloud Computing Facilitating Data Storage, Collaboration, and Analysis in Global Healthcare Clinical Trials.
Reviews on recent clinical trials pii:RRCT-EPUB-149483 [Epub ahead of print].
INTRODUCTION: Healthcare data management, especially in the context of clinical trials, has been completely transformed by cloud computing. It makes it easier to store data, collaborate in real time, and perform advanced analytics across international research networks by providing scalable, secure, and affordable solutions. This paper explores how cloud computing is revolutionizing clinical trials, tackling issues including data integration, accessibility, and regulatory compliance.
MATERIALS AND METHODS: Key factors assessed include cloud platform-enabled analytical tools, collaborative features, and data storage capacity. To ensure the safe management of sensitive healthcare data, adherence to laws like GDPR and HIPAA was emphasized.
RESULTS: Real-time updates and integration of multicenter trial data were made possible by cloud systems, which also showed notable gains in collaborative workflows and data sharing. Highly scalable storage options reduced infrastructure expenses while upholding security requirements. Rapid interpretation of complicated datasets was made possible by sophisticated analytical tools driven by machine learning and artificial intelligence, which expedited decision-making. Improved patient recruitment tactics and flexible trial designs are noteworthy examples.
CONCLUSION: Cloud computing has become essential for international clinical trials because it provides unmatched efficiency in data analysis, communication, and storage. It is a pillar of contemporary healthcare research due to its capacity to guarantee data security and regulatory compliance as well as its creative analytical capabilities. Subsequent research ought to concentrate on further refining cloud solutions to tackle new issues and utilizing their complete capabilities in clinical trial administration.
Additional Links: PMID-40685723
@article {pmid40685723,
year = {2025},
author = {Gomase, VS and Ghatule, AP and Sharma, R and Sardana, S and Dhamane, SP},
title = {Cloud Computing Facilitating Data Storage, Collaboration, and Analysis in Global Healthcare Clinical Trials.},
journal = {Reviews on recent clinical trials},
volume = {},
number = {},
pages = {},
doi = {10.2174/0115748871379249250701065507},
pmid = {40685723},
issn = {1876-1038},
abstract = {INTRODUCTION: Healthcare data management, especially in the context of clinical trials, has been completely transformed by cloud computing. It makes it easier to store data, collaborate in real time, and perform advanced analytics across international research networks by providing scalable, secure, and affordable solutions. This paper explores how cloud computing is revolutionizing clinical trials, tackling issues including data integration, accessibility, and regulatory compliance.
MATERIALS AND METHODS: Key factors assessed include cloud platform-enabled analytical tools, collaborative features, and data storage capacity. To ensure the safe management of sensitive healthcare data, adherence to laws like GDPR and HIPAA was emphasized.
RESULTS: Real-time updates and integration of multicenter trial data were made possible by cloud systems, which also showed notable gains in collaborative workflows and data sharing. Highly scalable storage options reduced infrastructure expenses while upholding security requirements. Rapid interpretation of complicated datasets was made possible by sophisticated analytical tools driven by machine learning and artificial intelligence, which expedited decision-making. Improved patient recruitment tactics and flexible trial designs are noteworthy examples.
CONCLUSION: Cloud computing has become essential for international clinical trials because it provides unmatched efficiency in data analysis, communication, and storage. It is a pillar of contemporary healthcare research due to its capacity to guarantee data security and regulatory compliance as well as its creative analytical capabilities. Subsequent research ought to concentrate on further refining cloud solutions to tackle new issues and utilizing their complete capabilities in clinical trial administration.},
}
RevDate: 2025-07-23
A smart grid data sharing scheme supporting policy update and traceability.
Scientific reports, 15(1):26343 pii:10.1038/s41598-025-10704-9.
To address the problems of centralized attribute authority, inefficient encryption and invalid access control strategy in the data sharing scheme based on attribute-based encryption technology, a smart grid data sharing scheme that supports policy update and traceability is proposed. The smart contract of the blockchain is used to generate the user's key, which does not require a centralized attribute authority. Combined with attribute-based encryption and symmetric encryption technology, the confidentiality of smart grid data is protected and flexible data access control is achieved. In addition, online/offline encryption and outsourced computing technologies complete most of the computing tasks in the offline stage or cloud server, which greatly reduces the computing burden of data owners and data access users. By introducing the access control policy update mechanism, the data owner can flexibly modify the key ciphertext stored in the cloud server. Finally, the analysis results show that this scheme can protect the privacy of smart grid data, verify the integrity of smart grid data, resist collusion attacks and track the identity of malicious users who leak private keys, and its efficiency is better than similar data sharing schemes.
Additional Links: PMID-40685425
@article {pmid40685425,
year = {2025},
author = {Yang, X and Yao, K and Li, S and Du, X and Wang, C},
title = {A smart grid data sharing scheme supporting policy update and traceability.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {26343},
doi = {10.1038/s41598-025-10704-9},
pmid = {40685425},
issn = {2045-2322},
support = {no. 2023CYZC-09//the Industrial Support Plan Project of Gansu Provincial Education Department/ ; no. 23YFGA0081//the Key Research and Development Program of Gansu Province/ ; nos. 62362059, 62172337//the National Natural Science Foundation of China/ ; },
abstract = {To address the problems of centralized attribute authority, inefficient encryption and invalid access control strategy in the data sharing scheme based on attribute-based encryption technology, a smart grid data sharing scheme that supports policy update and traceability is proposed. The smart contract of the blockchain is used to generate the user's key, which does not require a centralized attribute authority. Combined with attribute-based encryption and symmetric encryption technology, the confidentiality of smart grid data is protected and flexible data access control is achieved. In addition, online/offline encryption and outsourced computing technologies complete most of the computing tasks in the offline stage or cloud server, which greatly reduces the computing burden of data owners and data access users. By introducing the access control policy update mechanism, the data owner can flexibly modify the key ciphertext stored in the cloud server. Finally, the analysis results show that this scheme can protect the privacy of smart grid data, verify the integrity of smart grid data, resist collusion attacks and track the identity of malicious users who leak private keys, and its efficiency is better than similar data sharing schemes.},
}
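The policy-update mechanism described above, in which the owner modifies only the key ciphertext held by the cloud rather than the bulk data, can be illustrated with a simplified envelope-encryption sketch using Fernet from the cryptography package. Real ciphertext-policy attribute-based encryption with blockchain-issued keys is far richer; treat this only as the shape of the idea.

# Simplified envelope-encryption sketch of the "update only the key ciphertext"
# idea (not CP-ABE, no blockchain): bulk data stays encrypted under a data key,
# and a policy change only re-wraps that data key.
from cryptography.fernet import Fernet

data_key = Fernet.generate_key()
bulk_ciphertext = Fernet(data_key).encrypt(b"smart-grid telemetry batch")  # stored in cloud

old_policy_key = Fernet.generate_key()                    # stands in for the old access policy
wrapped_key = Fernet(old_policy_key).encrypt(data_key)    # key ciphertext held by the cloud

# Policy update: decrypt the data key with the old policy key and re-wrap it
# under the new one. The (large) bulk ciphertext is untouched.
new_policy_key = Fernet.generate_key()
wrapped_key = Fernet(new_policy_key).encrypt(Fernet(old_policy_key).decrypt(wrapped_key))

# An authorized user under the new policy can still recover the data.
recovered = Fernet(Fernet(new_policy_key).decrypt(wrapped_key)).decrypt(bulk_ciphertext)
print(recovered)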
RevDate: 2025-07-23
Optimization and benefit evaluation model of a cloud computing-based platform for power enterprises.
Scientific reports, 15(1):26366.
To address the challenges associated with the digital transformation of the power industry, this research develops an optimization and benefit evaluation model for cloud computing platforms tailored to power enterprises. It responds to the current lack of systematic optimization mechanisms and evaluation methods in existing cloud computing applications. The proposed model focuses on resource scheduling optimization, task load balancing, and improvements in computational efficiency. A multidimensional optimization framework is constructed, integrating key parameters such as path planning, condition coefficient computation, and the regulation of task and average loads. The model employs an improved lightweight genetic algorithm combined with an elastic resource allocation strategy to dynamically adapt to task changes across various operational scenarios. Experimental results indicate a 46% reduction in failure recovery time, a 78% improvement in high-load throughput capacity, and an average increase of nearly 60% in resource utilization. Compared with traditional on-premise architectures and static scheduling models, the proposed approach offers notable advantages in computational response time and fault tolerance. In addition, through containerized deployment and intelligent orchestration, it achieves a 43% reduction in monthly operating costs. A multi-level benefit evaluation system-spanning power generation, grid operations, and end-user services-is established, integrating historical data, expert weighting, and dynamic optimization algorithms to enable quantitative performance assessment and decision support. In contrast to existing studies that mainly address isolated functional modules such as equipment health monitoring or collaborative design, this research presents a novel paradigm characterized by architectural integration, methodological versatility, and industrial applicability. It thus addresses the empirical gap in multi-objective optimization for industrial-scale power systems. The theoretical contribution of this research lies in the establishment of a highly scalable and integrated framework for optimization and evaluation. Its practical significance is reflected in the notable improvements in operational efficiency and cost control in real-world applications. The proposed model provides a clear trajectory and quantitative foundation for promoting an efficient and intelligent cloud computing ecosystem in the power sector.
Additional Links: PMID-40685390
@article {pmid40685390,
year = {2025},
author = {Yin, X and Zhang, X and Pei, L and Hu, R and Ye, K and Cai, K},
title = {Optimization and benefit evaluation model of a cloud computing-based platform for power enterprises.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {26366},
pmid = {40685390},
issn = {2045-2322},
abstract = {To address the challenges associated with the digital transformation of the power industry, this research develops an optimization and benefit evaluation model for cloud computing platforms tailored to power enterprises. It responds to the current lack of systematic optimization mechanisms and evaluation methods in existing cloud computing applications. The proposed model focuses on resource scheduling optimization, task load balancing, and improvements in computational efficiency. A multidimensional optimization framework is constructed, integrating key parameters such as path planning, condition coefficient computation, and the regulation of task and average loads. The model employs an improved lightweight genetic algorithm combined with an elastic resource allocation strategy to dynamically adapt to task changes across various operational scenarios. Experimental results indicate a 46% reduction in failure recovery time, a 78% improvement in high-load throughput capacity, and an average increase of nearly 60% in resource utilization. Compared with traditional on-premise architectures and static scheduling models, the proposed approach offers notable advantages in computational response time and fault tolerance. In addition, through containerized deployment and intelligent orchestration, it achieves a 43% reduction in monthly operating costs. A multi-level benefit evaluation system-spanning power generation, grid operations, and end-user services-is established, integrating historical data, expert weighting, and dynamic optimization algorithms to enable quantitative performance assessment and decision support. In contrast to existing studies that mainly address isolated functional modules such as equipment health monitoring or collaborative design, this research presents a novel paradigm characterized by architectural integration, methodological versatility, and industrial applicability. It thus addresses the empirical gap in multi-objective optimization for industrial-scale power systems. The theoretical contribution of this research lies in the establishment of a highly scalable and integrated framework for optimization and evaluation. Its practical significance is reflected in the notable improvements in operational efficiency and cost control in real-world applications. The proposed model provides a clear trajectory and quantitative foundation for promoting an efficient and intelligent cloud computing ecosystem in the power sector.},
}
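As a small illustration of the lightweight genetic algorithm the abstract mentions for scheduling, the sketch below evolves a task-to-server assignment that minimizes load imbalance; the task loads, population size, and fitness function are invented for the example and are not the paper's model.

# Tiny genetic algorithm for task-to-server assignment minimizing load imbalance
# (illustrative only; parameters and fitness are not the paper's model).
import random

TASKS = [5, 3, 8, 2, 7, 4, 6, 1, 9, 2]   # hypothetical task loads
SERVERS = 3

def imbalance(assign):
    loads = [0.0] * SERVERS
    for load, srv in zip(TASKS, assign):
        loads[srv] += load
    return max(loads) - min(loads)        # lower is better

def evolve(pop_size=30, generations=100, mutation=0.1):
    pop = [[random.randrange(SERVERS) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=imbalance)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mutation:        # random reassignment mutation
                child[random.randrange(len(TASKS))] = random.randrange(SERVERS)
            children.append(child)
        pop = parents + children
    best = min(pop, key=imbalance)
    return best, imbalance(best)

print(evolve())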
RevDate: 2025-07-22
Construction and efficiency analysis of an embedded system-based verification platform for edge computing.
Scientific reports, 15(1):26114.
With the profound convergence and advancement of the Internet of Things, big data analytics, and artificial intelligence technologies, edge computing-a novel computing paradigm-has garnered significant attention. While edge computing simulation platforms offer convenience for simulations and tests, the disparity between them and real-world environments remains a notable concern. These platforms often struggle to precisely mimic the interactive behaviors and physical attributes of actual devices. Moreover, they face constraints in real-time responsiveness and scalability, thus limiting their ability to truly reflect practical application scenarios. To address these obstacles, our study introduces an innovative physical verification platform for edge computing, grounded in embedded devices. This platform seamlessly integrates KubeEdge and Serverless technological frameworks, facilitating dynamic resource allocation and efficient utilization. Additionally, by leveraging the robust infrastructure and cloud services provided by Alibaba Cloud, we have significantly bolstered the system's stability and scalability. To ensure a comprehensive assessment of our architecture's performance, we have established a realistic edge computing testing environment, utilizing embedded devices like Raspberry Pi. Through rigorous experimental validations involving offloading strategies, we have observed impressive outcomes. The refined offloading approach exhibits outstanding results in critical metrics, including latency, energy consumption, and load balancing. This not only underscores the soundness and reliability of our platform design but also illustrates its versatility for deployment in a broad spectrum of application contexts.
Additional Links: PMID-40681592
@article {pmid40681592,
year = {2025},
author = {Cao, J and Yu, Z and Zhu, B and Cao, M and Yang, J},
title = {Construction and efficiency analysis of an embedded system-based verification platform for edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {26114},
pmid = {40681592},
issn = {2045-2322},
support = {U22B2005//National Natural Science Foundation of China/ ; 62071481 and 61501471//National Natural Science Foundation of China/ ; },
abstract = {With the profound convergence and advancement of the Internet of Things, big data analytics, and artificial intelligence technologies, edge computing-a novel computing paradigm-has garnered significant attention. While edge computing simulation platforms offer convenience for simulations and tests, the disparity between them and real-world environments remains a notable concern. These platforms often struggle to precisely mimic the interactive behaviors and physical attributes of actual devices. Moreover, they face constraints in real-time responsiveness and scalability, thus limiting their ability to truly reflect practical application scenarios. To address these obstacles, our study introduces an innovative physical verification platform for edge computing, grounded in embedded devices. This platform seamlessly integrates KubeEdge and Serverless technological frameworks, facilitating dynamic resource allocation and efficient utilization. Additionally, by leveraging the robust infrastructure and cloud services provided by Alibaba Cloud, we have significantly bolstered the system's stability and scalability. To ensure a comprehensive assessment of our architecture's performance, we have established a realistic edge computing testing environment, utilizing embedded devices like Raspberry Pi. Through rigorous experimental validations involving offloading strategies, we have observed impressive outcomes. The refined offloading approach exhibits outstanding results in critical metrics, including latency, energy consumption, and load balancing. This not only underscores the soundness and reliability of our platform design but also illustrates its versatility for deployment in a broad spectrum of application contexts.},
}
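The entry above evaluates offloading strategies on embedded edge devices by latency, energy, and load. The sketch below shows a toy cost model for the offload-or-not decision; every parameter (CPU frequency, power draw, uplink rate, weights) is a hypothetical illustration, not a measurement from the paper or a property of KubeEdge.

# Toy offloading cost model: run a task locally on the embedded device or
# offload it to an edge/cloud node, whichever has the lower weighted
# latency-plus-energy cost. All numbers are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required
    data_bits: float   # input size to transmit if offloaded

def local_cost(task, cpu_hz=1.5e9, active_power_w=3.0, w_latency=0.5):
    latency = task.cycles / cpu_hz
    energy = active_power_w * latency
    return w_latency * latency + (1 - w_latency) * energy

def offload_cost(task, uplink_bps=20e6, tx_power_w=1.0,
                 edge_hz=8e9, w_latency=0.5):
    tx_time = task.data_bits / uplink_bps
    latency = tx_time + task.cycles / edge_hz
    energy = tx_power_w * tx_time          # device only pays for transmission
    return w_latency * latency + (1 - w_latency) * energy

def decide(task):
    return "offload" if offload_cost(task) < local_cost(task) else "local"

if __name__ == "__main__":
    for t in [Task(2e8, 1e5), Task(5e9, 5e7)]:
        print(t, "->", decide(t))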
RevDate: 2025-07-20
Achieving cloud resource optimization with trust-based access control: A novel ML strategy for enhanced performance.
MethodsX, 15:103461.
The adoption of cloud computing continues to rise, increasing the demand for more intelligent, rapid, and secure resource management. This paper presents AdaPCA, a novel method that integrates the adaptive capabilities of AdaBoost with the dimensionality-reduction efficacy of PCA. The objective is to enhance trust-based access control and resource allocation decisions while maintaining a minimal computational burden. High-dimensional trust data frequently hampers such systems; AdaPCA mitigates this issue by identifying essential features while enhancing learning efficiency. To evaluate its performance, we conducted a series of simulations comparing it with established methods such as Decision Trees, Random Forests, and Gradient Boosting, assessing execution time, resource use, latency, and trust accuracy. Results show that AdaPCA achieved a trust score prediction accuracy of 99.8%, a resource utilization efficiency of 95%, and reduced allocation time to 140 ms, outperforming the benchmark models across all evaluated parameters. AdaPCA showed superior performance overall: expedited decision-making, optimized resource utilization, reduced latency, and the highest accuracy in trust evaluation among the evaluated models. AdaPCA is not merely another model; it represents a significant advance towards more intelligent and safer cloud systems designed for the future. • Introduces AdaPCA, a novel hybrid approach that integrates AdaBoost with PCA to optimize cloud resource allocation and improve trust-based access control. • Outperforms conventional techniques such as Decision Tree, Random Forest, and Gradient Boosting by attaining superior trust accuracy, expedited execution, enhanced resource utilization, and reduced latency. • Presents an intelligent, scalable, and adaptable architecture for secure and efficient management of cloud resources, substantiated by extensive simulation experiments.
Additional Links: PMID-40678452
@article {pmid40678452,
year = {2025},
author = {C, BS and St, B and S, S},
title = {Achieving cloud resource optimization with trust-based access control: A novel ML strategy for enhanced performance.},
journal = {MethodsX},
volume = {15},
number = {},
pages = {103461},
pmid = {40678452},
issn = {2215-0161},
abstract = {The adoption of cloud computing continues to rise, increasing the demand for more intelligent, rapid, and secure resource management. This paper presents AdaPCA, a novel method that integrates the adaptive capabilities of AdaBoost with the dimensionality-reduction efficacy of PCA. The objective is to enhance trust-based access control and resource allocation decisions while maintaining a minimal computational burden. High-dimensional trust data frequently hampers such systems; AdaPCA mitigates this issue by identifying essential features while enhancing learning efficiency. To evaluate its performance, we conducted a series of simulations comparing it with established methods such as Decision Trees, Random Forests, and Gradient Boosting, assessing execution time, resource use, latency, and trust accuracy. Results show that AdaPCA achieved a trust score prediction accuracy of 99.8%, a resource utilization efficiency of 95%, and reduced allocation time to 140 ms, outperforming the benchmark models across all evaluated parameters. AdaPCA showed superior performance overall: expedited decision-making, optimized resource utilization, reduced latency, and the highest accuracy in trust evaluation among the evaluated models. AdaPCA is not merely another model; it represents a significant advance towards more intelligent and safer cloud systems designed for the future. • Introduces AdaPCA, a novel hybrid approach that integrates AdaBoost with PCA to optimize cloud resource allocation and improve trust-based access control. • Outperforms conventional techniques such as Decision Tree, Random Forest, and Gradient Boosting by attaining superior trust accuracy, expedited execution, enhanced resource utilization, and reduced latency. • Presents an intelligent, scalable, and adaptable architecture for secure and efficient management of cloud resources, substantiated by extensive simulation experiments.},
}
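The AdaPCA entry combines PCA-based dimensionality reduction with AdaBoost classification. The following scikit-learn sketch reproduces only that generic PCA-plus-AdaBoost pattern on synthetic data; the feature counts, component number, and labels are assumptions, not the paper's configuration or dataset.

# Sketch of a PCA + AdaBoost pipeline for trusted/untrusted access decisions.
# Synthetic data stands in for real high-dimensional trust telemetry.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical trust feature vectors (behaviour, history, context signals)
# with a binary trusted/untrusted label.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=12,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

model = make_pipeline(
    PCA(n_components=10),                           # dimensionality reduction
    AdaBoostClassifier(n_estimators=100, random_state=0),
)
model.fit(X_train, y_train)
print("trust-decision accuracy:", accuracy_score(y_test, model.predict(X_test)))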
RevDate: 2025-07-18
CmpDate: 2025-07-18
[Spatiotemporal Evolution of Ecological Environment Quality and Ecological Management Zoning in Inner Mongolia Based on RSEI].
Huan jing ke xue= Huanjing kexue, 46(7):4499-4509.
Inner Mongolia serves as a crucial ecological security barrier for northern China. Examining the spatial and temporal evolution of ecological environment quality, along with the zoning for ecological management, is crucial for enhancing the management and development of ecological environments. Based on the Google Earth Engine cloud platform, four indicators-heat, greenness, dryness, and wetness-were extracted from MODIS remote sensing image data spanning 2000 to 2023. The remote sensing ecological index (RSEI) model was constructed using principal component analysis. By combining the coefficient of variation (CV), Sen + Mann-Kendall, and Hurst indices, the spatial and temporal variations and future trends of ecological environment quality of Inner Mongolia were analyzed. The influencing mechanisms were explored using a geographical detector, and the quadrant method was employed for ecological management zoning based on the intensity of human activities and the quality of the ecological environment. The results indicated that: ① The ecological environment quality of Inner Mongolia from 2000 to 2023 was mainly characterized as poor to average, with a spatial trend of decreasing quality from east to west. From 2000 to 2005, Inner Mongolia experienced environmental degradation, followed by a gradual improvement in ecological environment quality. ② Inner Mongolia exhibited the largest area of non-significantly improved and non-significantly degraded regions, and the overall environmental quality was relatively stable. However, ecosystems in the western region were more fragile and prone to fluctuations. In the projected future trend, the area of sustained degradation exceeded that of sustained improvement, and the western region is expected to be the main area of improvement. ③ The results of single-factor detection showed that the influences on RSEI values were, in descending order, precipitation, soil type, land use type, air temperature, vegetation type, elevation, population density, GDP, and nighttime lighting; the interactions among driving factors on RSEI changes showed bivariate or nonlinear enhancement, which suggests that the interactions between driving factors could improve the explanatory power for spatial variations in ecological environment quality. ④ Based on the coupling of human activity intensity and ecological environment quality, the 12 leagues and cities of Inner Mongolia were divided into ecological development coordination zones, ecological development reserves, and ecological development risk zones. This study can provide a scientific basis for ecological environmental protection and sustainable development in Inner Mongolia.
Additional Links: PMID-40677066
@article {pmid40677066,
year = {2025},
author = {Zhao, N and Wang, B and Wang, ZH and Zhang, QL},
title = {[Spatiotemporal Evolution of Ecological Environment Quality and Ecological Management Zoning in Inner Mongolia Based on RSEI].},
journal = {Huan jing ke xue= Huanjing kexue},
volume = {46},
number = {7},
pages = {4499-4509},
doi = {10.13227/j.hjkx.202405303},
pmid = {40677066},
issn = {0250-3301},
mesh = {China ; *Ecosystem ; *Remote Sensing Technology ; *Environmental Monitoring/methods ; *Conservation of Natural Resources ; Spatio-Temporal Analysis ; Ecology ; },
abstract = {Inner Mongolia serves as a crucial ecological security barrier for northern China. Examining the spatial and temporal evolution of ecological environment quality, along with the zoning for ecological management, is crucial for enhancing the management and development of ecological environments. Based on the Google Earth Engine cloud platform, four indicators-heat, greenness, dryness, and wetness-were extracted from MODIS remote sensing image data spanning 2000 to 2023. The remote sensing ecological index (RSEI) model was constructed using principal component analysis. By combining the coefficient of variation (CV), Sen + Mann-Kendall, and Hurst indices, the spatial and temporal variations and future trends of ecological environment quality of Inner Mongolia were analyzed. The influencing mechanisms were explored using a geographical detector, and the quadrant method was employed for ecological management zoning based on the intensity of human activities and the quality of the ecological environment. The results indicated that: ① The ecological environment quality of Inner Mongolia from 2000 to 2023 was mainly characterized as poor to average, with a spatial trend of decreasing quality from east to west. From 2000 to 2005, Inner Mongolia experienced environmental degradation, followed by a gradual improvement in ecological environment quality. ② Inner Mongolia exhibited the largest area of non-significantly improved and non-significantly degraded regions, and the overall environmental quality was relatively stable. However, ecosystems in the western region were more fragile and prone to fluctuations. In the projected future trend, the area of sustained degradation exceeded that of sustained improvement, and the western region is expected to be the main area of improvement. ③ The results of single-factor detection showed that the influences on RSEI values were, in descending order, precipitation, soil type, land use type, air temperature, vegetation type, elevation, population density, GDP, and nighttime lighting; the interactions among driving factors on RSEI changes showed bivariate or nonlinear enhancement, which suggests that the interactions between driving factors could improve the explanatory power for spatial variations in ecological environment quality. ④ Based on the coupling of human activity intensity and ecological environment quality, the 12 leagues and cities of Inner Mongolia were divided into ecological development coordination zones, ecological development reserves, and ecological development risk zones. This study can provide a scientific basis for ecological environmental protection and sustainable development in Inner Mongolia.},
}
MeSH Terms:
China
*Ecosystem
*Remote Sensing Technology
*Environmental Monitoring/methods
*Conservation of Natural Resources
Spatio-Temporal Analysis
Ecology
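The RSEI in the entry above is built with principal component analysis over normalized greenness, wetness, dryness, and heat indicators. The sketch below shows only that construction step on random per-pixel arrays; it does not reproduce the paper's Google Earth Engine/MODIS workflow, and the indicator names in comments are the usual stand-ins rather than the paper's exact bands.

# RSEI construction sketch: first principal component of four normalized
# ecological indicators, rescaled to [0, 1]. Random values replace the
# MODIS-derived per-pixel indicators used in the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import minmax_scale

rng = np.random.default_rng(0)
n_pixels = 10_000
greenness = rng.random(n_pixels)   # e.g., NDVI
wetness = rng.random(n_pixels)     # e.g., tasseled-cap wetness
dryness = rng.random(n_pixels)     # e.g., a bare-soil/built-up index
heat = rng.random(n_pixels)        # e.g., land surface temperature

# Normalize each indicator and stack as (pixels, indicators).
X = minmax_scale(np.column_stack([greenness, wetness, dryness, heat]))

pc1 = PCA(n_components=1).fit_transform(X).ravel()
# Higher RSEI is conventionally read as better ecological quality; the sign of
# PC1 may need flipping so that greenness and wetness load positively.
rsei = minmax_scale(pc1)
print("RSEI range:", rsei.min(), "-", rsei.max())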
RevDate: 2025-07-18
[Quantitative Analysis of Wetland Evolution Characteristics and Driving Factors in Ruoergai Plateau Based on Landsat Time Series Remote Sensing Images].
Huan jing ke xue= Huanjing kexue, 46(7):4461-4472.
The Ruoergai Wetland, China's largest high-altitude marsh, plays a crucial role in the carbon cycle and climate regulation. However, the Ruoergai Wetland has experienced significant damage as a result of human activity and global warming. Based on the Google Earth Engine (GEE) cloud platform and time-series Landsat images, a random forest algorithm was applied to produce a detailed classification map of the Ruoergai wetlands from 1990 to 2020. Through the transfer matrix and landscape pattern indices, the spatiotemporal change patterns and trends of the wetlands were analyzed. Then, the influencing factors of wetland distribution were quantitatively analyzed using the geographic detector method. The results showed that: ① The total wetland area averaged 3,910 km² from 1990 to 2020, dominated by marsh meadows and wet meadows, which together accounted for 83.13% of the total wetland area. From 1990 to 2010, the wetland area of Ruoergai showed a decreasing trend, and from 2010 to 2020, the wetland area increased slightly. ② From 1990 to 2020, the decrease in wetland area was mainly reflected in the degradation of wet meadows into alpine grassland. There were also changes among different wetland types, mainly reflected in the conversion between marsh meadows and wet meadows. ③ From 1990 to 2010, the wetland landscape tended to become fragmented and complicated, and the aggregation degree decreased. From 2010 to 2020, wetland fragmentation decreased, and the wetland landscape became more concentrated. ④ Slope, temperature, and aspect were the main natural factors affecting wetland distribution. At the same time, population density has gradually become a significant social and economic factor affecting wetland distribution. The results can provide scientific support for the wetland protection planning of Ruoergai and serve the ecological preservation and high-quality development of the area.
Additional Links: PMID-40677063
@article {pmid40677063,
year = {2025},
author = {Yang, M and Liu, EQ and Yang, Y and Gao, B and Guan, L},
title = {[Quantitative Analysis of Wetland Evolution Characteristics and Driving Factors in Ruoergai Plateau Based on Landsat Time Series Remote Sensing Images].},
journal = {Huan jing ke xue= Huanjing kexue},
volume = {46},
number = {7},
pages = {4461-4472},
doi = {10.13227/j.hjkx.202406190},
pmid = {40677063},
issn = {0250-3301},
abstract = {The Ruoergai Wetland, China's largest high-altitude marsh, plays a crucial role in the carbon cycle and climate regulation. However, the Ruoergai Wetland has experienced significant damage as a result of human activity and global warming. Based on the Google Earth Engine (GEE) cloud platform and time-series Landsat images, a random forest algorithm was applied to produce a detailed classification map of the Ruoergai wetlands from 1990 to 2020. Through the transfer matrix and landscape pattern indices, the spatiotemporal change patterns and trends of the wetlands were analyzed. Then, the influencing factors of wetland distribution were quantitatively analyzed using the geographic detector method. The results showed that: ① The total wetland area averaged 3,910 km² from 1990 to 2020, dominated by marsh meadows and wet meadows, which together accounted for 83.13% of the total wetland area. From 1990 to 2010, the wetland area of Ruoergai showed a decreasing trend, and from 2010 to 2020, the wetland area increased slightly. ② From 1990 to 2020, the decrease in wetland area was mainly reflected in the degradation of wet meadows into alpine grassland. There were also changes among different wetland types, mainly reflected in the conversion between marsh meadows and wet meadows. ③ From 1990 to 2010, the wetland landscape tended to become fragmented and complicated, and the aggregation degree decreased. From 2010 to 2020, wetland fragmentation decreased, and the wetland landscape became more concentrated. ④ Slope, temperature, and aspect were the main natural factors affecting wetland distribution. At the same time, population density has gradually become a significant social and economic factor affecting wetland distribution. The results can provide scientific support for the wetland protection planning of Ruoergai and serve the ecological preservation and high-quality development of the area.},
}
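The wetland-mapping entry above relies on a random forest classifier trained on Landsat-derived features. The following is a minimal scikit-learn sketch of that classification step only, using synthetic features and hypothetical class labels in place of the GEE training samples; it is not the study's pipeline.

# Random-forest land-cover classification sketch with synthetic "spectral"
# features. Class indices are hypothetical: 0 = marsh meadow, 1 = wet meadow,
# 2 = alpine grassland, 3 = water, 4 = other land cover.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, n_features=8, n_informative=5,
                           n_classes=5, n_clusters_per_class=1, random_state=1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())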
RevDate: 2025-07-18
CmpDate: 2025-07-16
Colorectal cancer unmasked: A synergistic AI framework for Hyper-granular image dissection, precision segmentation, and automated diagnosis.
BMC medical imaging, 25(1):283.
Colorectal cancer (CRC) is the second most common cause of cancer-related mortality worldwide, underscoring the necessity for computer-aided diagnosis (CADx) systems that are interpretable, accurate, and robust. This study presents a practical CADx system that combines Vision Transformers (ViTs) and DeepLabV3+ to accurately identify and segment colorectal lesions in colonoscopy images. The system addresses class imbalance and real-world complexity with PCA-based dimensionality reduction, data augmentation, and strategic preprocessing using the recently curated CKHK-22 dataset, which comprises more than 14,000 annotated images drawn from CVC-ClinicDB, Kvasir-2, and Hyper-Kvasir. ViT, ResNet-50, DenseNet-201, and VGG-16 were used to quantify classification performance. ViT achieved best-in-class accuracy (97%), F1-score (0.95), and AUC (92%) on test data. DeepLabV3+ achieved state-of-the-art segmentation for localisation tasks, with a Dice coefficient of 0.88 and an Intersection over Union (IoU) of 0.71, ensuring sharp delineation of malignant areas. The CADx system supports real-time inference and is served through Google Cloud, enabling scalable clinical deployment. Segmentation effectiveness is evidenced by comparison of visual overlays with expert manually delineated masks, and classification quality is quantified with precision, recall, F1-score, and AUC. The hybrid strategy not only outperforms traditional CNN approaches but also addresses important clinical needs such as early detection, handling of highly imbalanced classes, and clear explanation. The proposed ViT-DeepLabV3+ system establishes a basis for advanced AI support in colorectal diagnosis by exploiting self-attention and multi-scale context learning. The system offers a high-capacity, reproducible computerised solution for colorectal cancer screening and monitoring that is well suited to resource-constrained settings and attractive for clinical deployment.
Additional Links: PMID-40665235
@article {pmid40665235,
year = {2025},
author = {Narasimha Raju, AS and Venkatesh, K and Rajababu, M and Kumar Gatla, R and Jakeer Hussain, S and Satya Mohan Chowdary, G and Ganga Bhavani, T and Kareemullah, M and Algburi, S and Majdi, A and Abdulhadi, AM and Ahmad Khan, W},
title = {Colorectal cancer unmasked: A synergistic AI framework for Hyper-granular image dissection, precision segmentation, and automated diagnosis.},
journal = {BMC medical imaging},
volume = {25},
number = {1},
pages = {283},
pmid = {40665235},
issn = {1471-2342},
mesh = {Humans ; *Colorectal Neoplasms/diagnostic imaging ; *Diagnosis, Computer-Assisted/methods ; Colonoscopy ; *Image Interpretation, Computer-Assisted/methods ; },
abstract = {Colorectal cancer (CRC) is the second most common cause of cancer-related mortality worldwide, underscoring the necessity for computer-aided diagnosis (CADx) systems that are interpretable, accurate, and robust. This study presents a practical CADx system that combines Vision Transformers (ViTs) and DeepLabV3+ to accurately identify and segment colorectal lesions in colonoscopy images. The system addresses class imbalance and real-world complexity with PCA-based dimensionality reduction, data augmentation, and strategic preprocessing using the recently curated CKHK-22 dataset, which comprises more than 14,000 annotated images drawn from CVC-ClinicDB, Kvasir-2, and Hyper-Kvasir. ViT, ResNet-50, DenseNet-201, and VGG-16 were used to quantify classification performance. ViT achieved best-in-class accuracy (97%), F1-score (0.95), and AUC (92%) on test data. DeepLabV3+ achieved state-of-the-art segmentation for localisation tasks, with a Dice coefficient of 0.88 and an Intersection over Union (IoU) of 0.71, ensuring sharp delineation of malignant areas. The CADx system supports real-time inference and is served through Google Cloud, enabling scalable clinical deployment. Segmentation effectiveness is evidenced by comparison of visual overlays with expert manually delineated masks, and classification quality is quantified with precision, recall, F1-score, and AUC. The hybrid strategy not only outperforms traditional CNN approaches but also addresses important clinical needs such as early detection, handling of highly imbalanced classes, and clear explanation. The proposed ViT-DeepLabV3+ system establishes a basis for advanced AI support in colorectal diagnosis by exploiting self-attention and multi-scale context learning. The system offers a high-capacity, reproducible computerised solution for colorectal cancer screening and monitoring that is well suited to resource-constrained settings and attractive for clinical deployment.},
}
MeSH Terms:
Humans
*Colorectal Neoplasms/diagnostic imaging
*Diagnosis, Computer-Assisted/methods
Colonoscopy
*Image Interpretation, Computer-Assisted/methods
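The segmentation results above are reported as Dice coefficient and IoU. The helper below shows how both metrics are computed from binary masks; it is a generic sketch rather than the paper's evaluation code, and the random masks are placeholders.

# Dice coefficient and IoU for binary segmentation masks of equal shape.
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Return (Dice, IoU) for two boolean masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((256, 256)) > 0.5      # stand-in predicted lesion mask
    target = rng.random((256, 256)) > 0.5    # stand-in ground-truth mask
    print(dice_and_iou(pred, target))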
RevDate: 2025-07-18
CmpDate: 2025-07-16
A hybrid fog-edge computing architecture for real-time health monitoring in IoMT systems with optimized latency and threat resilience.
Scientific reports, 15(1):25655.
The advancement of the Internet of Medical Things (IoMT) has transformed healthcare delivery by enabling real-time health monitoring. However, it introduces critical challenges related to latency and, more importantly, the secure handling of sensitive patient data. Traditional cloud-based architectures often struggle with latency and data protection, making them inefficient for real-time healthcare scenarios. To address these challenges, we propose a Hybrid Fog-Edge Computing Architecture tailored for effective real-time health monitoring in IoMT systems. Fog computing enables processing of time-critical data closer to the data source, reducing response time and relieving cloud system overload. Simultaneously, edge computing nodes handle data preprocessing and transmit only valuable information-defined as abnormal or high-risk health signals such as irregular heart rate or oxygen levels-using rule-based filtering, statistical thresholds, and lightweight machine learning models like Decision Trees and One-Class SVMs. This selective transmission optimizes bandwidth without compromising response quality. The architecture integrates robust security measures, including end-to-end encryption and distributed authentication, to counter rising data breaches and unauthorized access in IoMT networks. Real-life case scenarios and simulations are used to validate the model, evaluating latency reduction, data consolidation, and scalability. Results demonstrate that the proposed architecture significantly outperforms cloud-only models, with a 70% latency reduction, 30% improvement in energy efficiency, and 60% bandwidth savings. Additionally, the time required for threat detection was halved, ensuring faster response to security incidents. This framework offers a flexible, secure, and efficient solution ideal for time-sensitive healthcare applications such as remote patient monitoring and emergency response systems.
Additional Links: PMID-40665167
@article {pmid40665167,
year = {2025},
author = {Islam, U and Alatawi, MN and Alqazzaz, A and Alamro, S and Shah, B and Moreira, F},
title = {A hybrid fog-edge computing architecture for real-time health monitoring in IoMT systems with optimized latency and threat resilience.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {25655},
pmid = {40665167},
issn = {2045-2322},
mesh = {Humans ; *Internet of Things ; *Cloud Computing ; *Computer Security ; Monitoring, Physiologic/methods ; },
abstract = {The advancement of the Internet of Medical Things (IoMT) has transformed healthcare delivery by enabling real-time health monitoring. However, it introduces critical challenges related to latency and, more importantly, the secure handling of sensitive patient data. Traditional cloud-based architectures often struggle with latency and data protection, making them inefficient for real-time healthcare scenarios. To address these challenges, we propose a Hybrid Fog-Edge Computing Architecture tailored for effective real-time health monitoring in IoMT systems. Fog computing enables processing of time-critical data closer to the data source, reducing response time and relieving cloud system overload. Simultaneously, edge computing nodes handle data preprocessing and transmit only valuable information-defined as abnormal or high-risk health signals such as irregular heart rate or oxygen levels-using rule-based filtering, statistical thresholds, and lightweight machine learning models like Decision Trees and One-Class SVMs. This selective transmission optimizes bandwidth without compromising response quality. The architecture integrates robust security measures, including end-to-end encryption and distributed authentication, to counter rising data breaches and unauthorized access in IoMT networks. Real-life case scenarios and simulations are used to validate the model, evaluating latency reduction, data consolidation, and scalability. Results demonstrate that the proposed architecture significantly outperforms cloud-only models, with a 70% latency reduction, 30% improvement in energy efficiency, and 60% bandwidth savings. Additionally, the time required for threat detection was halved, ensuring faster response to security incidents. This framework offers a flexible, secure, and efficient solution ideal for time-sensitive healthcare applications such as remote patient monitoring and emergency response systems.},
}
MeSH Terms:
Humans
*Internet of Things
*Cloud Computing
*Computer Security
Monitoring, Physiologic/methods
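The fog-edge entry above describes edge-side selective transmission using rule-based filtering, thresholds, and lightweight models such as One-Class SVMs. The sketch below illustrates that pattern only; the vital-sign thresholds, feature choices, and training data are hypothetical and are not taken from the paper.

# Edge-side filter sketch: forward a vital-sign sample to the fog/cloud tier
# only when a simple rule or a One-Class SVM flags it as abnormal.
import numpy as np
from sklearn.svm import OneClassSVM

def rule_based_abnormal(heart_rate, spo2):
    # Hypothetical clinical thresholds for illustration only.
    return heart_rate < 40 or heart_rate > 130 or spo2 < 90

# Train the One-Class SVM on "normal" windows only (novelty detection).
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(75, 8, 500),    # heart rate (bpm)
                          rng.normal(97, 1, 500)])   # SpO2 (%)
detector = OneClassSVM(nu=0.05, gamma="scale").fit(normal)

def should_transmit(heart_rate, spo2):
    if rule_based_abnormal(heart_rate, spo2):
        return True
    return detector.predict([[heart_rate, spo2]])[0] == -1  # -1 = outlier

for sample in [(72, 98), (135, 96), (68, 85)]:
    print(sample, "->", "transmit" if should_transmit(*sample) else "drop")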
RevDate: 2025-07-18
Adaptive conflict resolution for IoT transactions: A reinforcement learning-based hybrid validation protocol.
Scientific reports, 15(1):25589.
This paper introduces a novel Reinforcement Learning-Based Hybrid Validation Protocol (RL-CC) that revolutionizes conflict resolution for time-sensitive IoT transactions through adaptive edge-cloud coordination. Efficient transaction management in sensor-based systems is crucial for maintaining data integrity and ensuring timely execution within the constraints of temporal validity. Our key innovation lies in dynamically learning optimal scheduling policies that minimize transaction aborts while maximizing throughput under varying workload conditions. The protocol consists of two validation phases: an edge validation phase, where transactions undergo preliminary conflict detection and prioritization based on their temporal constraints, and a cloud validation phase, where a final conflict resolution mechanism ensures transactional correctness on a global scale. The RL-based mechanism continuously adapts decision-making by learning from system states, prioritizing transactions, and dynamically resolving conflicts using a reward function that accounts for key performance parameters, including the number of conflicting transactions, cost of aborting transactions, temporal validity constraints, and system resource utilization. Experimental results demonstrate that our RL-CC protocol achieves a 90% reduction in transaction abort rates (5% vs. 45% for 2PL), 3x higher throughput (300 TPS vs. 100 TPS), and 70% lower latency compared to traditional concurrency control methods. The proposed RL-CC protocol significantly reduces transaction abort rates, enhances concurrency management, and improves the efficiency of sensor data processing by ensuring that transactions are executed within their temporal validity window. The results suggest that the RL-based approach offers a scalable and adaptive solution for sensor-based applications requiring high-concurrency transaction processing, such as Internet of Things (IoT) networks, real-time monitoring systems, and cyber-physical infrastructures.
Additional Links: PMID-40665124
@article {pmid40665124,
year = {2025},
author = {Khaldy, MAA and Nabot, A and Al-Qerem, A and Jebreen, I and Darem, AA and Alhashmi, AA and Alauthman, M and Aldweesh, A},
title = {Adaptive conflict resolution for IoT transactions: A reinforcement learning-based hybrid validation protocol.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {25589},
pmid = {40665124},
issn = {2045-2322},
abstract = {This paper introduces a novel Reinforcement Learning-Based Hybrid Validation Protocol (RL-CC) that revolutionizes conflict resolution for time-sensitive IoT transactions through adaptive edge-cloud coordination. Efficient transaction management in sensor-based systems is crucial for maintaining data integrity and ensuring timely execution within the constraints of temporal validity. Our key innovation lies in dynamically learning optimal scheduling policies that minimize transaction aborts while maximizing throughput under varying workload conditions. The protocol consists of two validation phases: an edge validation phase, where transactions undergo preliminary conflict detection and prioritization based on their temporal constraints, and a cloud validation phase, where a final conflict resolution mechanism ensures transactional correctness on a global scale. The RL-based mechanism continuously adapts decision-making by learning from system states, prioritizing transactions, and dynamically resolving conflicts using a reward function that accounts for key performance parameters, including the number of conflicting transactions, cost of aborting transactions, temporal validity constraints, and system resource utilization. Experimental results demonstrate that our RL-CC protocol achieves a 90% reduction in transaction abort rates (5% vs. 45% for 2PL), 3x higher throughput (300 TPS vs. 100 TPS), and 70% lower latency compared to traditional concurrency control methods. The proposed RL-CC protocol significantly reduces transaction abort rates, enhances concurrency management, and improves the efficiency of sensor data processing by ensuring that transactions are executed within their temporal validity window. The results suggest that the RL-based approach offers a scalable and adaptive solution for sensor-based applications requiring high-concurrency transaction processing, such as Internet of Things (IoT) networks, real-time monitoring systems, and cyber-physical infrastructures.},
}
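The RL-CC abstract above says its reward accounts for conflicting transactions, abort cost, temporal validity, and resource utilization. The snippet below is an illustrative reward-shaping sketch in that spirit; the state fields, weights, and penalty form are assumptions, not the protocol's actual formulation.

# Illustrative reward function for an RL scheduler over IoT transactions.
from dataclasses import dataclass

@dataclass
class SchedulingState:
    conflicting_txns: int       # transactions conflicting with the candidate
    abort_cost: float           # estimated cost of aborting it
    slack_s: float              # time left before temporal validity expires
    utilization: float          # resource utilization in [0, 1]

def reward(state, w_conflict=1.0, w_abort=0.5, w_deadline=2.0, w_util=1.0):
    deadline_penalty = w_deadline if state.slack_s <= 0 else 0.0
    return (w_util * state.utilization
            - w_conflict * state.conflicting_txns
            - w_abort * state.abort_cost
            - deadline_penalty)

# A learning agent (e.g., tabular Q-learning or a DQN) would use this reward
# to prefer commit/defer/abort actions that keep aborts and deadline misses low.
print(reward(SchedulingState(conflicting_txns=2, abort_cost=1.5,
                             slack_s=0.2, utilization=0.7)))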
RevDate: 2025-07-17
Long-read microbial genome assembly, gene prediction and functional annotation: a service of the MIRRI ERIC Italian node.
Frontiers in bioinformatics, 5:1632189.
BACKGROUND: Understanding the structure and function of microbial genomes is crucial for uncovering their ecological roles, evolutionary trajectories, and potential applications in health, biotechnology, agriculture, food production, and environmental science. However, genome reconstruction and annotation remain computationally demanding and technically complex.
RESULTS: We introduce a bioinformatics platform designed explicitly for long-read microbial sequencing data to address these challenges. Developed as a service of the Italian MIRRI ERIC node, the platform provides a comprehensive solution for analyzing both prokaryotic and eukaryotic genomes, from assembly to functional protein annotation. It integrates state-of-the-art tools (e.g., Canu, Flye, BRAKER3, Prokka, InterProScan) within a reproducible, scalable workflow built on the Common Workflow Language and accelerated through high-performance computing infrastructure. A user-friendly web interface ensures accessibility, even for non-specialists.
CONCLUSION: Through case studies involving three environmentally and clinically significant microorganisms, we demonstrate the ability of the platform to produce reliable, biologically meaningful insights, positioning it as a valuable tool for routine genome analysis and advanced microbial research.
Additional Links: PMID-40662129
@article {pmid40662129,
year = {2025},
author = {Contaldo, SG and d'Acierno, A and Bosio, L and Venice, F and Perottino, EL and Hoyos Rea, JE and Varese, GC and Cordero, F and Beccuti, M},
title = {Long-read microbial genome assembly, gene prediction and functional annotation: a service of the MIRRI ERIC Italian node.},
journal = {Frontiers in bioinformatics},
volume = {5},
number = {},
pages = {1632189},
pmid = {40662129},
issn = {2673-7647},
abstract = {BACKGROUND: Understanding the structure and function of microbial genomes is crucial for uncovering their ecological roles, evolutionary trajectories, and potential applications in health, biotechnology, agriculture, food production, and environmental science. However, genome reconstruction and annotation remain computationally demanding and technically complex.
RESULTS: We introduce a bioinformatics platform designed explicitly for long-read microbial sequencing data to address these challenges. Developed as a service of the Italian MIRRI ERIC node, the platform provides a comprehensive solution for analyzing both prokaryotic and eukaryotic genomes, from assembly to functional protein annotation. It integrates state-of-the-art tools (e.g., Canu, Flye, BRAKER3, Prokka, InterProScan) within a reproducible, scalable workflow built on the Common Workflow Language and accelerated through high-performance computing infrastructure. A user-friendly web interface ensures accessibility, even for non-specialists.
CONCLUSION: Through case studies involving three environmentally and clinically significant microorganisms, we demonstrate the ability of the platform to produce reliable, biologically meaningful insights, positioning it as a valuable tool for routine genome analysis and advanced microbial research.},
}
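The platform above chains assembly and annotation tools inside a CWL workflow that is not reproduced in this listing. As a rough orientation only, the Python sketch below wraps two representative steps (Flye assembly, Prokka annotation) with subprocess calls; the flags and output file names are indicative assumptions and should be checked against the documentation of the tool versions actually installed.

# Minimal wrapper sketch for a long-read assembly + annotation chain.
# Requires Flye and Prokka on PATH; flags below are indicative only.
import subprocess
from pathlib import Path

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

def assemble_and_annotate(reads_fastq, outdir="results", threads=8):
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    asm_dir = out / "assembly"
    ann_dir = out / "annotation"

    # 1) long-read assembly (Nanopore reads assumed here)
    run(["flye", "--nano-raw", reads_fastq,
         "--out-dir", str(asm_dir), "--threads", str(threads)])

    # 2) prokaryotic annotation of the resulting contigs
    run(["prokka", "--outdir", str(ann_dir), "--prefix", "sample",
         "--cpus", str(threads), str(asm_dir / "assembly.fasta")])

if __name__ == "__main__":
    assemble_and_annotate("reads.fastq")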
RevDate: 2025-07-17
Towards a secure cloud repository architecture for the continuous monitoring of patients with mental disorders.
Frontiers in digital health, 7:1567702.
INTRODUCTION: Advances in Information Technology are transforming healthcare systems, with a focus on improving accessibility, efficiency, resilience, and service quality. Wearable devices such as smartwatches and mental health trackers enable continuous biometric data collection, offering significant potential to enhance chronic disorder treatment and overall healthcare quality. However, these technologies introduce critical security and privacy risks, as they handle sensitive patient data.
METHODS: To address these challenges, this paper proposes a security-by-design cloud-based architecture that leverages wearable body sensors for continuous patient monitoring and mental disorder prediction. The system integrates an Elasticsearch-powered backend to manage biometric data securely. A dedicated framework was developed to ensure confidentiality, integrity, and availability (CIA) of patient data through secure communication protocols and privacy-preserving mechanisms.
RESULTS: The proposed architecture successfully enables secure real-time biometric monitoring and data processing from wearable devices. The system is designed to operate 24/7, ensuring robust performance in continuously tracking both mental and physiological health indicators. The inclusion of Elasticsearch provides scalable and efficient data indexing and retrieval, supporting timely healthcare decisions.
DISCUSSION: This work addresses key security and privacy challenges inherent in continuous biometric data collection. By incorporating a security-by-design approach, the proposed framework enhances trustworthiness in healthcare monitoring technologies. The solution demonstrates the feasibility of balancing real-time health monitoring needs with stringent data protection requirements.
Additional Links: PMID-40661652
@article {pmid40661652,
year = {2025},
author = {Georgiou, D and Katsaounis, S and Tsanakas, P and Maglogiannis, I and Gallos, P},
title = {Towards a secure cloud repository architecture for the continuous monitoring of patients with mental disorders.},
journal = {Frontiers in digital health},
volume = {7},
number = {},
pages = {1567702},
pmid = {40661652},
issn = {2673-253X},
abstract = {INTRODUCTION: Advances in Information Technology are transforming healthcare systems, with a focus on improving accessibility, efficiency, resilience, and service quality. Wearable devices such as smartwatches and mental health trackers enable continuous biometric data collection, offering significant potential to enhance chronic disorder treatment and overall healthcare quality. However, these technologies introduce critical security and privacy risks, as they handle sensitive patient data.
METHODS: To address these challenges, this paper proposes a security-by-design cloud-based architecture that leverages wearable body sensors for continuous patient monitoring and mental disorder prediction. The system integrates an Elasticsearch-powered backend to manage biometric data securely. A dedicated framework was developed to ensure confidentiality, integrity, and availability (CIA) of patient data through secure communication protocols and privacy-preserving mechanisms.
RESULTS: The proposed architecture successfully enables secure real-time biometric monitoring and data processing from wearable devices. The system is designed to operate 24/7, ensuring robust performance in continuously tracking both mental and physiological health indicators. The inclusion of Elasticsearch provides scalable and efficient data indexing and retrieval, supporting timely healthcare decisions.
DISCUSSION: This work addresses key security and privacy challenges inherent in continuous biometric data collection. By incorporating a security-by-design approach, the proposed framework enhances trustworthiness in healthcare monitoring technologies. The solution demonstrates the feasibility of balancing real-time health monitoring needs with stringent data protection requirements.},
}
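The repository above manages biometric data with an Elasticsearch-powered backend. The sketch below shows a generic ingest-and-query path with the official Python client; the endpoint, credentials, index name, and document fields are placeholders, and a real deployment would add the TLS, authentication, and access-control measures the security-by-design framework calls for. It assumes a reachable Elasticsearch 8.x cluster.

# Generic Elasticsearch ingest/query sketch (placeholder endpoint and fields).
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200",
                   basic_auth=("monitor_svc", "change-me"),  # placeholder creds
                   verify_certs=True)

# Index one biometric sample from a wearable device.
doc = {
    "patient_id": "anon-0001",   # pseudonymized identifier
    "heart_rate": 84,
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
}
es.index(index="biometrics", document=doc)

# Retrieve recent samples for that patient, newest first.
resp = es.search(index="biometrics",
                 query={"match": {"patient_id": "anon-0001"}},
                 sort=[{"timestamp": {"order": "desc"}}],
                 size=10)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])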
RevDate: 2025-07-16
Visualization of the Evolution and Transmission of Circulating Vaccine-Derived Poliovirus (cVDPV) Outbreaks in the African Region.
Bio-protocol, 15(13):e5376.
Since the creation of the Global Polio Eradication Initiative (GPEI) in 1988, significant progress has been made toward attaining a poliovirus-free world. This has resulted in the eradication of wild poliovirus (WPV) serotypes two (WPV2) and three (WPV3) and limited transmission of serotype one (WPV1) in Pakistan and Afghanistan. However, the increased emergence of circulating vaccine-derived poliovirus (cVDPV) and the continued circulation of WPV1, although limited to two countries, pose a continuous threat of international spread of poliovirus. These challenges highlight the need to further strengthen surveillance and outbreak responses, particularly in the African Region (AFRO). Phylogeographic visualization tools may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective supplementary immunization activities and improved outbreak response and surveillance. We created a comprehensive protocol for the phylogeographic analysis of polioviruses using Nextstrain, a powerful open-source tool for real-time interactive visualization of virus sequencing data. It is expected that this protocol will support poliovirus elimination strategies in AFRO and contribute significantly to global eradication strategies. These tools have been utilized for other pathogens of public health importance, for example, SARS-CoV-2, human influenza, Ebola, and Mpox, among others, through real-time tracking of pathogen evolution (https://nextstrain.org), harnessing the scientific and public health potential of pathogen genome data. Key features • Employs Nextstrain (https://nextstrain.org), which is an open-source tool for real-time interactive visualization of genome sequencing datasets. • First comprehensive protocol for the phylogeographic analysis of poliovirus sequences collected from countries in the World Health Organization (WHO) African Region (AFRO). • Phylogeographic visualization may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective vaccination campaigns. • This protocol can be deployed locally on a personal computer or on a Microsoft Azure cloud server for high throughput.
Additional Links: PMID-40655424
@article {pmid40655424,
year = {2025},
author = {Owuor, CD and Tesfaye, B and Wakem, AYD and Kabore, S and Ikeonu, CO and Doussoh, MEFGE and Sigala, PEMB and Ibrahim, II and Jimoh, A and Ndumba, I and Khumalo, J and Oviaesu, DO and Kipchirchir, C and Gathenji, C and Kipterer, J and Touray, K and Abdullahi, H and Rankin, K and Diop, OM and Chia, JE and Modjirom, N and Ahmed, JA and Kfutwah, AKW},
title = {Visualization of the Evolution and Transmission of Circulating Vaccine-Derived Poliovirus (cVDPV) Outbreaks in the African Region.},
journal = {Bio-protocol},
volume = {15},
number = {13},
pages = {e5376},
pmid = {40655424},
issn = {2331-8325},
abstract = {Since the creation of the Global Polio Eradication Initiative (GPEI) in 1988, significant progress has been made toward attaining a poliovirus-free world. This has resulted in the eradication of wild poliovirus (WPV) serotypes two (WPV2) and three (WPV3) and limited transmission of serotype one (WPV1) in Pakistan and Afghanistan. However, the increased emergence of circulating vaccine-derived poliovirus (cVDPV) and the continued circulation of WPV1, although limited to two countries, pose a continuous threat of international spread of poliovirus. These challenges highlight the need to further strengthen surveillance and outbreak responses, particularly in the African Region (AFRO). Phylogeographic visualization tools may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective supplementary immunization activities and improved outbreak response and surveillance. We created a comprehensive protocol for the phylogeographic analysis of polioviruses using Nextstrain, a powerful open-source tool for real-time interactive visualization of virus sequencing data. It is expected that this protocol will support poliovirus elimination strategies in AFRO and contribute significantly to global eradication strategies. These tools have been utilized for other pathogens of public health importance, for example, SARS-CoV-2, human influenza, Ebola, and Mpox, among others, through real-time tracking of pathogen evolution (https://nextstrain.org), harnessing the scientific and public health potential of pathogen genome data. Key features • Employs Nextstrain (https://nextstrain.org), which is an open-source tool for real-time interactive visualization of genome sequencing datasets. • First comprehensive protocol for the phylogeographic analysis of poliovirus sequences collected from countries in the World Health Organization (WHO) African Region (AFRO). • Phylogeographic visualization may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective vaccination campaigns. • This protocol can be deployed locally on a personal computer or on a Microsoft Azure cloud server for high throughput.},
}
RevDate: 2025-07-16
Towards Intelligent Safety: A Systematic Review on Assault Detection and Technologies.
Sensors (Basel, Switzerland), 25(13):.
This review of literature discusses the use of emerging technologies in the prevention of assault, specifically Artificial Intelligence (AI), the Internet of Things (IoT), and wearable technologies. In preventing assaults, GIS-based mobile apps, wearable safety devices, and personal security solutions have been designed to improve personal security, especially for women and the vulnerable. The paper also analyzes interfacing networks, such as edge computing, cloud databases, and security frameworks required for emergency response solutions. In addition, we introduced a framework that brings these technologies together to deliver an effective response system. This review seeks to identify gaps currently present, ascertain major challenges, and suggest potential directions for enhanced personal security with the use of technology.
Additional Links: PMID-40648240
@article {pmid40648240,
year = {2025},
author = {Shyam Sundar Bhuvaneswari, VS and Thangamuthu, M},
title = {Towards Intelligent Safety: A Systematic Review on Assault Detection and Technologies.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {13},
pages = {},
pmid = {40648240},
issn = {1424-8220},
abstract = {This review of literature discusses the use of emerging technologies in the prevention of assault, specifically Artificial Intelligence (AI), the Internet of Things (IoT), and wearable technologies. In preventing assaults, GIS-based mobile apps, wearable safety devices, and personal security solutions have been designed to improve personal security, especially for women and the vulnerable. The paper also analyzes interfacing networks, such as edge computing, cloud databases, and security frameworks required for emergency response solutions. In addition, we introduced a framework that brings these technologies together to deliver an effective response system. This review seeks to identify gaps currently present, ascertain major challenges, and suggest potential directions for enhanced personal security with the use of technology.},
}
RevDate: 2025-07-16
Multi-Area, Multi-Service and Multi-Tier Edge-Cloud Continuum Planning.
Sensors (Basel, Switzerland), 25(13):.
This paper presents the optimal planning of multi-area, multi-service, and multi-tier edge-cloud environments. The goal is to evaluate the regional deployment of the compute continuum, i.e., the type and number of processing devices and their pairing with a specific tier and task across different areas, subject to processing, rate, and latency requirements. Different offline compute continuum planning approaches are investigated, and a detailed analysis of various design choices is presented. We study one scheme that considers all tasks at once and two iterative schemes that use smaller task batches; the latter two finish once all task groups have been traversed. Group-based approaches are introduced to cope with the potentially excessive execution times of real-world problem sizes. Solutions are provided for continuum planning using both direct, more complex methods and simpler, faster ones. Results show that processing all tasks simultaneously yields better performance but requires longer execution, while medium-sized batches achieve good performance faster. Thus, the batch-oriented schemes can handle larger problem sizes. Moreover, the task selection strategy in group-based schemes influences performance. A more detailed analysis is performed for this case, and different clustering methods are also considered. Based on our simulations, random selection of tasks in group-based approaches achieves better performance in most cases.
Additional Links: PMID-40648206
@article {pmid40648206,
year = {2025},
author = {Roumeliotis, AJ and Myritzis, E and Kosmatos, E and Katsaros, KV and Amditis, AJ},
title = {Multi-Area, Multi-Service and Multi-Tier Edge-Cloud Continuum Planning.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {13},
pages = {},
pmid = {40648206},
issn = {1424-8220},
support = {Grant Agreement Number 101060294//European Union's (EU) Horizon Europe research and innovation programme XGain/ ; },
abstract = {This paper presents the optimal planning of multi-area, multi-service, and multi-tier edge-cloud environments. The goal is to evaluate the regional deployment of the compute continuum, i.e., the type and number of processing devices and their pairing with a specific tier and task across different areas, subject to processing, rate, and latency requirements. Different offline compute continuum planning approaches are investigated, and a detailed analysis of various design choices is presented. We study one scheme that considers all tasks at once and two iterative schemes that use smaller task batches; the latter two finish once all task groups have been traversed. Group-based approaches are introduced to cope with the potentially excessive execution times of real-world problem sizes. Solutions are provided for continuum planning using both direct, more complex methods and simpler, faster ones. Results show that processing all tasks simultaneously yields better performance but requires longer execution, while medium-sized batches achieve good performance faster. Thus, the batch-oriented schemes can handle larger problem sizes. Moreover, the task selection strategy in group-based schemes influences performance. A more detailed analysis is performed for this case, and different clustering methods are also considered. Based on our simulations, random selection of tasks in group-based approaches achieves better performance in most cases.},
}
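The planning problem above assigns tasks to tiers under processing and latency constraints. The sketch below is a toy greedy placement over hypothetical edge/fog/cloud tiers; it only illustrates the flavour of the decision and does not reproduce the paper's optimization model or batching schemes.

# Toy greedy task-to-tier placement under capacity and latency constraints.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity: float      # remaining processing capacity (arbitrary units)
    latency_ms: float    # access latency from the task's area

@dataclass
class Task:
    demand: float        # processing demand
    max_latency_ms: float

tiers = [Tier("edge", 10, 5), Tier("fog", 30, 20), Tier("cloud", 1000, 80)]
tasks = [Task(4, 10), Task(8, 50), Task(20, 100), Task(3, 60)]

def place(tasks, tiers):
    placement = []
    for task in sorted(tasks, key=lambda t: t.max_latency_ms):  # tightest first
        chosen = None
        for tier in tiers:                    # tiers ordered by latency
            if tier.latency_ms <= task.max_latency_ms and tier.capacity >= task.demand:
                chosen = tier
                break
        if chosen:
            chosen.capacity -= task.demand
        placement.append((task, chosen.name if chosen else "unplaced"))
    return placement

for task, tier in place(tasks, tiers):
    print(task, "->", tier)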
RevDate: 2025-07-13
Ranking data privacy techniques in cloud computing based on Tamir's complex fuzzy Schweizer-Sklar aggregation approach.
Scientific reports, 15(1):24943 pii:10.1038/s41598-025-09557-z.
In the era of cloud computing, securing data privacy while storing and processing massive amounts of sensitive information in shared environments has become an important challenge. Cloud platforms have become a necessary component for managing personal, commercial, and governmental data, so the demand for effective data privacy techniques within cloud security frameworks has increased. Data privacy is no longer just an exercise in compliance but also a means to reassure stakeholders and protect valuable information from cyber-attacks. The decision-making (DM) landscape for cloud providers is therefore extremely complex, because they must select the optimal approach from a very wide gamut of privacy techniques, ranging from encryption to anonymization. A novel complex fuzzy Schweizer-Sklar aggregation approach can rank and prioritize data privacy techniques and is particularly suitable for cloud settings; our method can readily handle the uncertainties and multi-dimensional aspects of privacy evaluation. In this manuscript, we first introduce the fundamental Schweizer-Sklar operational laws for a Cartesian form of the complex fuzzy framework. Relying on these operational laws, we then introduce the notions of Cartesian-form complex fuzzy Schweizer-Sklar power average and complex fuzzy Schweizer-Sklar power geometric aggregation operators (AOs). We establish the main properties of these notions, such as idempotency, boundedness, and monotonicity, and present an algorithm for applying the developed theory. Moreover, we provide an illustrative example and a case study showing the ranking of data privacy techniques in cloud computing. At the end of the manuscript, a comparative analysis is discussed to demonstrate the advantages of the introduced work.
Additional Links: PMID-40640370
@article {pmid40640370,
year = {2025},
author = {Ahmmad, J and El-Wahed Khalifa, HA and Waqas, HM and Alburaikan, A and Radwan, T},
title = {Ranking data privacy techniques in cloud computing based on Tamir's complex fuzzy Schweizer-Sklar aggregation approach.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {24943},
doi = {10.1038/s41598-025-09557-z},
pmid = {40640370},
issn = {2045-2322},
abstract = {In the era of cloud computing, securing data privacy while storing and processing massive amounts of sensitive information in shared environments has become an important challenge. Cloud platforms have become a necessary component for managing personal, commercial, and governmental data, so the demand for effective data privacy techniques within cloud security frameworks has increased. Data privacy is no longer just an exercise in compliance but also a means to reassure stakeholders and protect valuable information from cyber-attacks. The decision-making (DM) landscape for cloud providers is therefore extremely complex, because they must select the optimal approach from a very wide gamut of privacy techniques, ranging from encryption to anonymization. A novel complex fuzzy Schweizer-Sklar aggregation approach can rank and prioritize data privacy techniques and is particularly suitable for cloud settings; our method can readily handle the uncertainties and multi-dimensional aspects of privacy evaluation. In this manuscript, we first introduce the fundamental Schweizer-Sklar operational laws for a Cartesian form of the complex fuzzy framework. Relying on these operational laws, we then introduce the notions of Cartesian-form complex fuzzy Schweizer-Sklar power average and complex fuzzy Schweizer-Sklar power geometric aggregation operators (AOs). We establish the main properties of these notions, such as idempotency, boundedness, and monotonicity, and present an algorithm for applying the developed theory. Moreover, we provide an illustrative example and a case study showing the ranking of data privacy techniques in cloud computing. At the end of the manuscript, a comparative analysis is discussed to demonstrate the advantages of the introduced work.},
}
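The aggregation operators in the entry above are built on Schweizer-Sklar operational laws. The sketch below implements only the classical Schweizer-Sklar t-norm and its dual t-conorm on real membership grades; it does not attempt the paper's Cartesian complex fuzzy power-average or power-geometric operators, and the parameter values in the example are arbitrary.

# Classical Schweizer-Sklar t-norm and dual t-conorm (membership grades in (0, 1]).
def ss_tnorm(x, y, p):
    """Schweizer-Sklar t-norm with parameter p != 0; p -> 0 gives the product."""
    if p == 0:
        return x * y
    return max(x ** p + y ** p - 1.0, 0.0) ** (1.0 / p)

def ss_tconorm(x, y, p):
    """Dual t-conorm: S(x, y) = 1 - T(1 - x, 1 - y)."""
    return 1.0 - ss_tnorm(1.0 - x, 1.0 - y, p)

if __name__ == "__main__":
    for p in (-2.0, 2.0):
        print(p, ss_tnorm(0.7, 0.8, p), ss_tconorm(0.7, 0.8, p))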
RevDate: 2025-07-13
Aqua-MC as a simple open access code for uncountable runs of AquaCrop.
Scientific reports, 15(1):24975.
Understanding uncertainty in crop modeling is essential for improving prediction accuracy and decision-making in agricultural management. Monte Carlo simulations are widely used for uncertainty and sensitivity analysis, but their application to closed-source models like AquaCrop presents significant challenges due to the lack of direct access to source code. This study introduces Aqua-MC, an automated framework designed to facilitate Monte Carlo simulations in AquaCrop by integrating probabilistic parameter selection, iterative execution, and uncertainty quantification within a structured workflow. To demonstrate its effectiveness, Aqua-MC was applied to wheat yield modeling in Qazvin, Iran, where parameter uncertainty was assessed using 3000 Monte Carlo simulations. The DYNIA (Dynamic Identifiability Analysis) method was employed to evaluate the time-dependent sensitivity of 47 model parameters, providing insights into the temporal evolution of parameter influence. The results revealed that soil evaporation and yield predictions exhibited the highest uncertainty, while transpiration and biomass outputs were more stable. The study also highlighted that many parameters had low impact, suggesting that reducing the number of free parameters could enhance model efficiency. Despite its advantages, Aqua-MC has some limitations, including its computational intensity and reliance on the GLUE method, which may overestimate uncertainty bounds. To improve applicability, future research should focus on parallel computing, cloud-based execution, integration with machine learning techniques, and expanding Aqua-MC to multi-crop studies. By overcoming the limitations of closed-source models, Aqua-MC provides a scalable and efficient solution for performing large-scale uncertainty analysis in crop modeling.
Additional Links: PMID-40640355
@article {pmid40640355,
year = {2025},
author = {Adabi, V and Etedali, HR and Azizian, A and Gorginpaveh, F and Salem, A and Elbeltagi, A},
title = {Aqua-MC as a simple open access code for uncountable runs of AquaCrop.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {24975},
pmid = {40640355},
issn = {2045-2322},
abstract = {Understanding uncertainty in crop modeling is essential for improving prediction accuracy and decision-making in agricultural management. Monte Carlo simulations are widely used for uncertainty and sensitivity analysis, but their application to closed-source models like AquaCrop presents significant challenges due to the lack of direct access to source code. This study introduces Aqua-MC, an automated framework designed to facilitate Monte Carlo simulations in AquaCrop by integrating probabilistic parameter selection, iterative execution, and uncertainty quantification within a structured workflow. To demonstrate its effectiveness, Aqua-MC was applied to wheat yield modeling in Qazvin, Iran, where parameter uncertainty was assessed using 3000 Monte Carlo simulations. The DYNIA (Dynamic Identifiability Analysis) method was employed to evaluate the time-dependent sensitivity of 47 model parameters, providing insights into the temporal evolution of parameter influence. The results revealed that soil evaporation and yield predictions exhibited the highest uncertainty, while transpiration and biomass outputs were more stable. The study also highlighted that many parameters had low impact, suggesting that reducing the number of free parameters could enhance model efficiency. Despite its advantages, Aqua-MC has some limitations, including its computational intensity and reliance on the GLUE method, which may overestimate uncertainty bounds. To improve applicability, future research should focus on parallel computing, cloud-based execution, integration with machine learning techniques, and expanding Aqua-MC to multi-crop studies. By overcoming the limitations of closed-source models, Aqua-MC provides a scalable and efficient solution for performing large-scale uncertainty analysis in crop modeling.},
}
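To make the workflow above concrete, here is a minimal sketch of the kind of Monte Carlo wrapper the paper describes: sample parameters from prior ranges, run the model once per sample, and weight each run with a GLUE-style informal likelihood. The parameter names, ranges, observed yield, and the surrogate model function are illustrative placeholders, not Aqua-MC's actual interface; in practice the surrogate would be replaced by a call to the AquaCrop executable and a parser for its output files.

# Minimal Monte Carlo / GLUE-style sketch around a crop model.
# All parameter names, ranges and the surrogate model below are illustrative
# placeholders -- they are not Aqua-MC's real interface or AquaCrop's file format.
import numpy as np

rng = np.random.default_rng(42)
PARAM_RANGES = {                       # assumed uniform priors
    "canopy_growth_coeff": (0.005, 0.015),
    "harvest_index":       (0.30, 0.55),
    "max_rooting_depth_m": (0.80, 2.00),
}

def run_model(sample: dict) -> float:
    """Stand-in for one AquaCrop run returning yield (t/ha).
    Replace this body with a subprocess call to the real executable plus
    parsing of its output file; a toy surrogate keeps the sketch runnable."""
    return 10.0 * sample["harvest_index"] * (1.0 + 40.0 * sample["canopy_growth_coeff"])

def glue_likelihood(sim: float, obs: float) -> float:
    """Common informal GLUE likelihood: inverse squared error."""
    return 1.0 / (1e-6 + (sim - obs) ** 2)

observed_yield = 5.0                   # t/ha, illustrative observation
runs = []
for _ in range(3000):                  # the paper reports 3000 Monte Carlo runs
    sample = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
    runs.append((sample, glue_likelihood(run_model(sample), observed_yield)))

# Behavioural (high-likelihood) parameter sets approximate the posterior ranges.
runs.sort(key=lambda r: r[1], reverse=True)
print("best parameter set:", runs[0][0])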
RevDate: 2025-07-13
CmpDate: 2025-07-10
Exploiting heart rate variability for driver drowsiness detection using wearable sensors and machine learning.
Scientific reports, 15(1):24898.
Driver drowsiness is a critical issue in transportation systems and a leading cause of traffic accidents. Common factors contributing to accidents include intoxicated driving, fatigue, and sleep deprivation. Drowsiness significantly impairs a driver's response time, awareness, and judgment. Implementing systems capable of detecting and alerting drivers to drowsiness is therefore essential for accident prevention. This paper examines the feasibility of using heart rate variability (HRV) analysis to assess driver drowsiness. It explores the physiological basis of HRV and its correlation with drowsiness. We propose a system model that integrates wearable devices equipped with photoplethysmography (PPG) sensors, transmitting data to a smartphone and then to a cloud server. Two novel algorithms are developed to segment and label features periodically, predicting drowsiness levels based on HRV derived from PPG signals. The proposed approach is evaluated using real-driving data and supervised machine learning techniques. Six classification algorithms are applied to labeled datasets, with performance metrics such as accuracy, precision, recall, F1-score, and runtime assessed to determine the most effective algorithm for timely drowsiness detection and driver alerting. Our results demonstrate that the Random Forest (RF) classifier achieves the highest testing accuracy (86.05%), precision (87.16%), recall (93.61%), and F1-score (89.02%) with the smallest mean change between training and testing datasets (-4.30%), highlighting its robustness for real-world deployment. The Support Vector Machine with Radial Basis Function (SVM-RBF) also shows strong generalization performance, with a testing F1-score of 87.15% and the smallest mean change of -3.97%. These findings suggest that HRV-based drowsiness detection systems can be effectively integrated into Advanced Driver Assistance Systems (ADAS) to enhance driver safety by providing timely alerts, thereby reducing the risk of accidents caused by drowsiness.
Additional Links: PMID-40640285
@article {pmid40640285,
year = {2025},
author = {AlArnaout, Z and Zaki, C and Kotb, Y and AlAkkoumi, M and Mostafa, N},
title = {Exploiting heart rate variability for driver drowsiness detection using wearable sensors and machine learning.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {24898},
pmid = {40640285},
issn = {2045-2322},
mesh = {Humans ; *Heart Rate/physiology ; *Wearable Electronic Devices ; *Automobile Driving ; *Machine Learning ; Algorithms ; Photoplethysmography/methods ; Accidents, Traffic/prevention & control ; Male ; *Sleep Stages/physiology ; Adult ; Female ; },
abstract = {Driver drowsiness is a critical issue in transportation systems and a leading cause of traffic accidents. Common factors contributing to accidents include intoxicated driving, fatigue, and sleep deprivation. Drowsiness significantly impairs a driver's response time, awareness, and judgment. Implementing systems capable of detecting and alerting drivers to drowsiness is therefore essential for accident prevention. This paper examines the feasibility of using heart rate variability (HRV) analysis to assess driver drowsiness. It explores the physiological basis of HRV and its correlation with drowsiness. We propose a system model that integrates wearable devices equipped with photoplethysmography (PPG) sensors, transmitting data to a smartphone and then to a cloud server. Two novel algorithms are developed to segment and label features periodically, predicting drowsiness levels based on HRV derived from PPG signals. The proposed approach is evaluated using real-driving data and supervised machine learning techniques. Six classification algorithms are applied to labeled datasets, with performance metrics such as accuracy, precision, recall, F1-score, and runtime assessed to determine the most effective algorithm for timely drowsiness detection and driver alerting. Our results demonstrate that the Random Forest (RF) classifier achieves the highest testing accuracy (86.05%), precision (87.16%), recall (93.61%), and F1-score (89.02%) with the smallest mean change between training and testing datasets (-4.30%), highlighting its robustness for real-world deployment. The Support Vector Machine with Radial Basis Function (SVM-RBF) also shows strong generalization performance, with a testing F1-score of 87.15% and the smallest mean change of -3.97%. These findings suggest that HRV-based drowsiness detection systems can be effectively integrated into Advanced Driver Assistance Systems (ADAS) to enhance driver safety by providing timely alerts, thereby reducing the risk of accidents caused by drowsiness.},
}
MeSH Terms:
Humans
*Heart Rate/physiology
*Wearable Electronic Devices
*Automobile Driving
*Machine Learning
Algorithms
Photoplethysmography/methods
Accidents, Traffic/prevention & control
Male
*Sleep Stages/physiology
Adult
Female
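As a rough illustration of the HRV-based classification pipeline described in the citation above (PMID-40640285), the sketch below derives standard time-domain HRV features from windows of RR intervals and trains a Random Forest on them. The windowing, feature set, synthetic data, and labels are generic placeholders; the paper's own segmentation and labelling algorithms are not reproduced here.

# Generic HRV-feature + Random Forest sketch; synthetic data stands in for
# PPG-derived RR intervals, and the feature set is the usual time-domain trio,
# not the paper's exact algorithms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def hrv_features(rr_ms: np.ndarray) -> list:
    """Time-domain HRV measures from one window of RR intervals (milliseconds)."""
    diff = np.diff(rr_ms)
    sdnn = rr_ms.std(ddof=1)                  # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))       # beat-to-beat variability
    pnn50 = np.mean(np.abs(diff) > 50) * 100  # % of successive differences > 50 ms
    return [rr_ms.mean(), sdnn, rmssd, pnn50]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):                          # 200 synthetic windows
    drowsy = int(rng.integers(0, 2))
    # Assumption for the synthetic data only: drowsy windows get longer, more
    # variable RR intervals than alert ones.
    rr = rng.normal(900 if drowsy else 750, 60 if drowsy else 35, size=120)
    X.append(hrv_features(rr))
    y.append(drowsy)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))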
RevDate: 2025-07-12
A unified model integrating UTAUT-Behavioural intension and Object-Oriented approaches for sustainable adoption of Cloud-Based collaborative platforms in higher education.
Scientific reports, 15(1):24767.
In recent years, cloud computing (CC) services have expanded rapidly, with platforms such as Google Drive, Dropbox, and Apple iCloud gaining global adoption. This study develops a predictive model to identify the key factors influencing Jordanian academics' behavioral intention to adopt sustainable cloud-based collaborative systems (SCBCS). By integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) with system design methodologies, we put forward a comprehensive research model to improve the adoption and efficiency of SCBCS in developing countries. Using cross-sectional data from 500 professors in Jordanian higher education institutions, we adapt and extend the UTAUT model to explain behavioral intention and assess its impact on teaching and learning processes. Both exploratory and confirmatory analyses show that the expanded UTAUT model significantly improves the variance explained in behavioral intention. The key findings reveal that behavioral control, effort expectancy, and social influence significantly affect attitudes toward using cloud services; the study also contributes to sustainable development goals by promoting the adoption of energy-efficient and resource-optimized cloud-based platforms in higher education. The findings provide actionable insights for policymakers and educators seeking to improve sustainable technology adoption in developing countries, ultimately improving the quality and sustainability of educational processes.
Additional Links: PMID-40634436
@article {pmid40634436,
year = {2025},
author = {Feng, K and Haridas, D},
title = {A unified model integrating UTAUT-Behavioural intension and Object-Oriented approaches for sustainable adoption of Cloud-Based collaborative platforms in higher education.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {24767},
pmid = {40634436},
issn = {2045-2322},
abstract = {In recent years, cloud computing (CC) services have expanded rapidly, with platforms such as Google Drive, Dropbox, and Apple iCloud gaining global adoption. This study develops a predictive model to identify the key factors influencing Jordanian academics' behavioral intention to adopt sustainable cloud-based collaborative systems (SCBCS). By integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) with system design methodologies, we put forward a comprehensive research model to improve the adoption and efficiency of SCBCS in developing countries. Using cross-sectional data from 500 professors in Jordanian higher education institutions, we adapt and extend the UTAUT model to explain behavioral intention and assess its impact on teaching and learning processes. Both exploratory and confirmatory analyses show that the expanded UTAUT model significantly improves the variance explained in behavioral intention. The key findings reveal that behavioral control, effort expectancy, and social influence significantly affect attitudes toward using cloud services; the study also contributes to sustainable development goals by promoting the adoption of energy-efficient and resource-optimized cloud-based platforms in higher education. The findings provide actionable insights for policymakers and educators seeking to improve sustainable technology adoption in developing countries, ultimately improving the quality and sustainability of educational processes.},
}
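The core quantitative claim above, that an extended set of UTAUT constructs explains more variance in behavioural intention, boils down to a regression-style model. A toy sketch of that idea follows; the construct scores are synthetic, only the three constructs named in the abstract are included, and the simple OLS fit is a stand-in for the paper's exploratory and confirmatory analyses.

# Toy regression of behavioural intention on three UTAUT-style constructs.
# Scores are synthetic; this is not the paper's measurement model or analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500                                               # matches the reported sample size
df = pd.DataFrame({
    "effort_expectancy":   rng.normal(3.5, 0.8, n),   # assumed Likert-style construct scores
    "social_influence":    rng.normal(3.2, 0.9, n),
    "behavioural_control": rng.normal(3.4, 0.7, n),
})
# Synthetic outcome: intention driven by the three constructs plus noise.
df["behavioural_intention"] = (
    0.4 * df["effort_expectancy"]
    + 0.3 * df["social_influence"]
    + 0.3 * df["behavioural_control"]
    + rng.normal(0.0, 0.5, n)
)

fit = smf.ols(
    "behavioural_intention ~ effort_expectancy + social_influence + behavioural_control",
    data=df,
).fit()
print(fit.params)                                     # estimated effect of each construct
print("variance explained (R^2):", round(fit.rsquared, 3))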
RevDate: 2025-07-11
CmpDate: 2025-07-09
Workpiece surface defect detection based on YOLOv11 and edge computing.
PloS one, 20(7):e0327546.
The rapid development of modern industry has significantly raised the demand for workpieces. To ensure the quality of workpieces, workpiece surface defect detection has become an indispensable part of industrial production. Most workpiece surface defect detection technologies rely on cloud computing. However, transmitting large volumes of data via wireless networks places substantial computational burdens on cloud servers, significantly reducing defect detection speed. Therefore, to enable efficient and precise detection, this paper proposes a workpiece surface defect detection method based on YOLOv11 and edge computing. First, the NEU-DET dataset was expanded using random flipping, cropping, and the self-attention generative adversarial network (SA-GAN). Then, the accuracy indicators of the YOLOv7-YOLOv11 models were compared on NEU-DET and validated on the Tianchi aluminium profile surface defect dataset. Finally, the cloud-based YOLOv11 model, which achieved the highest accuracy, was converted to the edge-based YOLOv11-RKNN model and deployed on the RK3568 edge device to improve the detection speed. Results indicate that YOLOv11 with SA-GAN achieved mAP@0.5 improvements of 7.7%, 3.1%, 5.9%, and 7.0% over YOLOv7, YOLOv8, YOLOv9, and YOLOv10, respectively, on the NEU-DET dataset. Moreover, YOLOv11 with SA-GAN achieved an 87.0% mAP@0.5 on the Tianchi aluminium profile surface defect dataset, outperforming the other models again. This verifies the generalisability of the YOLOv11 model. Additionally, quantising and deploying YOLOv11 on the edge device reduced its size from 10,156 kB to 4,194 kB and reduced its single-image detection time from 52.1ms to 33.6ms, which represents a significant efficiency enhancement.
Additional Links: PMID-40632737
@article {pmid40632737,
year = {2025},
author = {Wang, Z and Ding, T and Liang, S and Cui, H and Gao, X},
title = {Workpiece surface defect detection based on YOLOv11 and edge computing.},
journal = {PloS one},
volume = {20},
number = {7},
pages = {e0327546},
pmid = {40632737},
issn = {1932-6203},
abstract = {The rapid development of modern industry has significantly raised the demand for workpieces. To ensure the quality of workpieces, workpiece surface defect detection has become an indispensable part of industrial production. Most workpiece surface defect detection technologies rely on cloud computing. However, transmitting large volumes of data via wireless networks places substantial computational burdens on cloud servers, significantly reducing defect detection speed. Therefore, to enable efficient and precise detection, this paper proposes a workpiece surface defect detection method based on YOLOv11 and edge computing. First, the NEU-DET dataset was expanded using random flipping, cropping, and the self-attention generative adversarial network (SA-GAN). Then, the accuracy indicators of the YOLOv7-YOLOv11 models were compared on NEU-DET and validated on the Tianchi aluminium profile surface defect dataset. Finally, the cloud-based YOLOv11 model, which achieved the highest accuracy, was converted to the edge-based YOLOv11-RKNN model and deployed on the RK3568 edge device to improve the detection speed. Results indicate that YOLOv11 with SA-GAN achieved mAP@0.5 improvements of 7.7%, 3.1%, 5.9%, and 7.0% over YOLOv7, YOLOv8, YOLOv9, and YOLOv10, respectively, on the NEU-DET dataset. Moreover, YOLOv11 with SA-GAN achieved an 87.0% mAP@0.5 on the Tianchi aluminium profile surface defect dataset, outperforming the other models again. This verifies the generalisability of the YOLOv11 model. Additionally, quantising and deploying YOLOv11 on the edge device reduced its size from 10,156 kB to 4,194 kB and reduced its single-image detection time from 52.1ms to 33.6ms, which represents a significant efficiency enhancement.},
}
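For readers who want to reproduce the cloud-side portion of the pipeline above, the ultralytics package exposes a YOLO11-style train/validate/export workflow along these lines. The checkpoint name and dataset YAML below are assumptions (the YAML must point at a local copy of NEU-DET), and the SA-GAN augmentation and final RKNN conversion for the RK3568 are not shown; that conversion is normally done with Rockchip's rknn-toolkit2 starting from an exported ONNX model.

# Sketch of the cloud-side train/validate/export steps with the ultralytics API.
# "yolo11n.pt" and "neu-det.yaml" are assumed names; the dataset YAML must be
# created locally, and SA-GAN augmentation / RKNN conversion are not shown.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                               # assumed pretrained YOLO11 weights
model.train(data="neu-det.yaml", epochs=100, imgsz=640)  # hypothetical NEU-DET dataset config

metrics = model.val()                                    # reports mAP@0.5 among other metrics
print("mAP@0.5:", metrics.box.map50)

# Export to ONNX as the intermediate format; conversion to .rknn for the RK3568
# NPU is then performed separately with Rockchip's rknn-toolkit2.
model.export(format="onnx", imgsz=640)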
RevDate: 2025-07-28
CmpDate: 2025-07-09
Mental health help-seeking behaviours of East Asian immigrants: a scoping review.
European journal of psychotraumatology, 16(1):2514327.
Background: The global immigrant population is increasing annually, and Asian immigrants have a substantial representation within the immigrant population. Due to a myriad of challenges such as acculturation, discrimination, language, and financial issues, immigrants are at high risk of mental health conditions. However, a large-scale mapping of the existing literature regarding these issues has yet to be completed.Objective: This study aimed to investigate the mental health conditions, help-seeking behaviours, and factors affecting mental health service utilization among East Asian immigrants residing in Western countries.Method: This study adopted the scoping review methodology based on the Joanna Briggs Institute framework. A comprehensive database search was conducted in May 2024 in PubMed, CINAHL, Embase, Cochrane, and Google Scholar. Search terms were developed based on participants, concept, context framework. The participants were East Asian immigrants and their families, and the concept of interest was mental health help-seeking behaviours and mental health service utilization. Regarding the context, studies targeting East Asian immigrants in Western countries were included. Data were summarized narratively and presented in a tabular and word cloud format.Results: Out of 1990 studies, 31 studies were included. East Asian immigrants often face mental health conditions, including depression, anxiety, and suicidal behaviours. They predominantly sought help from informal sources such as family, friends, religion, and complementary or alternative medicine, rather than from formal sources such as mental health clinics or healthcare professionals. Facilitators of seeking help included recognizing the need for professional help, experiencing severe symptoms, higher levels of acculturation, longer length of stay in the host country. Barriers included stigma, cultural beliefs, and language barriers.Conclusions: The review emphasizes the need for culturally tailored interventions to improve mental health outcomes in this vulnerable population. These results can guide future research and policymaking to address mental health disparities in immigrant communities.
Additional Links: PMID-40631378
@article {pmid40631378,
year = {2025},
author = {Park, J and Lee, S and Park, G and Woodward, S},
title = {Mental health help-seeking behaviours of East Asian immigrants: a scoping review.},
journal = {European journal of psychotraumatology},
volume = {16},
number = {1},
pages = {2514327},
pmid = {40631378},
issn = {2000-8066},
mesh = {Humans ; *Emigrants and Immigrants/psychology/statistics & numerical data ; *Help-Seeking Behavior ; *Mental Health Services/statistics & numerical data ; *Patient Acceptance of Health Care/ethnology ; Asia, Eastern/ethnology ; *Mental Disorders/therapy/ethnology ; Acculturation ; Mental Health/ethnology ; East Asian People ; },
abstract = {Background: The global immigrant population is increasing annually, and Asian immigrants have a substantial representation within the immigrant population. Due to a myriad of challenges such as acculturation, discrimination, language, and financial issues, immigrants are at high risk of mental health conditions. However, a large-scale mapping of the existing literature regarding these issues has yet to be completed.Objective: This study aimed to investigate the mental health conditions, help-seeking behaviours, and factors affecting mental health service utilization among East Asian immigrants residing in Western countries.Method: This study adopted the scoping review methodology based on the Joanna Briggs Institute framework. A comprehensive database search was conducted in May 2024 in PubMed, CINAHL, Embase, Cochrane, and Google Scholar. Search terms were developed based on participants, concept, context framework. The participants were East Asian immigrants and their families, and the concept of interest was mental health help-seeking behaviours and mental health service utilization. Regarding the context, studies targeting East Asian immigrants in Western countries were included. Data were summarized narratively and presented in a tabular and word cloud format.Results: Out of 1990 studies, 31 studies were included. East Asian immigrants often face mental health conditions, including depression, anxiety, and suicidal behaviours. They predominantly sought help from informal sources such as family, friends, religion, and complementary or alternative medicine, rather than from formal sources such as mental health clinics or healthcare professionals. Facilitators of seeking help included recognizing the need for professional help, experiencing severe symptoms, higher levels of acculturation, longer length of stay in the host country. Barriers included stigma, cultural beliefs, and language barriers.Conclusions: The review emphasizes the need for culturally tailored interventions to improve mental health outcomes in this vulnerable population. These results can guide future research and policymaking to address mental health disparities in immigrant communities.},
}
MeSH Terms:
Humans
*Emigrants and Immigrants/psychology/statistics & numerical data
*Help-Seeking Behavior
*Mental Health Services/statistics & numerical data
*Patient Acceptance of Health Care/ethnology
Asia, Eastern/ethnology
*Mental Disorders/therapy/ethnology
Acculturation
Mental Health/ethnology
East Asian People
RevDate: 2025-08-11
A 4×256 Gbps silicon transmitter with on-chip adaptive dispersion compensation.
Nature communications, 16(1):6268.
The exponential growth of data traffic propelled by cloud computing and artificial intelligence necessitates advanced optical interconnect solutions. While wavelength division multiplexing (WDM) enhances optical module transmission capacity, chromatic dispersion becomes a critical limitation as single-lane rates exceed 200 Gbps. Here we demonstrate a 4-channel silicon transmitter achieving 1 Tbps aggregate data rate through integrated adaptive dispersion compensation. This transmitter utilizes Mach-Zehnder modulators with adjustable input intensity splitting ratios, enabling precise control over the chirp magnitude and sign to counteract specific dispersion. At 1271 nm (-3.99 ps/nm/km), the proposed transmitter enabled 4 × 256 Gbps transmission over 5 km fiber, achieving bit error ratio below both the soft-decision forward-error correction threshold with feed-forward equalization (FFE) alone and the hard-decision forward-error correction threshold when combining FFE with maximum-likelihood sequence detection. Our results highlight a significant leap towards scalable, energy-efficient, and high-capacity optical interconnects, underscoring its potential in future local area network WDM applications.
Additional Links: PMID-40624056
@article {pmid40624056,
year = {2025},
author = {Ran, S and Guo, Y and Liu, Y and Miao, T and Wu, Y and Qin, Y and Guo, Y and Lu, L and Zhu, Y and Li, Y and Zhuge, Q and Chen, J and Zhou, L},
title = {A 4×256 Gbps silicon transmitter with on-chip adaptive dispersion compensation.},
journal = {Nature communications},
volume = {16},
number = {1},
pages = {6268},
pmid = {40624056},
issn = {2041-1723},
support = {62305212//National Natural Science Foundation of China (National Science Foundation of China)/ ; },
abstract = {The exponential growth of data traffic propelled by cloud computing and artificial intelligence necessitates advanced optical interconnect solutions. While wavelength division multiplexing (WDM) enhances optical module transmission capacity, chromatic dispersion becomes a critical limitation as single-lane rates exceed 200 Gbps. Here we demonstrate a 4-channel silicon transmitter achieving 1 Tbps aggregate data rate through integrated adaptive dispersion compensation. This transmitter utilizes Mach-Zehnder modulators with adjustable input intensity splitting ratios, enabling precise control over the chirp magnitude and sign to counteract specific dispersion. At 1271 nm (-3.99 ps/nm/km), the proposed transmitter enabled 4 × 256 Gbps transmission over 5 km fiber, achieving bit error ratio below both the soft-decision forward-error correction threshold with feed-forward equalization (FFE) alone and the hard-decision forward-error correction threshold when combining FFE with maximum-likelihood sequence detection. Our results highlight a significant leap towards scalable, energy-efficient, and high-capacity optical interconnects, underscoring its potential in future local area network WDM applications.},
}
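A quick back-of-the-envelope calculation shows why chromatic dispersion matters at these rates: using the fibre dispersion and length quoted in the abstract, the accumulated dispersion is about -20 ps/nm, and for an assumed 128 GBd signal (a baud rate the abstract does not state; it is used here only to size the spectral width) the resulting pulse spread exceeds a symbol period.

# Accumulated dispersion and rough pulse spread for the reported 5 km link.
# The 128 GBd symbol rate is an assumption used only to estimate spectral width.
C = 299_792_458.0                 # speed of light, m/s

D = -3.99                         # dispersion at 1271 nm, ps/(nm*km)  (from the abstract)
L = 5.0                           # fibre length, km                   (from the abstract)
acc = D * L                       # accumulated dispersion, ps/nm
print(f"accumulated dispersion: {acc:.2f} ps/nm")          # about -19.95 ps/nm

wavelength_m = 1271e-9
baud = 128e9                      # assumed symbol rate, Hz
dlambda_nm = wavelength_m ** 2 * baud / C * 1e9            # spectral width, nm
spread_ps = abs(acc) * dlambda_nm                          # first-order pulse spread, ps
print(f"spectral width ~{dlambda_nm:.2f} nm -> spread ~{spread_ps:.1f} ps "
      f"vs. {1e12 / baud:.1f} ps symbol period")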