ESP: PubMed Auto Bibliography
06 Jun 2025 at 01:41
Created: Cloud Computing
Wikipedia: Cloud Computing
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic for a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
Citations: The Papers (from PubMed®)
RevDate: 2025-06-05
A lightweight scalable hybrid authentication framework for Internet of Medical Things (IoMT) using blockchain hyperledger consortium network with edge computing.
Scientific reports, 15(1):19856.
The Internet of Medical Things (IoMT) has revolutionized the global landscape by enabling hierarchical interconnectivity among medical devices, sensors, and healthcare applications. This connectivity, however, comes with significant limitations in terms of scalability, privacy, and security. This study presents a scalable, lightweight hybrid authentication system that integrates blockchain and edge computing within a Hyperledger consortium network, specifically Hyperledger Indy, to address these real-time problems. For secure authentication, Hyperledger provides a permissioned, decentralized, and tamper-resistant environment, while edge computing lowers latency by processing data closer to IoMT devices. The proposed framework balances security and computing performance by utilizing a hybrid cryptographic technique, NuCypher threshold proxy re-encryption. This integration makes the framework suitable for resource-constrained IoMT devices. By facilitating cooperation among numerous stakeholders with restricted access, the consortium network improves scalability and data governance. Experimental evaluation shows that, compared with state-of-the-art techniques, the proposed framework reduces latency by 2.93% and increases authentication efficiency by 98.33%. In contrast to current solutions, it therefore guarantees data integrity and transparency among patients, consultants, and hospitals. This work facilitates the development of dependable, scalable, and secure IoMT applications, enabling next-generation medical applications.
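The abstract describes the protocol only at a high level; purely as a hedged illustration of the kind of lightweight edge-side check such a framework performs, the Python sketch below shows a generic nonce-based challenge-response verification with the cryptography package. It is not the paper's Hyperledger Indy or NuCypher protocol, and the registry, device name, and helper functions are invented for the example.

```python
# Hypothetical sketch of a nonce-based challenge-response check at an edge gateway.
# This is NOT the paper's Hyperledger/NuCypher protocol, only a generic illustration.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

registry = {}  # device_id -> public key; stands in for the consortium ledger


def register_device(device_id: str):
    """Enroll a device; in the paper this record would live on the permissioned ledger."""
    private_key = ec.generate_private_key(ec.SECP256R1())
    registry[device_id] = private_key.public_key()
    return private_key  # stays on the IoMT device


def verify_device(device_id: str, private_key) -> bool:
    """Edge gateway challenges the device with a nonce and verifies its signature."""
    nonce = os.urandom(32)                                           # challenge sent to the device
    signature = private_key.sign(nonce, ec.ECDSA(hashes.SHA256()))   # device-side response
    try:
        registry[device_id].verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


sk = register_device("infusion-pump-01")
print(verify_device("infusion-pump-01", sk))  # True
```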
Additional Links: PMID-40473928
@article {pmid40473928,
year = {2025},
author = {Khan, AA and Laghari, AA and Alroobaea, R and Baqasah, AM and Alsafyani, M and Alsufyani, H and Ullah, S},
title = {A lightweight scalable hybrid authentication framework for Internet of Medical Things (IoMT) using blockchain hyperledger consortium network with edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {19856},
pmid = {40473928},
issn = {2045-2322},
abstract = {The Internet of Things (IoMT) has revolutionized the global landscape by enabling the hierarchy of interconnectivity between medical devices, sensors, and healthcare applications. Significant limitations in terms of scalability, privacy, and security are associated with this connection. This study presents a scalable, lightweight hybrid authentication system that integrates blockchain and edge computing within a Hyperledger Consortium network to address such real-time problems, particularly the use of Hyperledger Indy. For secure authentication, Hyperledger ensures a permissioned, decentralized, and impenetrable environment, while edge computing lowers latency by processing data closer to IoMT devices. The proposed framework balances security and computing performance by utilizing a hybrid cryptographic technique, like NuCypher Threshold Proxy Re-Encryption. Applicational activities are now appropriate for IoMT devices with limited resources thanks to this integration. By facilitating cooperation between numerous stakeholders with restricted access, the consortium network improves scalability and data governance. Comparing the proposed framework to the state-of-the-art techniques, experimental evaluation shows that it reduces latency by 2.93% and increases authentication efficiency by 98.33%. Therefore, in contrast to current solutions, guarantee data integrity and transparency between patients, consultants, and hospitals. The development of dependable, scalable, and secure IoMT applications is facilitated by this work, enabling next-generation medical applications.},
}
RevDate: 2025-06-04
Molecular biology in the exabyte era: Taming the data deluge for biological revelation and clinical transformation.
Computational biology and chemistry, 119:108535 pii:S1476-9271(25)00195-1 [Epub ahead of print].
The explosive growth in next-generation high-throughput technologies has driven modern molecular biology into the exabyte era, producing an unparalleled volume of biological data across genomics, proteomics, metabolomics, and biomedical imaging. Although this massive expansion of data can power future biological discoveries and precision medicine, it presents considerable challenges, including computational bottlenecks, fragmented data landscapes, and ethical issues related to privacy and accessibility. We highlight novel contributions, such as the application of blockchain technologies to ensure data integrity and traceability, a relatively underexplored solution in this context. We describe how artificial intelligence (AI), machine learning (ML), and cloud computing are fundamentally reshaping this landscape and providing scalable solutions to these challenges by enabling near real-time pattern recognition, predictive modelling, and integrated data analysis. In particular, the use of federated learning models allows privacy-preserving collaboration across institutions. We emphasise the importance of open science, FAIR principles (Findable, Accessible, Interoperable, and Reusable), and blockchain-based audit trails to enhance global collaboration, reproducibility, and data security. By processing multi-omics datasets in integrated formats, we can enhance our understanding of disease mechanisms, facilitate biomarker discovery, and develop AI-assisted, personalised therapeutics. Addressing these technical and ethical demands requires robust governance frameworks that protect sensitive data without hindering innovation. This paper underscores a shift toward more secure, transparent, and collaborative biomedical research, marking a decisive step toward clinical transformation.
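The review names federated learning as its privacy-preserving collaboration pattern; as a hedged, minimal illustration of that idea only (not code from the paper), the sketch below performs a FedAvg-style weighted average of locally fitted model weights in NumPy. The institutions' data sizes and the linear model are invented for the example.

```python
# Minimal FedAvg-style sketch: each institution fits a local model, only the
# weights are shared and averaged; raw data never leaves the institution.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])


def local_fit(n_samples: int) -> np.ndarray:
    """One institution's local least-squares fit on its private data."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w


client_sizes = [120, 300, 80]                        # samples held by each institution
local_weights = [local_fit(n) for n in client_sizes]

# Weighted average of the local models (the FedAvg aggregation step).
global_w = np.average(local_weights, axis=0, weights=client_sizes)
print(global_w)  # close to true_w without pooling any raw data
```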
Additional Links: PMID-40466336
@article {pmid40466336,
year = {2025},
author = {Zafar, I and Unar, A and Khan, NU and Abalkhail, A and Jamal, A},
title = {Molecular biology in the exabyte era: Taming the data deluge for biological revelation and clinical transformation.},
journal = {Computational biology and chemistry},
volume = {119},
number = {},
pages = {108535},
doi = {10.1016/j.compbiolchem.2025.108535},
pmid = {40466336},
issn = {1476-928X},
abstract = {The explosive growth in next-generation high-throughput technologies has driven modern molecular biology into the exabyte era, producing an unparalleled volume of biological data across genomics, proteomics, metabolomics, and biomedical imaging. Although this massive expansion of data can power future biological discoveries and precision medicine, it presents considerable challenges, including computational bottlenecks, fragmented data landscapes, and ethical issues related to privacy and accessibility. We highlight novel contributions, such as the application of blockchain technologies to ensure data integrity and traceability, a relatively underexplored solution in this context. We describe how artificial intelligence (AI), machine learning (ML), and cloud computing fundamentally reshape and provide scalable solutions for these challenges by enabling near real-time pattern recognition, predictive modelling, and integrated data analysis. In particular, the use of federated learning models allows privacy-preserving collaboration across institutions. We emphasise the importance of open science, FAIR principles (Findable, Accessible, Interoperable, and Reusable), and blockchain-based audit trails to enhance global collaboration, reproducibility, and data security. By processing multi-omics datasets in integrated formats, we can enhance our understanding of disease mechanisms, facilitate biomarker discovery, and develop AI-assisted, personalised therapeutics. Addressing these technical and ethical demands requires robust governance frameworks that protect sensitive data without hindering innovation. This paper underscores a shift toward more secure, transparent, and collaborative biomedical research, marking a decisive step toward clinical transformation.},
}
RevDate: 2025-06-04
CmpDate: 2025-06-04
Security importance of edge-IoT ecosystem: An ECC-based authentication scheme.
PloS one, 20(6):e0322131 pii:PONE-D-25-04096.
Despite the many outstanding benefits of cloud computing, such as flexibility, accessibility, efficiency, and cost savings, it still suffers from potential data loss, security concerns, limited control, and availability issues. The edge computing paradigm was introduced to handle these issues better than cloud computing: it connects directly to Internet-of-Things (IoT) devices, sensors, and wearables in a decentralized manner, distributing processing power closer to the data source rather than relying on a central cloud server for all computations, which allows faster data processing and lower latency because data are processed locally at the 'edge' of the network where they are generated. However, because IoT, sensor, and wearable devices are resource-constrained, the edge computing paradigm has suffered numerous data breaches, owing to the proximity of sensitive data, vulnerability to physical tampering, privacy concerns around data collected close to users, and the difficulty of managing security across a large number of edge devices. Existing authentication schemes do not fulfill the security needs of the edge computing paradigm: they either have design flaws, are susceptible to various known threats such as impersonation, insider attacks, denial of service (DoS), and replay attacks, or perform poorly because they rely on resource-intensive cryptographic operations such as modular exponentiation. Given the pressing need for robust security mechanisms in such a dynamic and vulnerable edge-IoT ecosystem, this article proposes a robust ECC-based authentication scheme for resource-constrained IoT that addresses the known vulnerabilities and counters each identified threat. The correctness of the proposed protocol has been scrutinized through the well-known and widely used Real-Or-Random (RoR) model, ProVerif validation, and an informal discussion of attacks, demonstrating the thoroughness of the proposed protocol. Performance has been measured in terms of computational time complexity, communication cost, and storage overheads, further reinforcing confidence in the proposed solution. The comparative analysis shows that the proposed ECC-based authentication protocol reduces computation cost by 90.05% and communication cost by 62.41%, and consumes 67.42% less energy than state-of-the-art schemes. The proposed protocol can therefore be recommended for practical implementation in real-world edge-IoT ecosystems.
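The paper's message flow is not reproduced here; as a hedged sketch of the elliptic-curve building block such schemes rest on, the snippet below derives a shared session key between a device and an edge node via ECDH plus HKDF using the cryptography package. The key sizes, curve choice, and info string are illustrative assumptions, not taken from the paper.

```python
# Hedged illustration of the ECC primitive behind such schemes: an ephemeral ECDH
# exchange followed by HKDF key derivation. The paper's actual protocol is not shown.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral ECC key pair on P-256.
device_sk = ec.generate_private_key(ec.SECP256R1())
edge_sk = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
device_shared = device_sk.exchange(ec.ECDH(), edge_sk.public_key())
edge_shared = edge_sk.exchange(ec.ECDH(), device_sk.public_key())
assert device_shared == edge_shared  # both sides hold the same secret

# Derive a 256-bit session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"edge-iot session").derive(device_shared)
print(session_key.hex())
```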
Additional Links: PMID-40465630
@article {pmid40465630,
year = {2025},
author = {Alzahrani, N},
title = {Security importance of edge-IoT ecosystem: An ECC-based authentication scheme.},
journal = {PloS one},
volume = {20},
number = {6},
pages = {e0322131},
doi = {10.1371/journal.pone.0322131},
pmid = {40465630},
issn = {1932-6203},
mesh = {*Computer Security ; *Internet of Things ; *Cloud Computing ; Humans ; Algorithms ; },
abstract = {Despite the many outstanding benefits of cloud computing, such as flexibility, accessibility, efficiency, and cost savings, it still suffers from potential data loss, security concerns, limited control, and availability issues. The experts introduced the edge computing paradigm to perform better than cloud computing for the mentioned issues and challenges because it is directly connected to the Internet-of-Things (IoT), sensors, and wearables in a decentralized manner to distribute processing power closer to the data source, rather than relying on a central cloud server to handle all computations; this allows for faster data processing and reduced latency by processing data locally at the 'edge' of the network where it's generated. However, due to the resource-constrained nature of IoT, sensors, or wearable devices, the edge computing paradigm endured numerous data breaches due to sensitive data proximity, physical tampering vulnerabilities, and privacy concerns related to user-near data collection, and challenges in managing security across a large number of edge devices. Existing authentication schemes didn't fulfill the security needs of the edge computing paradigm; they either have design flaws, are susceptible to various known threats-such as impersonation, insider attacks, denial of service (DoS), and replay attacks-or experience inadequate performance due to reliance on resource-intensive cryptographic algorithms, like modular exponentiations. Given the pressing need for robust security mechanisms in such a dynamic and vulnerable edge-IoT ecosystem, this article proposes an ECC-based robust authentication scheme for such a resource-constrained IoT to address all known vulnerabilities and counter each identified threat. The proof of correctness of the proposed protocol has been scrutinized through a well-known and widely used Real-Or-Random (RoR) model, ProVerif validation, and attacks' discussion, demonstrating the thoroughness of the proposed protocol. The performance metrics have been measured by considering computational time complexity, communication cost, and storage overheads, further reinforcing the confidence in the proposed solution. The comparative analysis results demonstrated that the proposed ECC-based authentication protocol is 90.05% better in terms of computation cost, 62.41% communication cost, and consumes 67.42% less energy compared to state-of-the-art schemes. Therefore, the proposed protocol can be recommended for practical implementation in the real-world edge-IoT ecosystem.},
}
MeSH Terms:
*Computer Security
*Internet of Things
*Cloud Computing
Humans
Algorithms
RevDate: 2025-06-03
Fuzzy clustering based scheduling algorithm for minimizing the tasks completion time in cloud computing environment.
Scientific reports, 15(1):19505.
This paper explores the complexity of project planning in a cloud computing environment and recognizes the challenges associated with distributed resources, heterogeneity, and dynamic changes in workloads. This research introduces a fresh approach to planning cloud resources more effectively by utilizing fuzzy waterfall techniques. The goal is to make better use of resources while cutting down on scheduling costs. By categorizing resources based on their characteristics, this method aims to lower search costs during project planning and speed up the resource selection process. The paper presents the Budget and Time Constrained Heterogeneous Early Completion (BDHEFT) technique, which is an enhanced version of HEFT tailored to meet specific user requirements, such as budget constraints and execution timelines. With its focus on fuzzy resource allocation that considers task composition and priority, BDHEFT streamlines the project schedule, ultimately reducing both execution time and costs. The algorithm design and mathematical modeling discussed in this study lay a strong foundation for boosting task scheduling efficiency in cloud computing environments, which provides a broad perspective to improve the overall system performance and meet user quality requirements.
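BDHEFT itself is specified in the paper; as a hedged illustration of the underlying earliest-finish-time idea only (HEFT-flavoured, without the fuzzy clustering, budget, or deadline handling), the sketch below greedily assigns independent tasks to the VM that finishes them soonest. Task lengths and VM speeds are invented.

```python
# Simplified earliest-finish-time assignment (HEFT-flavoured, independent tasks).
# The paper's fuzzy clustering and budget/deadline constraints are omitted.
tasks = {"t1": 400, "t2": 250, "t3": 900, "t4": 150}   # task length in MI (invented)
vms = {"vm1": 100.0, "vm2": 250.0}                     # VM speed in MIPS (invented)

ready_time = {vm: 0.0 for vm in vms}                   # when each VM becomes free
schedule = {}

# Schedule longer tasks first, each on the VM giving the earliest finish time.
for task, length in sorted(tasks.items(), key=lambda kv: -kv[1]):
    best_vm = min(vms, key=lambda vm: ready_time[vm] + length / vms[vm])
    start = ready_time[best_vm]
    finish = start + length / vms[best_vm]
    ready_time[best_vm] = finish
    schedule[task] = (best_vm, round(start, 2), round(finish, 2))

makespan = max(finish for _, _, finish in schedule.values())
print(schedule, "makespan:", makespan)
```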
Additional Links: PMID-40461541
@article {pmid40461541,
year = {2025},
author = {Alharbe, NR},
title = {Fuzzy clustering based scheduling algorithm for minimizing the tasks completion time in cloud computing environment.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {19505},
pmid = {40461541},
issn = {2045-2322},
abstract = {This paper explores the complexity of project planning in a cloud computing environment and recognizes the challenges associated with distributed resources, heterogeneity, and dynamic changes in workloads. This research introduces a fresh approach to planning cloud resources more effectively by utilizing fuzzy waterfall techniques. The goal is to make better use of resources while cutting down on scheduling costs. By categorizing resources based on their characteristics, this method aims to lower search costs during project planning and speed up the resource selection process. The paper presents the Budget and Time Constrained Heterogeneous Early Completion (BDHEFT) technique, which is an enhanced version of HEFT tailored to meet specific user requirements, such as budget constraints and execution timelines. With its focus on fuzzy resource allocation that considers task composition and priority, BDHEFT streamlines the project schedule, ultimately reducing both execution time and costs. The algorithm design and mathematical modeling discussed in this study lay a strong foundation for boosting task scheduling efficiency in cloud computing environments, which provides a broad perspective to improve the overall system performance and meet user quality requirements.},
}
RevDate: 2025-06-03
Double-edged sword? Heterogeneous effects of digital technology on environmental regulation-driven green transformation.
Journal of environmental management, 389:125960 pii:S0301-4797(25)01936-X [Epub ahead of print].
In the context of China's dual carbon goal, enterprises' green transformation is a key path to advancing the nation's high-quality economic development. A majority of existing studies have regarded digital technology as a homogeneous variable, and the heterogeneous impacts of different technologies have not been sufficiently explored. Therefore, based on Chinese enterprises' data from 2012 to 2022, this study systematically examines the influence of environmental regulations (ETS) on enterprises' green transformation (GT) from the perspective of digital empowerment by employing difference-in-differences and threshold regression models. The findings reveal that digital transformation (DT) enhances the influence of environmental regulation by strengthening cost and innovation compensation effects. Further analysis indicates that different digital technologies have significant double-edged-sword characteristics, wherein artificial intelligence negatively regulates both mechanisms, reflecting a lack of technological adaptability; cloud computing significantly enhances the positive impact of environmental regulation, reflecting its technological maturity; and big data technologies only positively regulate the innovation compensation effect, reflecting enterprises' application preferences. In addition, the combination of digital technologies does not create synergies, indicating firms' challenges in terms of absorptive capacity and organizational change. This study expands the theoretical research on environmental regulation and green transformation and provides a valuable reference for the government to develop targeted policies and for enterprises to optimize the path of green transformation.
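As a hedged, minimal illustration of the difference-in-differences setup the authors use (not their data, controls, or specification), the sketch below estimates the interaction of a treatment indicator and a post-policy indicator with statsmodels on a simulated panel.

```python
# Toy difference-in-differences: the coefficient on treated:post is the DiD estimate.
# Data are simulated; the paper's firm-level panel and controls are not reproduced.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # firm covered by the ETS pilot (hypothetical)
    "post": rng.integers(0, 2, n),      # observation after the policy
})
# Simulate a green-transformation outcome with a true DiD effect of 0.5.
df["green"] = (1.0 + 0.3 * df["treated"] + 0.2 * df["post"]
               + 0.5 * df["treated"] * df["post"] + rng.normal(scale=1.0, size=n))

model = smf.ols("green ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # recovers roughly 0.5
```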
Additional Links: PMID-40460746
@article {pmid40460746,
year = {2025},
author = {Shi, X and Geng, S},
title = {Double-edged sword? Heterogeneous effects of digital technology on environmental regulation-driven green transformation.},
journal = {Journal of environmental management},
volume = {389},
number = {},
pages = {125960},
doi = {10.1016/j.jenvman.2025.125960},
pmid = {40460746},
issn = {1095-8630},
abstract = {In the context of China's dual carbon goal, enterprises' green transformation is a key path to advancing the nation's high-quality economic development. A majority of existing studies have regarded digital technology as a homogeneous variable, and the heterogeneous impact of various technologies have not been sufficiently explored. Therefore, based on Chinese enterprises' data from 2012 to 2022, this study systematically examines the influence of environmental regulations (ETS) on enterprises' green transformation (GT) from the perspective of digital empowerment by employing difference-in-differences and threshold regression models. The findings reveal that digital transformation (DT) enhances the influence of environmental regulation by strengthening cost and innovation compensation effects. Further analysis indicates that different digital technologies have significant double-edged sword characteristics, wherein artificial intelligence negatively regulates both mechanisms, reflecting a lack of technological adaptability; cloud computing significantly enhances the positive impact of environmental regulation, reflecting its technological maturity; and big data technologies only positively regulate the innovation compensation effect, reflecting the enterprises' application preference. In addition, the combination of digital technologies does not create synergies, indicating firms' challenges in terms of absorptive capacity and organizational change. This study expands the theoretical research on environmental regulation and green transformation and provides a valuable reference for the government to develop targeted policies and enterprises to optimize the path of green transformation.},
}
RevDate: 2025-06-03
Advancements in Intelligent Sensing Technologies for Food Safety Detection.
Research (Washington, D.C.), 8:0713.
As a critical global public health concern, food safety has prompted substantial strategic advancements in detection technologies to safeguard human health. Integrated intelligent sensing systems, incorporating advanced information perception and computational intelligence, have emerged as rapid, user-friendly, and cost-effective solutions through the synergy of multisource sensors and smart computing. This review systematically examines the fundamental principles of intelligent sensing technologies, including optical, electrochemical, machine olfaction, and machine gustatory systems, along with their practical applications in detecting microbial, chemical, and physical hazards in food products. The review analyzes the current state and future development trends of intelligent perception from 3 core aspects: sensing technology, signal processing, and modeling algorithms. Driven by technologies such as machine learning and blockchain, intelligent sensing technology can ensure food safety throughout all stages of food processing, storage, and transportation, and provide support for the traceability and authenticity identification of food. It also presents current challenges and development trends associated with intelligent sensing technologies in food safety, including novel sensing materials, edge-cloud computing frameworks, and the co-design of energy-efficient algorithms with hardware architectures. Overall, by addressing current limitations and harnessing emerging innovations, intelligent sensing technologies are poised to establish a more resilient, transparent, and proactive framework for safeguarding food safety across global supply chains.
Additional Links: PMID-40458611
@article {pmid40458611,
year = {2025},
author = {Jiang, W and Liu, C and Liu, W and Zheng, L},
title = {Advancements in Intelligent Sensing Technologies for Food Safety Detection.},
journal = {Research (Washington, D.C.)},
volume = {8},
number = {},
pages = {0713},
pmid = {40458611},
issn = {2639-5274},
abstract = {As a critical global public health concern, food safety has prompted substantial strategic advancements in detection technologies to safeguard human health. Integrated intelligent sensing systems, incorporating advanced information perception and computational intelligence, have emerged as rapid, user-friendly, and cost-effective solutions through the synergy of multisource sensors and smart computing. This review systematically examines the fundamental principles of intelligent sensing technologies, including optical, electrochemical, machine olfaction, and machine gustatory systems, along with their practical applications in detecting microbial, chemical, and physical hazards in food products. The review analyzes the current state and future development trends of intelligent perception from 3 core aspects: sensing technology, signal processing, and modeling algorithms. Driven by technologies such as machine learning and blockchain, intelligent sensing technology can ensure food safety throughout all stages of food processing, storage, and transportation, and provide support for the traceability and authenticity identification of food. It also presents current challenges and development trends associated with intelligent sensing technologies in food safety, including novel sensing materials, edge-cloud computing frameworks, and the co-design of energy-efficient algorithms with hardware architectures. Overall, by addressing current limitations and harnessing emerging innovations, intelligent sensing technologies are poised to establish a more resilient, transparent, and proactive framework for safeguarding food safety across global supply chains.},
}
RevDate: 2025-06-02
OpenPheno: an open-access, user-friendly, and smartphone-based software platform for instant plant phenotyping.
Plant methods, 21(1):76.
BACKGROUND: Plant phenotyping has become increasingly important for advancing plant science, agriculture, and biotechnology. Classic manual methods are labor-intensive and time-consuming, while existing computational tools often require advanced coding skills, high-performance hardware, or PC-based environments, making them inaccessible to non-experts, to resource-constrained users, and to field technicians.
RESULTS: To respond to these challenges, we introduce OpenPheno, an open-access, user-friendly, and smartphone-based platform encapsulated within a WeChat Mini-Program for instant plant phenotyping. The platform is designed for ease of use, enabling users to phenotype plant traits quickly and efficiently with only a smartphone at hand. We currently instantiate the use of the platform with tools such as SeedPheno, WheatHeadPheno, LeafAnglePheno, SpikeletPheno, CanopyPheno, TomatoPheno, and CornPheno; each offering specific functionalities such as seed size and count analysis, wheat head detection, leaf angle measurement, spikelet counting, canopy structure analysis, and tomato fruit measurement. In particular, OpenPheno allows developers to contribute new algorithmic tools, further expanding its capabilities to continuously facilitate the plant phenotyping community.
CONCLUSIONS: By leveraging cloud computing and a widely accessible interface, OpenPheno democratizes plant phenotyping, making advanced tools available to a broader audience, including plant scientists, breeders, and even amateurs. It can play a role in AI-driven breeding by providing the necessary data for genotype-phenotype analysis, thereby accelerating breeding programs. Its integration with smartphones also positions OpenPheno as a powerful tool in the growing field of mobile-based agricultural technologies, paving the way for more efficient, scalable, and accessible agricultural research and breeding.
Additional Links: PMID-40457453
@article {pmid40457453,
year = {2025},
author = {Hu, T and Shen, P and Zhang, Y and Zhang, J and Li, X and Xia, C and Liu, P and Lu, H and Wu, T and Han, Z},
title = {OpenPheno: an open-access, user-friendly, and smartphone-based software platform for instant plant phenotyping.},
journal = {Plant methods},
volume = {21},
number = {1},
pages = {76},
pmid = {40457453},
issn = {1746-4811},
support = {2022LZGCQY0022023TZXD004//Shandong Provincial Key Research and Development Program/ ; 2024AFB566//Hubei Provincial Natural Science Foundation of China/ ; 2024ZY-CGZY-19//Central Government's Guidance Fund for Local Science and Technology Development/ ; },
abstract = {BACKGROUND: Plant phenotyping has become increasingly important for advancing plant science, agriculture, and biotechnology. Classic manual methods are labor-intensive and time-consuming, while existing computational tools often require advanced coding skills, high-performance hardware, or PC-based environments, making them inaccessible to non-experts, to resource-constrained users, and to field technicians.
RESULTS: To respond to these challenges, we introduce OpenPheno, an open-access, user-friendly, and smartphone-based platform encapsulated within a WeChat Mini-Program for instant plant phenotyping. The platform is designed for ease of use, enabling users to phenotype plant traits quickly and efficiently with only a smartphone at hand. We currently instantiate the use of the platform with tools such as SeedPheno, WheatHeadPheno, LeafAnglePheno, SpikeletPheno, CanopyPheno, TomatoPheno, and CornPheno; each offering specific functionalities such as seed size and count analysis, wheat head detection, leaf angle measurement, spikelet counting, canopy structure analysis, and tomato fruit measurement. In particular, OpenPheno allows developers to contribute new algorithmic tools, further expanding its capabilities to continuously facilitate the plant phenotyping community.
CONCLUSIONS: By leveraging cloud computing and a widely accessible interface, OpenPheno democratizes plant phenotyping, making advanced tools available to a broader audience, including plant scientists, breeders, and even amateurs. It can function as a role in AI-driven breeding by providing the necessary data for genotype-phenotype analysis, thereby accelerating breeding programs. Its integration with smartphones also positions OpenPheno as a powerful tool in the growing field of mobile-based agricultural technologies, paving the way for more efficient, scalable, and accessible agricultural research and breeding.},
}
RevDate: 2025-06-02
A deep learning and IoT-driven framework for real-time adaptive resource allocation and grid optimization in smart energy systems.
Scientific reports, 15(1):19309.
The rapid evolution of smart grids, driven by rising global energy demand and renewable energy integration, calls for intelligent, adaptive, and energy-efficient resource allocation strategies. Traditional energy management methods, based on static models or heuristic algorithms, often fail to handle real-time grid dynamics, leading to suboptimal energy distribution, high operational costs, and significant energy wastage. To overcome these challenges, this paper presents ORA-DL (Optimized Resource Allocation using Deep Learning), an advanced framework that integrates deep learning, Internet of Things (IoT)-based sensing, and real-time adaptive control to optimize smart grid energy management. ORA-DL employs deep neural networks, reinforcement learning, and multi-agent decision-making to accurately predict energy demand, allocate resources efficiently, and enhance grid stability. The framework leverages both historical and real-time data for proactive power flow management, while IoT-enabled sensors ensure continuous monitoring and low-latency response through edge and cloud computing infrastructure. Experimental results validate the effectiveness of ORA-DL, achieving 93.38% energy demand prediction accuracy, improving grid stability to 96.25%, and reducing energy wastage to 12.96%. Furthermore, ORA-DL enhances resource distribution efficiency by 15.22% and reduces operational costs by 22.96%, significantly outperforming conventional techniques. These performance gains are driven by real-time analytics, predictive modelling, and adaptive resource modulation. By combining AI-driven decision-making, IoT sensing, and adaptive learning, ORA-DL establishes a scalable, resilient, and sustainable energy management solution. The framework also provides a foundation for future advancements, including integration with edge computing, cybersecurity measures, and reinforcement learning enhancements, marking a significant step forward in smart grid optimization.
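ORA-DL's architecture is only summarized above; as a hedged stand-in for its demand-prediction component (not the paper's model, hyperparameters, or data), the sketch below trains a small LSTM forecaster on a synthetic daily-cycle demand series with PyTorch.

```python
# Hedged sketch: a small LSTM forecaster for next-step demand, standing in for the
# paper's ORA-DL networks. The series is synthetic and the architecture is assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.arange(0, 200, dtype=torch.float32)
demand = torch.sin(2 * torch.pi * t / 24) + 0.1 * torch.randn_like(t)  # daily cycle + noise

# Build (24-step window -> next value) training pairs.
win = 24
X = torch.stack([demand[i:i + win] for i in range(len(demand) - win)]).unsqueeze(-1)
y = demand[win:].unsqueeze(-1)


class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last hidden state


model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))  # training MSE on the synthetic series
```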
Additional Links: PMID-40456783
@article {pmid40456783,
year = {2025},
author = {Singh, AR and Sujatha, MS and Kadu, AD and Bajaj, M and Addis, HK and Sarada, K},
title = {A deep learning and IoT-driven framework for real-time adaptive resource allocation and grid optimization in smart energy systems.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {19309},
pmid = {40456783},
issn = {2045-2322},
abstract = {The rapid evolution of smart grids, driven by rising global energy demand and renewable energy integration, calls for intelligent, adaptive, and energy-efficient resource allocation strategies. Traditional energy management methods, based on static models or heuristic algorithms, often fail to handle real-time grid dynamics, leading to suboptimal energy distribution, high operational costs, and significant energy wastage. To overcome these challenges, this paper presents ORA-DL (Optimized Resource Allocation using Deep Learning) an advanced framework that integrates deep learning, Internet of Things (IoT)-based sensing, and real-time adaptive control to optimize smart grid energy management. ORA-DL employs deep neural networks, reinforcement learning, and multi-agent decision-making to accurately predict energy demand, allocate resources efficiently, and enhance grid stability. The framework leverages both historical and real-time data for proactive power flow management, while IoT-enabled sensors ensure continuous monitoring and low-latency response through edge and cloud computing infrastructure. Experimental results validate the effectiveness of ORA-DL, achieving 93.38% energy demand prediction accuracy, improving grid stability to 96.25%, and reducing energy wastage to 12.96%. Furthermore, ORA-DL enhances resource distribution efficiency by 15.22% and reduces operational costs by 22.96%, significantly outperforming conventional techniques. These performance gains are driven by real-time analytics, predictive modelling, and adaptive resource modulation. By combining AI-driven decision-making, IoT sensing, and adaptive learning, ORA-DL establishes a scalable, resilient, and sustainable energy management solution. The framework also provides a foundation for future advancements, including integration with edge computing, cybersecurity measures, and reinforcement learning enhancements, marking a significant step forward in smart grid optimization.},
}
RevDate: 2025-06-01
CmpDate: 2025-06-01
Analysis-ready VCF at Biobank scale using Zarr.
GigaScience, 14:.
BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasizes efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. The Biobank-scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable and a more scalable approach is needed.
RESULTS: Zarr is a format for storing multidimensional data that is widely used across the sciences, and is ideally suited to massively parallel processing. We present the VCF Zarr specification, an encoding of the VCF data model using Zarr, along with fundamental software infrastructure for efficient and reliable conversion at scale. We show how this format is far more efficient than standard VCF-based approaches, and competitive with specialized methods for storing genotype data in terms of compression ratios and single-threaded calculation performance. We present case studies on subsets of 3 large human datasets (Genomics England: n = 78,195; Our Future Health: n = 651,050; All of Us: n = 245,394) along with whole genome datasets for Norway Spruce (n = 1,063) and SARS-CoV-2 (n = 4,484,157). We demonstrate the potential for VCF Zarr to enable a new generation of high-performance and cost-effective applications via illustrative examples using cloud computing and GPUs.
CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs, and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores, while maintaining compatibility with existing file-oriented workflows.
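The full VCF Zarr specification and its conversion tooling are described in the paper; as a hedged sketch of the storage idea only, the snippet below writes a genotype matrix as a chunked Zarr array and slices one sample's column without reading the rest, using the zarr-python package (v2-style API). The array name, shape, and chunk sizes are arbitrary assumptions, not the spec's layout.

```python
# Hedged illustration of column-friendly genotype storage with Zarr.
# Shapes, names and chunking are arbitrary; see the paper/spec for the real layout.
import numpy as np
import zarr

n_variants, n_samples, ploidy = 10_000, 1_000, 2
genotypes = np.random.default_rng(1).integers(
    0, 2, size=(n_variants, n_samples, ploidy), dtype="i1")

# Chunk along both variants and samples so per-sample access touches few chunks.
z = zarr.open("call_genotype.zarr", mode="w",
              shape=genotypes.shape, chunks=(1_000, 100, ploidy), dtype="i1")
z[:] = genotypes

# Reading one sample only loads the chunks containing its column,
# unlike row-oriented VCF where every record must be parsed.
sample_42 = z[:, 42, :]
print(sample_42.shape)  # (10000, 2)
```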
Additional Links: PMID-40451243
@article {pmid40451243,
year = {2025},
author = {Czech, E and Tyler, W and White, T and Jeffery, B and Millar, TR and Elsworth, B and Guez, J and Hancox, J and Karczewski, KJ and Miles, A and Tallman, S and Unneberg, P and Wojdyla, R and Zabad, S and Hammerbacher, J and Kelleher, J},
title = {Analysis-ready VCF at Biobank scale using Zarr.},
journal = {GigaScience},
volume = {14},
number = {},
pages = {},
doi = {10.1093/gigascience/giaf049},
pmid = {40451243},
issn = {2047-217X},
support = {HG011395//Robertson Foundation and NIH/ ; HG012473//Robertson Foundation and NIH/ ; INV-001927/GATES/Gates Foundation/United States ; //The New Zealand Institute for Plant & Food Research Ltd Kiwifruit Royalty Investment Programme/ ; //SciLifeLab & Wallenberg Data Driven Life Science Program/ ; KAW 2020.0239//Knut and Alice Wallenberg Foundation/ ; KAW 2017.0003//Knut and Alice Wallenberg Foundation/ ; //National Bioinformatics Infrastructure Sweden (NBIS)/ ; },
mesh = {Humans ; *Software ; *Biological Specimen Banks ; COVID-19/virology ; SARS-CoV-2/genetics ; *Genetic Variation ; Databases, Genetic ; Genome, Human ; },
abstract = {BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasizes efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. The Biobank-scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable and a more scalable approach is needed.
RESULTS: Zarr is a format for storing multidimensional data that is widely used across the sciences, and is ideally suited to massively parallel processing. We present the VCF Zarr specification, an encoding of the VCF data model using Zarr, along with fundamental software infrastructure for efficient and reliable conversion at scale. We show how this format is far more efficient than standard VCF-based approaches, and competitive with specialized methods for storing genotype data in terms of compression ratios and single-threaded calculation performance. We present case studies on subsets of 3 large human datasets (Genomics England: $n$=78,195; Our Future Health: $n$=651,050; All of Us: $n$=245,394) along with whole genome datasets for Norway Spruce ($n$=1,063) and SARS-CoV-2 ($n$=4,484,157). We demonstrate the potential for VCF Zarr to enable a new generation of high-performance and cost-effective applications via illustrative examples using cloud computing and GPUs.
CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs, and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores, while maintaining compatibility with existing file-oriented workflows.},
}
MeSH Terms:
Humans
*Software
*Biological Specimen Banks
COVID-19/virology
SARS-CoV-2/genetics
*Genetic Variation
Databases, Genetic
Genome, Human
RevDate: 2025-05-31
IoT-based bed and ventilator management system during the COVID-19 pandemic.
Scientific reports, 15(1):19163.
The COVID-19 outbreak put significant pressure on limited healthcare resources. The specific number of people that may be affected in the near future is difficult to determine, and the coronavirus pandemic's healthcare requirements surpassed available capacity. The Internet of Things (IoT) has emerged as a crucial concept in the advancement of information and communication technology, and IoT devices are used in various medical fields, such as real-time tracking, patient data management, and healthcare management. Patients can be tracked using a variety of low-powered, lightweight wireless sensor nodes that use body sensor network (BSN) technology, one of the key IoT technologies in healthcare. This gives clinicians and patients more options in contemporary healthcare management. This study focuses on the conditions under which beds become available for COVID-19 patients. Each patient's health condition is recognized and categorised as COVID-19 positive or negative using IoT sensors. The proposed model uses an ARIMA model and a Transformer model, trained on a dataset with the aim of providing enhanced prediction. The physical implementation of these models is expected to accelerate patient admission and the provision of emergency services, as the predicted patient-influx data will be made available to the healthcare system in advance. This predictive capability contributes to the efficient management of healthcare resources. The research findings indicate that the proposed models demonstrate high accuracy, as evidenced by their low mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE).
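As a hedged sketch of the ARIMA half of that approach only (the Transformer model and the paper's dataset are not reproduced), the snippet below fits an ARIMA model to a synthetic daily-admissions series with statsmodels and forecasts the next week; the order (2, 1, 1) is an arbitrary assumption.

```python
# Hedged sketch of ARIMA forecasting on synthetic daily admissions counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
days = pd.date_range("2021-01-01", periods=120, freq="D")
admissions = pd.Series(50 + np.cumsum(rng.normal(0, 3, size=120)), index=days)

model = ARIMA(admissions, order=(2, 1, 1)).fit()     # (p, d, q) chosen arbitrarily
forecast = model.forecast(steps=7)                   # expected admissions, next week
print(forecast.round(1))
```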
Additional Links: PMID-40450011
@article {pmid40450011,
year = {2025},
author = {Prasad, VK and Dansana, D and Patro, SGK and Salau, AO and Yadav, D and Mishra, BK},
title = {IoT-based bed and ventilator management system during the COVID-19 pandemic.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {19163},
pmid = {40450011},
issn = {2045-2322},
abstract = {The COVID-19 outbreak put a significant pressure on limited healthcare resources. The specific number of people that may be affected in the near future is difficult to determine. We can therefore deduce that the corona virus pandemic's healthcare requirements surpassed available capacity. The Internet of Things (IoT) has emerged an crucial concept for the advancement of information and communication technology. Since IoT devices are used in various medical fields like real-time tracking, patient data management, and healthcare management. Patients can be tracked using a variety of tiny-powered and lightweight wireless sensor nodes which use the body sensor network (BSN) technology, one of the key technologies of IoT advances in healthcare. This gives clinicians and patients more options in contemporary healthcare management. This study report focuses on the conditions for vacating beds available for COVID-19 patients. The patient's health condition is recognized and categorised as positive or negative in terms of the Coronavirus disease (COVID-19) using IoT sensors. The proposed model presented in this paper uses the ARIMA model and Transformer model to train a dataset with the aim of providing enhanced prediction. The physical implementation of these models is expected to accelerate the process of patient admission and the provision of emergency services, as the predicted patient influx data will be made available to the healthcare system in advance. This predictive capability of the proposed model contributes to the efficient management of healthcare resources. The research findings indicate that the proposed models demonstrate high accuracy, as evident by its low mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE).},
}
RevDate: 2025-05-30
Exploring Public Sentiments of Psychedelics Versus Other Substances: A Reddit-Based Natural Language Processing Study.
Journal of psychoactive drugs [Epub ahead of print].
New methods that capture the public's perception of controversial topics may be valuable. This study investigates public sentiments toward psychedelics and other substances through analyses of Reddit discussions, using Google's cloud-based Natural Language Processing (NLP) infrastructure. Our findings indicate that illicit substances such as heroin and methamphetamine are associated with highly negative general sentiments, whereas psychedelics like psilocybin, LSD, and ayahuasca generally evoke neutral to slightly positive sentiments. This study underscores the effectiveness and cost efficiency of NLP and machine learning models in understanding the public's perception of sensitive topics. The findings indicate that online public sentiment toward psychedelics may be growing in acceptance of their therapeutic potential. However, limitations include potential selection bias from the Reddit sample and challenges in accurately interpreting nuanced language using NLP. Future research should aim to diversify data sources and enhance NLP models to capture the full spectrum of public sentiment toward psychedelics. Our findings support the importance of ongoing research and public education to inform policy decisions and therapeutic applications of psychedelics.
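As a hedged sketch of how a single comment might be scored with Google's Cloud Natural Language service (usage assumed from the client-library documentation, not taken from the paper's pipeline, and requiring GCP credentials), a minimal call looks roughly like this:

```python
# Hedged sketch of scoring one Reddit comment with Google Cloud Natural Language.
# Requires the google-cloud-language package and configured GCP credentials.
from google.cloud import language_v1


def comment_sentiment(text: str) -> tuple[float, float]:
    """Return (score, magnitude): score in [-1, 1], magnitude >= 0."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=text,
                                    type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    return sentiment.score, sentiment.magnitude


print(comment_sentiment("Psilocybin therapy genuinely helped with my anxiety."))
```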
Additional Links: PMID-40447287
@article {pmid40447287,
year = {2025},
author = {Biba, B and O'Shea, BA},
title = {Exploring Public Sentiments of Psychedelics Versus Other Substances: A Reddit-Based Natural Language Processing Study.},
journal = {Journal of psychoactive drugs},
volume = {},
number = {},
pages = {1-11},
doi = {10.1080/02791072.2025.2511750},
pmid = {40447287},
issn = {2159-9777},
abstract = {New methods that capture the public's perception of controversial topics may be valuable. This study investigates public sentiments toward psychedelics and other substances through analyzes of Reddit discussions, using Google's cloud-based Natural Language Processing (NLP) infrastructure. Our findings indicate that illicit substances such as heroin and methamphetamine are associated with highly negative general sentiments, whereas psychedelics like Psilocybin, LSD, and Ayahuasca generally evoke neutral to slightly positive sentiments. This study underscores the effectiveness and cost efficiency of NLP and machine learning models in understanding the public's perception of sensitive topics. The findings indicate that online public sentiment toward psychedelics may be growing in acceptance of their therapeutic potential. However, limitations include potential selection bias from the Reddit sample and challenges in accurately interpreting nuanced language using NLP. Future research should aim to diversify data sources and enhance NLP models to capture the full spectrum of public sentiment toward psychedelics. Our findings support the importance of ongoing research and public education to inform policy decisions and therapeutic applications of psychedelics.},
}
RevDate: 2025-05-30
Producing Proofs of Unsatisfiability with Distributed Clause-Sharing SAT Solvers.
Journal of automated reasoning, 69(2):12.
Distributed clause-sharing SAT solvers can solve challenging problems hundreds of times faster than sequential SAT solvers by sharing derived information among multiple sequential solvers. Unlike sequential solvers, however, distributed solvers have not been able to produce proofs of unsatisfiability in a scalable manner, which limits their use in critical applications. In this work, we present a method to produce unsatisfiability proofs for distributed SAT solvers by combining the partial proofs produced by each sequential solver into a single, linear proof. We first describe a simple sequential algorithm and then present a fully distributed algorithm for proof composition, which is substantially more scalable and general than prior works. Our empirical evaluation with over 1500 solver threads shows that our distributed approach allows proof composition and checking within around 3 × its own (highly competitive) solving time.
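The paper's composition algorithm is considerably more involved; purely as a hedged toy of the core ordering problem, the sketch below merges per-solver derivation fragments into one linear sequence by topologically sorting clause dependencies. The clause IDs and dependencies are invented example data, not the paper's proof format.

```python
# Toy composition: interleave partial derivations from several solvers into one
# linear proof so every clause appears after the clauses it depends on.
# This is only a schematic of the ordering problem, not the paper's algorithm.
from graphlib import TopologicalSorter

# clause_id -> ids of clauses used to derive it (invented example data).
derivations = {
    "s1_c1": [],                   # learned by solver 1 from the input formula
    "s2_c1": [],
    "s1_c2": ["s1_c1", "s2_c1"],   # uses a clause imported from solver 2
    "s2_c2": ["s2_c1"],
    "empty": ["s1_c2", "s2_c2"],   # the final empty clause closes the proof
}

linear_proof = list(TopologicalSorter(derivations).static_order())
print(linear_proof)  # dependencies always precede dependents, ending with "empty"
```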
Additional Links: PMID-40444145
@article {pmid40444145,
year = {2025},
author = {Michaelson, D and Schreiber, D and Heule, MJH and Kiesl-Reiter, B and Whalen, MW},
title = {Producing Proofs of Unsatisfiability with Distributed Clause-Sharing SAT Solvers.},
journal = {Journal of automated reasoning},
volume = {69},
number = {2},
pages = {12},
doi = {10.1007/s10817-025-09725-w},
pmid = {40444145},
issn = {1573-0670},
abstract = {Distributed clause-sharing SAT solvers can solve challenging problems hundreds of times faster than sequential SAT solvers by sharing derived information among multiple sequential solvers. Unlike sequential solvers, however, distributed solvers have not been able to produce proofs of unsatisfiability in a scalable manner, which limits their use in critical applications. In this work, we present a method to produce unsatisfiability proofs for distributed SAT solvers by combining the partial proofs produced by each sequential solver into a single, linear proof. We first describe a simple sequential algorithm and then present a fully distributed algorithm for proof composition, which is substantially more scalable and general than prior works. Our empirical evaluation with over 1500 solver threads shows that our distributed approach allows proof composition and checking within around 3 × its own (highly competitive) solving time.},
}
RevDate: 2025-05-29
Fault-tolerant and mobility-aware loading via Markov chain in mobile cloud computing.
Scientific reports, 15(1):18844.
With the development of better communication networks and other related technologies, the IoT has become an integral part of modern IT. However, mobile devices' limited memory, computing power, and battery life pose significant challenges to their widespread use. As an alternative, mobile cloud computing (MCC) makes good use of cloud resources to boost mobile devices' storage and processing capabilities: moving some program logic to the cloud improves performance and saves power. Techniques for mobility-aware offloading are necessary because device movement affects connection quality and network access. Reliance on less-than-ideal mobility models, insufficient fault tolerance, inaccurate offloading decisions, and poor task scheduling are just a few of the limitations that current mobility-aware offloading methods often face. Using fault-tolerant approaches and user mobility patterns modelled by a Markov chain, this research introduces a novel decision-making framework for mobility-aware offloading. The evaluation findings show that, compared with current approaches, the proposed method achieves execution speeds up to 77.35% faster and reduces energy use by up to 67.14%.
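As a hedged illustration of the mobility idea only (not the paper's model or parameters), the sketch below uses a Markov transition matrix over network zones to estimate how likely a device is to stay connected long enough for a remote task to finish, and offloads only when that probability clears a threshold. All numbers are invented.

```python
# Hedged sketch: Markov-chain mobility estimate driving a simple offload decision.
import numpy as np

# Transition matrix over zones A, B, C (rows: current zone, cols: next interval).
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
zones = ["A", "B", "C"]


def p_stay(zone: str, intervals: int) -> float:
    """Probability of remaining in the same zone for the next `intervals` steps."""
    i = zones.index(zone)
    return float(P[i, i]) ** intervals


def decide(zone: str, remote_intervals: int, threshold: float = 0.5) -> str:
    """Offload only if the device is likely to keep its connection until completion."""
    return "offload" if p_stay(zone, remote_intervals) >= threshold else "local"


print(decide("B", remote_intervals=2))  # 0.8**2 = 0.64 -> "offload"
print(decide("C", remote_intervals=2))  # 0.5**2 = 0.25 -> "local"
```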
Additional Links: PMID-40442244
@article {pmid40442244,
year = {2025},
author = {Wang, N and Li, Y and Li, Y and Nie, H},
title = {Fault-tolerant and mobility-aware loading via Markov chain in mobile cloud computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {18844},
pmid = {40442244},
issn = {2045-2322},
abstract = {With the development of better communication networks and other related technologies, the IoT has become an integral part of modern IT. However, mobile devices' limited memory, computing power, and battery life pose significant challenges to their widespread use. As an alternate, mobile cloud computing (MCC) makes good use of cloud resources to boost mobile devices' storage and processing capabilities. This involves moving some program logic to the cloud, which improves performance and saves power. Techniques for mobility-aware offloading are necessary because device movement affects connection quality and network access. Depending on less-than-ideal mobility models, insufficient fault tolerance, inaccurate offloading, and poor task scheduling are just a few of the limitations that current mobility-aware offloading methods often face. Using fault-tolerant approaches and user mobility patterns defined by a Markov chain, this research introduces a novel decision-making framework for mobility-aware offloading. The evaluation findings show that compared to current approaches, the suggested method achieves execution speeds up to 77.35% faster and energy use down to 67.14%.},
}
RevDate: 2025-05-29
CmpDate: 2025-05-29
TWINVAX: conceptual model of a digital twin for immunisation services in primary health care.
Frontiers in public health, 13:1568123.
INTRODUCTION: This paper presents a proposal for the modelling and reference architecture of a digital twin for immunisation services in primary health care centres. The system leverages Industry 4.0 concepts and technologies, such as the Internet of Things (IoT), machine learning, and cloud computing, to improve vaccination management and monitoring.
METHODS: The modelling was conducted using the Unified Modelling Language (UML) to define workflows and processes such as temperature monitoring of storage equipment and tracking of vaccination status. The proposed reference architecture follows the ISO 23247 standard and is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain.
RESULTS: The system enables the storage, monitoring, and visualisation of data related to the immunisation room, specifically concerning the temperature control of ice-lined refrigerators (ILRs) and thermal boxes. An analytic module has been developed to monitor vaccination coverage, correlating individual vaccination statuses with the official vaccination calendar.
DISCUSSION: The proposed digital twin improves vaccine temperature management, reduces vaccine dose wastage, monitors the population's vaccination status, and supports the planning of more effective immunisation actions. The article also discusses the feasibility, potential benefits, and future impacts of deploying this technology within immunisation services.
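As a hedged sketch of the temperature-monitoring rule in the observable-element domain (the code and helper names are illustrative, not from the paper; the 2-8 °C range is standard vaccine cold-chain guidance), a minimal alert check on an ILR reading might look like this:

```python
# Hedged sketch: flag ILR temperature readings outside the 2-8 C cold-chain range,
# the kind of event the digital twin would record and alert on.
from dataclasses import dataclass
from datetime import datetime

SAFE_MIN_C, SAFE_MAX_C = 2.0, 8.0


@dataclass
class Reading:
    refrigerator_id: str
    timestamp: datetime
    temperature_c: float


def check(reading: Reading) -> str | None:
    """Return an alert message if the reading breaches the cold-chain range."""
    if not (SAFE_MIN_C <= reading.temperature_c <= SAFE_MAX_C):
        return (f"ALERT {reading.refrigerator_id}: {reading.temperature_c:.1f} C "
                f"at {reading.timestamp:%Y-%m-%d %H:%M}")
    return None


r = Reading("ILR-03", datetime(2025, 5, 29, 14, 0), 9.4)
print(check(r))  # breach above 8 C -> alert string
```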
Additional Links: PMID-40438062
@article {pmid40438062,
year = {2025},
author = {De Oliveira El-Warrak, L and Miceli de Farias, C},
title = {TWINVAX: conceptual model of a digital twin for immunisation services in primary health care.},
journal = {Frontiers in public health},
volume = {13},
number = {},
pages = {1568123},
pmid = {40438062},
issn = {2296-2565},
mesh = {Humans ; *Primary Health Care/organization & administration ; *Vaccination ; *Immunization Programs/organization & administration ; Machine Learning ; Cloud Computing ; },
abstract = {INTRODUCTION: This paper presents a proposal for the modelling and reference architecture of a digital twin for immunisation services in primary health care centres. The system leverages Industry 4.0 concepts and technologies, such as the Internet of Things (IoT), machine learning, and cloud computing, to improve vaccination management and monitoring.
METHODS: The modelling was conducted using the Unified Modelling Language (UML) to define workflows and processes such as temperature monitoring of storage equipment and tracking of vaccination status. The proposed reference architecture follows the ISO 23247 standard and is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain.
RESULTS: The system enables the storage, monitoring, and visualisation of data related to the immunisation room, specifically concerning the temperature control of ice-lined refrigerators (ILRs) and thermal boxes. An analytic module has been developed to monitor vaccination coverage, correlating individual vaccination statuses with the official vaccination calendar.
DISCUSSION: The proposed digital twin improves vaccine temperature management, reduces vaccine dose wastage, monitors the population's vaccination status, and supports the planning of more effective immunisation actions. The article also discusses the feasibility, potential benefits, and future impacts of deploying this technology within immunisation services.},
}
MeSH Terms:
Humans
*Primary Health Care/organization & administration
*Vaccination
*Immunization Programs/organization & administration
Machine Learning
Cloud Computing
RevDate: 2025-05-28
Enhancing Security in CPS Industry 5.0 using Lightweight MobileNetV3 with Adaptive Optimization Technique.
Scientific reports, 15(1):18677.
Advanced Cyber-Physical Systems (CPS) that facilitate seamless communication between humans, machines, and objects are revolutionizing industrial automation as part of Industry 5.0, driven by technologies such as IIoT, cloud computing, and artificial intelligence. In addition to enabling flexible, individualized production processes, this growth brings fresh cybersecurity risks, including Distributed Denial of Service (DDoS) attacks. To address these issues, this research proposes a deep learning-based approach designed to enhance security in CPS, with the primary goal of identifying and stopping advanced cyberattacks and providing strong protection for industrial processes in a networked, intelligent environment. The study offers a framework for improving CPS security in Industry 5.0 by combining effective data preprocessing, lightweight edge computing, and strong encryption methods. The method starts with preprocessing the IoT23 dataset, using Gaussian filters to reduce noise, mean imputation to handle missing values, and Min-Max normalization for data scaling. The model extracts flow-based, time-based, statistical, and deep features, with ResNet-101 used for deep feature extraction. Computational efficiency is maximized through MobileNetV3, a lightweight convolutional neural network optimized for mobile and edge devices. The accuracy of the model is further improved by applying a Chaotic Tent-based Puma Optimization (CTPOA) technique. Finally, to ensure secure data transfer and protect private data in CPS settings, AES encryption is combined with discretionary access control. This comprehensive framework achieves 99.91% accuracy while providing strong security for Industry 5.0 applications.
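For readers unfamiliar with the preprocessing steps named above, here is a hedged sketch of mean imputation, Gaussian smoothing, and Min-Max scaling in Python; the toy matrix is invented and this is not the paper's pipeline:

# Hypothetical sketch of the kind of preprocessing the abstract describes
# (mean imputation, Gaussian smoothing, min-max scaling); values are invented.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

X = np.array([
    [0.2, 10.0, np.nan],
    [0.4, 12.0, 3.0],
    [0.1, np.nan, 2.5],
    [0.9, 15.0, 4.0],
], dtype=float)

X = SimpleImputer(strategy="mean").fit_transform(X)   # fill missing values
X = gaussian_filter1d(X, sigma=1.0, axis=0)           # smooth noise per feature
X = MinMaxScaler().fit_transform(X)                   # scale to [0, 1]
print(X.round(3))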
Additional Links: PMID-40436957
@article {pmid40436957,
year = {2025},
author = {Aleisa, MA},
title = {Enhancing Security in CPS Industry 5.0 using Lightweight MobileNetV3 with Adaptive Optimization Technique.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {18677},
pmid = {40436957},
issn = {2045-2322},
support = {(R-2025-1558)//Majmaah University/ ; },
abstract = {Advanced Cyber-Physical Systems (CPS) that facilitate seamless communication between humans, machines, and objects are revolutionizing industrial automation as part of Industry 5.0, which is being driven by technologies such as IIoT, cloud computing, and artificial intelligence. In addition to providing flexible, individualized production processes, this growth brings with it fresh cybersecurity risks including Distributed Denial of Service (DDoS) attacks. This research suggests a deep learning-based approach designed to enhance security in CPS to address these issues. The system's primary goal is to identify and stop advanced cyberattacks. The strategy guarantees strong protection for industrial processes in a networked, intelligent environment. This study offers a sophisticated paradigm for improving Cyber-Physical Systems (CPS) security in Industry 5.0 by combining effective data preprocessing, thin edge computing, and strong encryption methods. The method starts with preprocessing the IoT23 dataset, which includes utilizing Gaussian filters to reduce noise, Mean Imputation to handle missing values, and Min-Max normalization to data scaling. The model uses flow-based, time-based, statistical, and deep feature extraction using ResNet-101 for feature extraction. Computational efficiency is maximized through the implementation of MobileNetV3, a thin convolutional neural network optimized for mobile and edge devices. The accuracy of the model is further improved by applying a Chaotic Tent-based Puma Optimization (CTPOA) technique. Finally, to ensure secure data transfer and protect private data in CPS settings, AES encryption is combined with discretionary access control. This comprehensive framework enables high performance, achieving 99.91% accuracy, and provides strong security for Industry 5.0 applications.},
}
RevDate: 2025-05-28
Mitigating malicious denial of wallet attack using attribute reduction with deep learning approach for serverless computing on next generation applications.
Scientific reports, 15(1):18720.
Denial of Wallet (DoW) attacks are a class of cyberattack that aims to exhaust an organization's financial resources by driving up costs in its serverless computing or cloud environment. These threats chiefly affect serverless architectures owing to features such as auto-scaling, pay-as-you-go billing, cost amplification, and limited control. Serverless computing, or Function-as-a-Service (FaaS), is a cloud computing (CC) model that lets developers build and run applications without a conventional server infrastructure. Deep learning (DL), a branch of machine learning (ML), has emerged as an effective tool in cybersecurity, enabling more reliable recognition of anomalous behaviour and classification of patterns indicative of threats. This study proposes a Mitigating Malicious Denial of Wallet Attack using Attribute Reduction with Deep Learning (MMDoWA-ARDL) approach for serverless computing on next-generation applications. The primary purpose of the MMDoWA-ARDL approach is to provide a framework that effectively detects and mitigates malicious attacks in serverless environments using an advanced deep-learning model. Initially, the MMDoWA-ARDL model applies data pre-processing using Z-score normalization to transform input data into a valid format. Next, a cuckoo search optimization (CSO)-based feature selection process identifies the attributes most indicative of potential malicious activity. For DoW attack mitigation, a bi-directional long short-term memory multi-head self-attention network (BMNet) is employed. Finally, hyperparameter tuning is performed with the secretary bird optimizer algorithm (SBOA) to enhance the classification outcomes of the BMNet model. A wide-ranging experimental investigation on a benchmark dataset demonstrates the superior performance of the proposed MMDoWA-ARDL technique, which achieved an accuracy of 99.39%, higher than existing techniques.
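A toy sketch of the first step described above, Z-score normalization of per-window invocation metrics, with a naive screening rule added purely for illustration (the numbers and threshold are invented, not the paper's):

# Hypothetical sketch: Z-score normalisation of per-window invocation metrics.
import numpy as np

# columns: invocations/min, avg duration (ms), billed GB-seconds
windows = np.array([
    [120,  80, 0.9],
    [120,  85, 1.0],
    [4000, 90, 30.0],   # suspicious burst
    [125,  82, 0.95],
], dtype=float)

z = (windows - windows.mean(axis=0)) / windows.std(axis=0)

# naive screening rule, shown only to illustrate how the normalised
# features would feed a downstream detector:
suspect = np.abs(z).max(axis=1) > 1.5
print(suspect)   # -> [False False  True False]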
Additional Links: PMID-40436925
@article {pmid40436925,
year = {2025},
author = {Alkhalifa, AK and Aljebreen, M and Alanazi, R and Ahmad, N and Alahmari, S and Alrusaini, O and Alqazzaz, A and Alkhiri, H},
title = {Mitigating malicious denial of wallet attack using attribute reduction with deep learning approach for serverless computing on next generation applications.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {18720},
pmid = {40436925},
issn = {2045-2322},
abstract = {Denial of Wallet (DoW) attacks are one kind of cyberattack whose goal is to develop and expand the financial sources of a group by causing extreme costs in their serverless computing or cloud environments. These threats are chiefly related to serverless structures owing to their features, such as auto-scaling, pay-as-you-go method, cost amplification, and limited control. Serverless computing, Function-as-a-Service (FaaS), is a cloud computing (CC) system that permits developers to construct and run applications without a conventional server substructure. The deep learning (DL) model, a part of the machine learning (ML) technique, has developed as an effectual device in cybersecurity, permitting more effectual recognition of anomalous behaviour and classifying patterns indicative of threats. This study proposes a Mitigating Malicious Denial of Wallet Attack using Attribute Reduction with Deep Learning (MMDoWA-ARDL) approach for serverless computing on next-generation applications. The primary purpose of the MMDoWA-ARDL approach is to propose a novel framework that effectively detects and mitigates malicious attacks in serverless environments using an advanced deep-learning model. Initially, the presented MMDoWA-ARDL model applies data pre-processing using Z-score normalization to transform input data into a valid format. Furthermore, the feature selection process-based cuckoo search optimization (CSO) model efficiently identifies the most impactful attributes related to potential malicious activity. For the DoW attack mitigation process, the bi-directional long short-term memory multi-head self-attention network (BMNet) method is employed. Finally, the hyperparameter tuning is accomplished by implementing the secretary bird optimizer algorithm (SBOA) method to enhance the classification outcomes of the BMNet model. A wide-ranging experimental investigation uses a benchmark dataset to exhibit the superior performance of the proposed MMDoWA-ARDL technique. The comparison study of the MMDoWA-ARDL model portrayed a superior accuracy value of 99.39% over existing techniques.},
}
RevDate: 2025-05-28
Tiny Machine Learning and On-Device Inference: A Survey of Applications, Challenges, and Future Directions.
Sensors (Basel, Switzerland), 25(10): pii:s25103191.
The growth in artificial intelligence and its applications has led to increased data processing and inference requirements. Traditional cloud-based inference solutions are often used but may prove inadequate for applications requiring near-instantaneous response times. This review examines Tiny Machine Learning, also known as TinyML, as an alternative to cloud-based inference. The review focuses on applications where transmission delays make traditional Internet of Things (IoT) approaches impractical, thus necessitating a solution that uses TinyML and on-device inference. This study, which follows the PRISMA guidelines, covers TinyML's use cases for real-world applications by analyzing experimental studies and synthesizing current research on the characteristics of TinyML experiments, such as machine learning techniques and the hardware used for experiments. This review identifies existing gaps in research as well as the means to address these gaps. The review findings suggest that TinyML has a strong record of real-world usability and offers advantages over cloud-based inference, particularly in environments with bandwidth constraints and use cases that require rapid response times. This review discusses the implications of TinyML's experimental performance for future research on TinyML applications.
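As a generic example of the on-device inference the survey discusses, the following sketch runs a quantized model with the TensorFlow Lite interpreter; the model file is a placeholder and the snippet is not tied to any study in the review:

# Minimal on-device inference sketch with TensorFlow Lite; "model.tflite"
# is a placeholder for any quantised model deployed to the device.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# fabricate one input of the right shape/dtype purely for illustration
x = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))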
Additional Links: PMID-40431982
@article {pmid40431982,
year = {2025},
author = {Heydari, S and Mahmoud, QH},
title = {Tiny Machine Learning and On-Device Inference: A Survey of Applications, Challenges, and Future Directions.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {10},
pages = {},
doi = {10.3390/s25103191},
pmid = {40431982},
issn = {1424-8220},
support = {2022-04487//Natural Sciences and Engineering Council of Canada/ ; },
abstract = {The growth in artificial intelligence and its applications has led to increased data processing and inference requirements. Traditional cloud-based inference solutions are often used but may prove inadequate for applications requiring near-instantaneous response times. This review examines Tiny Machine Learning, also known as TinyML, as an alternative to cloud-based inference. The review focuses on applications where transmission delays make traditional Internet of Things (IoT) approaches impractical, thus necessitating a solution that uses TinyML and on-device inference. This study, which follows the PRISMA guidelines, covers TinyML's use cases for real-world applications by analyzing experimental studies and synthesizing current research on the characteristics of TinyML experiments, such as machine learning techniques and the hardware used for experiments. This review identifies existing gaps in research as well as the means to address these gaps. The review findings suggest that TinyML has a strong record of real-world usability and offers advantages over cloud-based inference, particularly in environments with bandwidth constraints and use cases that require rapid response times. This review discusses the implications of TinyML's experimental performance for future research on TinyML applications.},
}
RevDate: 2025-05-27
Dynamic task allocation in fog computing using enhanced fuzzy logic approaches.
Scientific reports, 15(1):18513.
Fog computing extends cloud services to the edge of the network, enabling low-latency processing and improved resource utilization, which are crucial for real-time Internet of Things (IoT) applications. However, efficient task allocation remains a significant challenge due to the dynamic and heterogeneous nature of fog environments. Traditional task scheduling methods often fail to manage uncertainty in task requirements and resource availability, leading to suboptimal performance. In this paper, we propose a novel approach, DTA-FLE (Dynamic Task Allocation in Fog computing using a Fuzzy Logic Enhanced approach), which leverages fuzzy logic to handle the inherent uncertainty in task scheduling. Our method dynamically adapts to changing network conditions, optimizing task allocation to improve efficiency, reduce latency, and enhance overall system performance. Unlike conventional approaches, DTA-FLE introduces a novel hierarchical scheduling mechanism that dynamically adapts to real-time network conditions using fuzzy logic, ensuring optimal task allocation and improved system responsiveness. Through simulations using the iFogSim framework, we demonstrate that DTA-FLE outperforms conventional techniques in terms of execution time, resource utilization, and responsiveness, making it particularly suitable for real-time IoT applications within hierarchical fog-cloud architectures.
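A minimal, hypothetical illustration of fuzzy scoring for node selection (triangular membership functions combined with a min conjunction); the membership breakpoints and node figures are invented and do not reflect DTA-FLE's actual rule base:

# Hypothetical sketch of a fuzzy scoring rule for fog-node selection.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(cpu_load_pct, latency_ms):
    low_load = tri(cpu_load_pct, -1, 0, 60)
    low_latency = tri(latency_ms, -1, 0, 40)
    # simple conjunction (min) of the two fuzzy degrees
    return min(low_load, low_latency)

nodes = {"fog-1": (35, 12), "fog-2": (70, 8), "cloud": (20, 90)}
best = max(nodes, key=lambda n: suitability(*nodes[n]))
print(best)   # picks the node with the best combined fuzzy score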
Additional Links: PMID-40425663
@article {pmid40425663,
year = {2025},
author = {Jin, W and Rezaeipanah, A},
title = {Dynamic task allocation in fog computing using enhanced fuzzy logic approaches.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {18513},
pmid = {40425663},
issn = {2045-2322},
abstract = {Fog computing extends cloud services to the edge of the network, enabling low-latency processing and improved resource utilization, which are crucial for real-time Internet of Things (IoT) applications. However, efficient task allocation remains a significant challenge due to the dynamic and heterogeneous nature of fog environments. Traditional task scheduling methods often fail to manage uncertainty in task requirements and resource availability, leading to suboptimal performance. In this paper, we propose a novel approach, DTA-FLE (Dynamic Task Allocation in Fog computing using a Fuzzy Logic Enhanced approach), which leverages fuzzy logic to handle the inherent uncertainty in task scheduling. Our method dynamically adapts to changing network conditions, optimizing task allocation to improve efficiency, reduce latency, and enhance overall system performance. Unlike conventional approaches, DTA-FLE introduces a novel hierarchical scheduling mechanism that dynamically adapts to real-time network conditions using fuzzy logic, ensuring optimal task allocation and improved system responsiveness. Through simulations using the iFogSim framework, we demonstrate that DTA-FLE outperforms conventional techniques in terms of execution time, resource utilization, and responsiveness, making it particularly suitable for real-time IoT applications within hierarchical fog-cloud architectures.},
}
RevDate: 2025-05-27
Brute-force attack mitigation on remote access services via software-defined perimeter.
Scientific reports, 15(1):18599.
Remote Access Services (RAS)-including protocols such as Remote Desktop Protocol (RDP), Secure Shell (SSH), Virtual Network Computing (VNC), Telnet, File Transfer Protocol (FTP), and Secure File Transfer Protocol (SFTP)-are essential to modern network infrastructures, particularly with the rise of remote work and cloud adoption. However, their exposure significantly increases the risk of brute-force attacks (BFA), where adversaries systematically guess credentials to gain unauthorized access. Traditional defenses like IP blocklisting and multifactor authentication (MFA) often struggle with scalability and adaptability to distributed attacks. This study introduces a zero-trust-aligned Software-Defined Perimeter (SDP) architecture that integrates Single Packet Authorization (SPA) for service cloaking and Connection Tracking (ConnTrack) for real-time session analysis. A Docker-based prototype was developed and tested, demonstrating that no successful BFA attempts were observed, that latency was reduced by more than 10% across all evaluated RAS protocols, and that system CPU utilization fell by 48.7% under attack conditions without impacting normal throughput. It also proved effective against connection-oriented attacks, including port scanning and distributed denial of service (DDoS) attacks. The proposed architecture offers a scalable and efficient security framework by embedding proactive defense at the authentication layer. This work advances zero-trust implementations and delivers practical, low-overhead protection for securing RAS against evolving cyber threats.
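To illustrate the Single Packet Authorization concept (a service stays cloaked until a client presents a fresh keyed MAC), here is a hedged, self-contained sketch; it is a conceptual illustration only, not the paper's SDP implementation:

# Hypothetical sketch of the idea behind Single Packet Authorization (SPA).
import hmac, hashlib, time

SHARED_KEY = b"demo-key-rotate-me"
WINDOW_S = 30   # how long an authorization packet stays valid

def make_spa_packet(client_id: str) -> bytes:
    ts = str(int(time.time()))
    msg = f"{client_id}|{ts}".encode()
    mac = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return msg + b"|" + mac.encode()

def verify_spa_packet(packet: bytes) -> bool:
    try:
        client_id, ts, mac = packet.decode().split("|")
    except ValueError:
        return False
    if abs(time.time() - int(ts)) > WINDOW_S:
        return False                      # replayed or stale packet
    expected = hmac.new(SHARED_KEY, f"{client_id}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

print(verify_spa_packet(make_spa_packet("laptop-42")))   # True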
Additional Links: PMID-40425607
@article {pmid40425607,
year = {2025},
author = {Ruambo, FA and Masanga, EE and Lufyagila, B and Ateya, AA and Abd El-Latif, AA and Almousa, M and Abd-El-Atty, B},
title = {Brute-force attack mitigation on remote access services via software-defined perimeter.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {18599},
pmid = {40425607},
issn = {2045-2322},
abstract = {Remote Access Services (RAS)-including protocols such as Remote Desktop Protocol (RDP), Secure Shell (SSH), Virtual Network Computing (VNC), Telnet, File Transfer Protocol (FTP), and Secure File Transfer Protocol (SFTP)-are essential to modern network infrastructures, particularly with the rise of remote work and cloud adoption. However, their exposure significantly increases the risk of brute-force attacks (BFA), where adversaries systematically guess credentials to gain unauthorized access. Traditional defenses like IP blocklisting and multifactor authentication (MFA) often struggle with scalability and adaptability to distributed attacks. This study introduces a zero-trust-aligned Software-Defined Perimeter (SDP) architecture that integrates Single Packet Authorization (SPA) for service cloaking and Connection Tracking (ConnTrack) for real-time session analysis. A Docker-based prototype was developed and tested, demonstrating no successful BFA attempts observed, latency reduction by above 10% across all evaluated RAS protocols, and the system CPU utilization reduction by 48.7% under attack conditions without impacting normal throughput. It also proved effective against connection-oriented attacks, including port scanning and distributed denial of service (DDoS) attacks. The proposed architecture offers a scalable and efficient security framework by embedding proactive defense at the authentication layer. This work advances zero-trust implementations and delivers practical, low-overhead protection for securing RAS against evolving cyber threats.},
}
RevDate: 2025-05-26
CmpDate: 2025-05-26
OCTOPUS: Disk-based, Multiplatform, Mobile-friendly Metagenomics Classifier.
AMIA ... Annual Symposium proceedings. AMIA Symposium, 2024:798-807.
Portable genomic sequencers such as Oxford Nanopore's MinION enable real-time applications in clinical and environmental health. However, there is a bottleneck in the downstream analytics when bioinformatics pipelines are unavailable, e.g., when cloud processing is unreachable due to absence of Internet connection, or only low-end computing devices can be carried on site. Here we present platform-friendly software for portable metagenomic analysis of Nanopore data, the Oligomer-based Classifier of Taxonomic Operational and Pan-genome Units via Singletons (OCTOPUS). OCTOPUS is written in Java and reimplements several features of the popular Kraken2 and KrakenUniq software, with original components for improving metagenomics classification on incomplete/sampled reference databases, making it ideal for running on smartphones or tablets. OCTOPUS obtains sensitivity and precision comparable to Kraken2, while dramatically decreasing (4- to 16-fold) the false positive rate, and yields high correlation on real-world data. OCTOPUS is available along with customized databases at https://github.com/DataIntellSystLab/OCTOPUS and https://github.com/Ruiz-HCI-Lab/OctopusMobile.
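As a conceptual illustration of oligomer (k-mer) profiling, the general idea behind k-mer-based classifiers such as Kraken2 and OCTOPUS, the following toy sketch scores a read against two reference sequences; it is not OCTOPUS's actual algorithm:

# Hypothetical sketch of k-mer (oligomer) profiling for read classification.
from collections import Counter

def kmer_profile(seq: str, k: int = 5) -> Counter:
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def shared_kmers(read: str, reference: str, k: int = 5) -> int:
    """Crude score: how many of the read's k-mers occur in the reference."""
    ref_kmers = set(kmer_profile(reference, k))
    return sum(c for kmer, c in kmer_profile(read, k).items() if kmer in ref_kmers)

read = "ACGTACGTTGCA"
refs = {"taxonA": "ACGTACGTACGTACGT", "taxonB": "TTTTGGGGCCCCAAAA"}
print(max(refs, key=lambda t: shared_kmers(read, refs[t])))   # -> taxonA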
Additional Links: PMID-40417475
@article {pmid40417475,
year = {2024},
author = {Marini, S and Barquero, A and Wadhwani, AA and Bian, J and Ruiz, J and Boucher, C and Prosperi, M},
title = {OCTOPUS: Disk-based, Multiplatform, Mobile-friendly Metagenomics Classifier.},
journal = {AMIA ... Annual Symposium proceedings. AMIA Symposium},
volume = {2024},
number = {},
pages = {798-807},
pmid = {40417475},
issn = {1942-597X},
mesh = {*Metagenomics/methods ; *Software ; *Mobile Applications ; },
abstract = {Portable genomic sequencers such as Oxford Nanopore's MinION enable real-time applications in clinical and environmental health. However, there is a bottleneck in the downstream analytics when bioinformatics pipelines are unavailable, e.g., when cloud processing is unreachable due to absence of Internet connection, or only low-end computing devices can be carried on site. Here we present a platform-friendly software for portable metagenomic analysis of Nanopore data, the Oligomer-based Classifier of Taxonomic Operational and Pan-genome Units via Singletons (OCTOPUS). OCTOPUS is written in Java, reimplements several features of the popular Kraken2 and KrakenUniq software, with original components for improving metagenomics classification on incomplete/sampled reference databases, making it ideal for running on smartphones or tablets. OCTOPUS obtains sensitivity and precision comparable to Kraken2, while dramatically decreasing (4- to 16-fold) the false positive rate, and yielding high correlation on real-word data. OCTOPUS is available along with customized databases at https://github.com/DataIntellSystLab/OCTOPUS and https://github.com/Ruiz-HCI-Lab/OctopusMobile.},
}
MeSH Terms:
*Metagenomics/methods
*Software
*Mobile Applications
RevDate: 2025-05-26
Automated multi-instance REDCap data synchronization for NIH clinical trial networks.
JAMIA open, 8(3):ooaf036.
OBJECTIVES: The main goal is to develop an automated process for connecting Research Electronic Data Capture (REDCap) instances in a clinical trial network to allow for deidentified transfer of research surveys to cloud computing data commons for discovery.
MATERIALS AND METHODS: To automate the process of consolidating data from remote clinical trial sites into 1 dataset at the coordinating/storage site, we developed a Hypertext Preprocessor script that operates in tandem with a server-side scheduling system (eg, Cron) to set up practical data extraction schedules for each remote site.
RESULTS: The REDCap Application Programming Interface (API) Connection provides a novel implementation for automated synchronization between multiple REDCap instances across a distributed clinical trial network, enabling secure and efficient data transfer between study sites and coordination centers. Additionally, the protocol checker allows for automated reporting on conforming to planned data library protocols.
DISCUSSION: Working from a shared and accepted core library of REDCap surveys was critical to the success of this implementation. This model also facilitates Institutional Review Board (IRB) approvals because the coordinating center can designate which surveys and data elements to be transferred. Hence, protected health information can be transformed or withheld depending on the permission given by the IRB at the coordinating center level. For the NIH HEAL clinical trial networks, this unified data collection works toward the goal of creating a deidentified dataset for transfer to a Gen3 data commons.
CONCLUSION: We established several simple and research-relevant tools, REDCAP API Connection and REDCAP Protocol Check, to support the emerging needs of clinical trial networks with increased data harmonization complexity.
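The paper's pipeline uses a PHP script scheduled by cron; as a hedged illustration of the same REDCap API export pattern, the Python sketch below posts a record-export request to a remote instance (the URL, token, and form name are placeholders, not the project's actual configuration):

# Hypothetical sketch of pulling de-identified records from a remote REDCap
# instance over its API; URL, token, and field list are placeholders.
import requests

REDCAP_URL = "https://redcap.example.org/api/"
API_TOKEN = "REPLACE_WITH_SITE_TOKEN"

payload = {
    "token": API_TOKEN,
    "content": "record",
    "format": "json",
    "type": "flat",
    # export only pre-approved, de-identified instruments/fields
    "forms[0]": "core_survey",
}

resp = requests.post(REDCAP_URL, data=payload, timeout=60)
resp.raise_for_status()
records = resp.json()
print(f"exported {len(records)} records")

A server-side scheduler such as cron would then invoke a script of this kind on each remote site's agreed extraction schedule.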
Additional Links: PMID-40417400
@article {pmid40417400,
year = {2025},
author = {Adams, MCB and Hudson, C and Chen, W and Hurley, RW and Topaloglu, U},
title = {Automated multi-instance REDCap data synchronization for NIH clinical trial networks.},
journal = {JAMIA open},
volume = {8},
number = {3},
pages = {ooaf036},
pmid = {40417400},
issn = {2574-2531},
abstract = {OBJECTIVES: The main goal is to develop an automated process for connecting Research Electronic Data Capture (REDCap) instances in a clinical trial network to allow for deidentified transfer of research surveys to cloud computing data commons for discovery.
MATERIALS AND METHODS: To automate the process of consolidating data from remote clinical trial sites into 1 dataset at the coordinating/storage site, we developed a Hypertext Preprocessor script that operates in tandem with a server-side scheduling system (eg, Cron) to set up practical data extraction schedules for each remote site.
RESULTS: The REDCap Application Programming Interface (API) Connection provides a novel implementation for automated synchronization between multiple REDCap instances across a distributed clinical trial network, enabling secure and efficient data transfer between study sites and coordination centers. Additionally, the protocol checker allows for automated reporting on conforming to planned data library protocols.
DISCUSSION: Working from a shared and accepted core library of REDCap surveys was critical to the success of this implementation. This model also facilitates Institutional Review Board (IRB) approvals because the coordinating center can designate which surveys and data elements to be transferred. Hence, protected health information can be transformed or withheld depending on the permission given by the IRB at the coordinating center level. For the NIH HEAL clinical trial networks, this unified data collection works toward the goal of creating a deidentified dataset for transfer to a Gen3 data commons.
CONCLUSION: We established several simple and research-relevant tools, REDCAP API Connection and REDCAP Protocol Check, to support the emerging needs of clinical trial networks with increased data harmonization complexity.},
}
RevDate: 2025-05-26
Innovative Artificial Intelligence System in the Children's Hospital in Japan.
JMA journal, 8(2):354-360.
The evolution of innovative artificial intelligence (AI) systems in pediatric hospitals in Japan promises benefits for patients and healthcare providers. We actively contribute to advancements in groundbreaking medical treatments by leveraging deep learning technology and using vast medical datasets. Our team of data scientists closely collaborates with departments within the hospital. Our research themes based on deep learning are wide-ranging, including acceleration of pathological diagnosis using image data, distinguishing of bacterial species, early detection of eye diseases, and prediction of genetic disorders from physical features. Furthermore, we implement Information and Communication Technology to diagnose pediatric cancer. Moreover, we predict immune responses based on genomic data and diagnose autism by quantifying behavior and communication. Our expertise extends beyond research to provide comprehensive AI development services, including data collection, annotation, high-speed computing, utilization of machine learning frameworks, design of web services, and containerization. In addition, as active members of medical AI platform collaboration partnerships, we provide unique data and analytical technologies to facilitate the development of AI development platforms. Furthermore, we address the challenges of securing medical data in the cloud to ensure compliance with stringent confidentiality standards. We will discuss AI's advancements in pediatric hospitals and their challenges.
Additional Links: PMID-40415999
@article {pmid40415999,
year = {2025},
author = {Umezawa, A and Nakamura, K and Kasahara, M and Igarashi, T},
title = {Innovative Artificial Intelligence System in the Children's Hospital in Japan.},
journal = {JMA journal},
volume = {8},
number = {2},
pages = {354-360},
pmid = {40415999},
issn = {2433-3298},
abstract = {The evolution of innovative artificial intelligence (AI) systems in pediatric hospitals in Japan promises benefits for patients and healthcare providers. We actively contribute to advancements in groundbreaking medical treatments by leveraging deep learning technology and using vast medical datasets. Our team of data scientists closely collaborates with departments within the hospital. Our research themes based on deep learning are wide-ranging, including acceleration of pathological diagnosis using image data, distinguishing of bacterial species, early detection of eye diseases, and prediction of genetic disorders from physical features. Furthermore, we implement Information and Communication Technology to diagnose pediatric cancer. Moreover, we predict immune responses based on genomic data and diagnose autism by quantifying behavior and communication. Our expertise extends beyond research to provide comprehensive AI development services, including data collection, annotation, high-speed computing, utilization of machine learning frameworks, design of web services, and containerization. In addition, as active members of medical AI platform collaboration partnerships, we provide unique data and analytical technologies to facilitate the development of AI development platforms. Furthermore, we address the challenges of securing medical data in the cloud to ensure compliance with stringent confidentiality standards. We will discuss AI's advancements in pediatric hospitals and their challenges.},
}
RevDate: 2025-05-25
Federated learning-based non-intrusive load monitoring adaptive to real-world heterogeneities.
Scientific reports, 15(1):18223.
Non-intrusive load monitoring (NILM) is a key way to cost-effectively acquire appliance-level information in advanced metering infrastructure (AMI). Recently, federated learning has enabled NILM to learn from decentralized meter data while preserving privacy. However, as real-world heterogeneities in electricity consumption data, local models, and AMI facilities cannot be eliminated in advance, federated learning-based NILM (FL-NILM) may underperform or even fail. Therefore, we propose a FL-NILM method adaptive to these heterogeneities. To fully leverage diverse electricity consumption data, dynamic clustering is integrated into cloud aggregation to hierarchically mitigate the global-local bias in knowledge required for NILM. Meanwhile, adaptive model initialization is applied in local training to balance biased global knowledge with local accumulated knowledge, enhancing the learning of heterogeneous data. To further handle heterogeneous local NILM models, homogeneous proxy models are used for global-local iteration through knowledge distillation. In addition, a weighted aggregation mechanism with a cache pool is designed for adapting to asynchronous iteration caused by heterogeneous AMI facilities. Experiments on public datasets show that the proposed method outperforms existing methods in both synchronous and asynchronous settings. The proposed method's advantages in computing and communication complexity are also discussed.
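As a simplified illustration of the weighted cloud aggregation step (FedAvg-style averaging of client updates), the sketch below weights each client's parameters by its data size; the paper's method layers clustering, knowledge distillation, and an asynchronous cache pool on top of such a step:

# Hypothetical sketch of weighted aggregation of client model updates.
import numpy as np

def weighted_aggregate(client_weights, client_sizes):
    """Average per-layer parameters, weighting each client by its data size."""
    total = float(sum(client_sizes))
    agg = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            agg[i] += layer * (n / total)
    return agg

clients = [
    [np.array([1.0, 2.0]), np.array([[0.5]])],
    [np.array([3.0, 4.0]), np.array([[1.5]])],
]
print(weighted_aggregate(clients, client_sizes=[100, 300]))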
Additional Links: PMID-40415054
@article {pmid40415054,
year = {2025},
author = {Luo, Q and Lan, C and Yu, T and Liang, M and Xiao, W and Pan, Z},
title = {Federated learning-based non-intrusive load monitoring adaptive to real-world heterogeneities.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {18223},
pmid = {40415054},
issn = {2045-2322},
support = {U24B6010, 52207105//National Natural Science Foundation of China/ ; U24B6010, 52207105//National Natural Science Foundation of China/ ; U24B6010, 52207105//National Natural Science Foundation of China/ ; U24B6010, 52207105//National Natural Science Foundation of China/ ; U24B6010, 52207105//National Natural Science Foundation of China/ ; U24B6010, 52207105//National Natural Science Foundation of China/ ; },
abstract = {Non-intrusive load monitoring (NILM) is a key way to cost-effectively acquire appliance-level information in advanced metering infrastructure (AMI). Recently, federated learning has enabled NILM to learn from decentralized meter data while preserving privacy. However, as real-world heterogeneities in electricity consumption data, local models, and AMI facilities cannot be eliminated in advance, federated learning-based NILM (FL-NILM) may underperform or even fail. Therefore, we propose a FL-NILM method adaptive to these heterogeneities. To fully leverage diverse electricity consumption data, dynamic clustering is integrated into cloud aggregation to hierarchically mitigate the global-local bias in knowledge required for NILM. Meanwhile, adaptive model initialization is applied in local training to balance biased global knowledge with local accumulated knowledge, enhancing the learning of heterogeneous data. To further handle heterogeneous local NILM models, homogeneous proxy models are used for global-local iteration through knowledge distillation. In addition, a weighted aggregation mechanism with a cache pool is designed for adapting to asynchronous iteration caused by heterogeneous AMI facilities. Experiments on public datasets show that the proposed method outperforms existing methods in both synchronous and asynchronous settings. The proposed method's advantages in computing and communication complexity are also discussed.},
}
RevDate: 2025-05-23
CmpDate: 2025-05-23
An intelligent framework for crop health surveillance and disease management.
PloS one, 20(5):e0324347 pii:PONE-D-24-46508.
The agricultural sector faces critical challenges, including significant crop losses due to undetected plant diseases, inefficient monitoring systems, and delays in disease management, all of which threaten food security worldwide. Traditional approaches to disease detection are often labor-intensive, time-consuming, and prone to errors, making early intervention difficult. This paper proposes an intelligent framework for automated crop health monitoring and early disease detection to overcome these limitations. The system leverages deep learning, cloud computing, embedded devices, and the Internet of Things (IoT) to provide real-time insights into plant health over large agricultural areas. The primary goal is to enhance early detection accuracy and recommend effective disease management strategies, including crop rotation and targeted treatment. Additionally, environmental parameters such as temperature, humidity, and water levels are continuously monitored to aid in informed decision-making. The proposed framework incorporates Convolutional Neural Network (CNN), MobileNet-1, MobileNet-2, Residual Network (ResNet-50), and ResNet-50 with InceptionV3 to ensure precise disease identification and improved agricultural productivity.
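A hedged sketch of the kind of transfer-learning classifier such a framework might use, here with a frozen MobileNetV2 backbone in Keras; the class count and input size are placeholders rather than the paper's configuration:

# Hypothetical sketch of a transfer-learning crop-disease classifier.
import tensorflow as tf

NUM_CLASSES = 10   # placeholder for the number of disease categories

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()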
Additional Links: PMID-40408612
@article {pmid40408612,
year = {2025},
author = {Ayid, YM and Fouad, Y and Kaddes, M and El-Hoseny, HM},
title = {An intelligent framework for crop health surveillance and disease management.},
journal = {PloS one},
volume = {20},
number = {5},
pages = {e0324347},
doi = {10.1371/journal.pone.0324347},
pmid = {40408612},
issn = {1932-6203},
mesh = {*Crops, Agricultural/microbiology/growth & development ; *Plant Diseases/prevention & control ; Neural Networks, Computer ; Deep Learning ; Cloud Computing ; Agriculture/methods ; Humans ; Internet of Things ; },
abstract = {The agricultural sector faces critical challenges, including significant crop losses due to undetected plant diseases, inefficient monitoring systems, and delays in disease management, all of which threaten food security worldwide. Traditional approaches to disease detection are often labor-intensive, time-consuming, and prone to errors, making early intervention difficult. This paper proposes an intelligent framework for automated crop health monitoring and early disease detection to overcome these limitations. The system leverages deep learning, cloud computing, embedded devices, and the Internet of Things (IoT) to provide real-time insights into plant health over large agricultural areas. The primary goal is to enhance early detection accuracy and recommend effective disease management strategies, including crop rotation and targeted treatment. Additionally, environmental parameters such as temperature, humidity, and water levels are continuously monitored to aid in informed decision-making. The proposed framework incorporates Convolutional Neural Network (CNN), MobileNet-1, MobileNet-2, Residual Network (ResNet-50), and ResNet-50 with InceptionV3 to ensure precise disease identification and improved agricultural productivity.},
}
MeSH Terms:
*Crops, Agricultural/microbiology/growth & development
*Plant Diseases/prevention & control
Neural Networks, Computer
Deep Learning
Cloud Computing
Agriculture/methods
Humans
Internet of Things
RevDate: 2025-05-23
A Joint Geometric Topological Analysis Network (JGTA-Net) for Detecting and Segmenting Intracranial Aneurysms.
IEEE transactions on bio-medical engineering, PP: [Epub ahead of print].
OBJECTIVE: The rupture of intracranial aneurysms leads to subarachnoid hemorrhage. Detecting intracranial aneurysms before rupture and stratifying their risk is critical in guiding preventive measures. Point-based aneurysm segmentation provides a plausible pathway for automatic aneurysm detection. However, challenges in existing segmentation methods motivate the proposed work.
METHODS: We propose a dual-branch network model (JGTANet) for accurately detecting aneurysms. JGTA-Net employs a hierarchical geometric feature learning framework to extract local contextual geometric information from the point cloud representing intracranial vessels. Building on this, we integrated a topological analysis module that leverages persistent homology to capture complex structural details of 3D objects, filtering out short-lived noise to enhance the overall topological invariance of the aneurysms. Moreover, we refined the segmentation output by quantitatively computing multi-scale topological features and introducing a topological loss function to preserve the correct topological relationships better. Finally, we designed a feature fusion module that integrates information extracted from different modalities and receptive fields, enabling effective multi-source information fusion.
RESULTS: Experiments conducted on the IntrA dataset demonstrated the superiority of the proposed network model, yielding state-of-the-art segmentation results (e.g., Dice and IOU are approximately 0.95 and 0.90, respectively). Our IntrA results were confirmed by testing on two independent datasets: One with comparable lengths to the IntrA dataset and the other with longer and more complex vessels.
CONCLUSIONS: The proposed JGTA-Net model outperformed other recently published methods (> 10% in DSC and IOU), showing our model's strong generalization capabilities.
SIGNIFICANCE: The proposed work can be integrated into a large deep-learning-based system for assessing brain aneurysms in the clinical workflow.
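For reference, the Dice and IoU scores reported above can be computed from binary masks as follows (a generic sketch, not the authors' evaluation code):

# Generic Dice and IoU metrics on binary point/voxel label masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

p = np.array([1, 1, 0, 1, 0])
t = np.array([1, 0, 0, 1, 1])
print(round(dice(p, t), 3), round(iou(p, t), 3))   # 0.667 0.5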
Additional Links: PMID-40408207
@article {pmid40408207,
year = {2025},
author = {Zhang, X and Lyu, Z and Wang, Y and Peng, B and Jiang, J},
title = {A Joint Geometric Topological Analysis Network (JGTA-Net) for Detecting and Segmenting Intracranial Aneurysms.},
journal = {IEEE transactions on bio-medical engineering},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TBME.2025.3572837},
pmid = {40408207},
issn = {1558-2531},
abstract = {OBJECTIVE: The rupture of intracranial aneurysms leads to subarachnoid hemorrhage. Detecting intracranial aneurysms before rupture and stratifying their risk is critical in guiding preventive measures. Point-based aneurysm segmentation provides a plausible pathway for automatic aneurysm detection. However, challenges in existing segmentation methods motivate the proposed work.
METHODS: We propose a dual-branch network model (JGTANet) for accurately detecting aneurysms. JGTA-Net employs a hierarchical geometric feature learning framework to extract local contextual geometric information from the point cloud representing intracranial vessels. Building on this, we integrated a topological analysis module that leverages persistent homology to capture complex structural details of 3D objects, filtering out short-lived noise to enhance the overall topological invariance of the aneurysms. Moreover, we refined the segmentation output by quantitatively computing multi-scale topological features and introducing a topological loss function to preserve the correct topological relationships better. Finally, we designed a feature fusion module that integrates information extracted from different modalities and receptive fields, enabling effective multi-source information fusion.
RESULTS: Experiments conducted on the IntrA dataset demonstrated the superiority of the proposed network model, yielding state-of-the-art segmentation results (e.g., Dice and IOU are approximately 0.95 and 0.90, respectively). Our IntrA results were confirmed by testing on two independent datasets: One with comparable lengths to the IntrA dataset and the other with longer and more complex vessels.
CONCLUSIONS: The proposed JGTA-Net model outperformed other recently published methods (> 10% in DSC and IOU), showing our model's strong generalization capabilities.
SIGNIFICANCE: The proposed work can be integrated into a large deep-learning-based system for assessing brain aneurysms in the clinical workflow.},
}
RevDate: 2025-05-22
Exposome-Scale Investigation of Cl-/Br-Containing Chemicals Using High-Resolution Mass Spectrometry, Multistage Machine Learning, and Cloud Computing.
Analytical chemistry [Epub ahead of print].
Over 70% of organic halogens, representing chlorine- and bromine-containing disinfection byproducts (Cl-/Br-DBPs), remain unidentified after 50 years of research. This work introduces a streamlined and cloud-based exposomics workflow that integrates high-resolution mass spectrometry (HRMS) analysis, multistage machine learning, and cloud computing for efficient analysis and characterization of Cl-/Br-DBPs. In particular, the multistage machine learning structure employs progressively different heavy isotopic peaks at each layer and captures the distinct isotopic characteristics of nonhalogenated compounds and Cl-/Br-compounds at different halogenation levels. This approach enables the recognition of 22 types of Cl-/Br-compounds with up to 6 Br and 8 Cl atoms. To address the data imbalance among different classes, particularly the limited number of heavily chlorinated and brominated compounds, data perturbation is performed to generate hypothetical/synthetic molecular formulas containing multiple Cl and Br atoms, facilitating data augmentation. To further benefit the environmental chemistry community with limited computational experience and hardware access, the above innovations are incorporated into HalogenFinder (http://www.halogenfinder.com/), a user-friendly, web-based platform for Cl-/Br-compound characterization, with statistical analysis support via MetaboAnalyst. In benchmarking, HalogenFinder outperformed two established tools, achieving a higher recognition rate for 277 authentic Cl-/Br-compounds and uniquely identifying the number of Cl/Br atoms. In laboratory tests of DBP mixtures, it identified 72 Cl-/Br-DBPs with proposed structures, of which eight were confirmed with chemical standards. A retrospective analysis of 2022 finished water HRMS data revealed insightful temporal trends in Cl-DBP features. These results demonstrate HalogenFinder's effectiveness in advancing Cl-/Br-compound identification for environmental science and exposomics.
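The isotopic characteristics mentioned above arise because the relative M, M+2, M+4, ... abundances of a Cl-/Br-compound follow a binomial distribution over the heavy-isotope fraction; a small sketch of that pattern (illustrative only, not HalogenFinder's model features):

# Relative isotopologue intensities for n Cl or Br atoms via the binomial law.
from math import comb

ABUNDANCE = {"Cl": 0.2423, "Br": 0.4931}   # natural fraction of 37Cl / 81Br

def isotope_pattern(element: str, n_atoms: int):
    p = ABUNDANCE[element]
    peaks = [comb(n_atoms, k) * (1 - p) ** (n_atoms - k) * p ** k
             for k in range(n_atoms + 1)]
    base = max(peaks)
    return [round(x / base, 3) for x in peaks]   # normalised to the tallest peak

print(isotope_pattern("Cl", 2))   # ~[1.0, 0.64, 0.102] -> M, M+2, M+4
print(isotope_pattern("Br", 1))   # ~[1.0, 0.973]       -> M, M+2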
Additional Links: PMID-40401576
@article {pmid40401576,
year = {2025},
author = {Zhao, T and Low, B and Shen, Q and Wang, Y and Hidalgo Delgado, D and Chau, KNM and Pang, Z and Li, X and Xia, J and Li, XF and Huan, T},
title = {Exposome-Scale Investigation of Cl-/Br-Containing Chemicals Using High-Resolution Mass Spectrometry, Multistage Machine Learning, and Cloud Computing.},
journal = {Analytical chemistry},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.analchem.5c00503},
pmid = {40401576},
issn = {1520-6882},
abstract = {Over 70% of organic halogens, representing chlorine- and bromine-containing disinfection byproducts (Cl-/Br-DBPs), remain unidentified after 50 years of research. This work introduces a streamlined and cloud-based exposomics workflow that integrates high-resolution mass spectrometry (HRMS) analysis, multistage machine learning, and cloud computing for efficient analysis and characterization of Cl-/Br-DBPs. In particular, the multistage machine learning structure employs progressively different heavy isotopic peaks at each layer and capture the distinct isotopic characteristics of nonhalogenated compounds and Cl-/Br-compounds at different halogenation levels. This innovative approach enables the recognition of 22 types of Cl-/Br-compounds with up to 6 Br and 8 Cl atoms. To address the data imbalance among different classes, particularly the limited number of heavily chlorinated and brominated compounds, data perturbation is performed to generate hypothetical/synthetic molecular formulas containing multiple Cl and Br atoms, facilitating data augmentation. To further benefit the environmental chemistry community with limited computational experience and hardware access, above innovations are incorporated into HalogenFinder (http://www.halogenfinder.com/), a user-friendly, web-based platform for Cl-/Br-compound characterization, with statistical analysis support via MetaboAnalyst. In the benchmarking, HalogenFinder outperformed two established tools, achieving a higher recognition rate for 277 authentic Cl-/Br-compounds and uniquely identifying the number of Cl/Br atoms. In laboratory tests of DBP mixtures, it identified 72 Cl-/Br-DBPs with proposed structures, of which eight were confirmed with chemical standards. A retrospective analysis of 2022 finished water HRMS data revealed insightful temporal trends in Cl-DBP features. These results demonstrate HalogenFinder's effectiveness in advancing Cl-/Br-compound identification for environmental science and exposomics.},
}
RevDate: 2025-05-19
Persistence of Backdoor-Based Watermarks for Neural Networks: A Comprehensive Evaluation.
IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].
Deep neural networks (DNNs) have gained considerable traction in recent years due to the unparalleled results they have achieved. However, training such sophisticated models is resource-intensive, leading many to consider DNNs the intellectual property (IP) of their owners. In this era of cloud computing, high-performance DNNs are often deployed over the Internet so that people can access them publicly. As such, DNN watermarking schemes, especially backdoor-based watermarks, have been actively developed in recent years to preserve proprietary rights. Nonetheless, much uncertainty remains about the robustness of existing backdoor watermark schemes against both adversarial attacks and unintended operations such as fine-tuning of neural network models, in part because no complete guarantee of robustness can be given for backdoor-based watermarks. In this article, we extensively evaluate the persistence of recent backdoor-based watermarks within neural networks under fine-tuning, and we propose a novel data-driven approach to restore the watermark after fine-tuning without exposing the trigger set. Our empirical results show that by solely introducing training data after fine-tuning, the watermark can be restored if model parameters do not shift dramatically during fine-tuning. Depending on the types of trigger samples used, trigger accuracy can be reinstated to up to 100%. This study further explores how the restoration process works using loss landscape visualization, as well as the idea of introducing training data in the fine-tuning stage to alleviate watermark vanishing.
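A minimal sketch of the basic measurement underlying such evaluations, trigger-set accuracy before and after fine-tuning; the model and data objects are placeholders, and this is not the authors' code:

# Hypothetical sketch: measure watermark (trigger-set) accuracy.
import numpy as np

def trigger_accuracy(model, trigger_inputs, trigger_labels) -> float:
    """Fraction of trigger samples still classified with the watermark labels."""
    preds = np.argmax(model.predict(trigger_inputs), axis=1)
    return float(np.mean(preds == trigger_labels))

# usage outline (all objects are placeholders):
# acc_before = trigger_accuracy(model, x_trigger, y_trigger)
# fine_tune(model, new_data)                    # step suspected of erasing the watermark
# acc_after = trigger_accuracy(model, x_trigger, y_trigger)
# restore_with_training_data(model, train_data) # data-driven restoration idea
# acc_restored = trigger_accuracy(model, x_trigger, y_trigger)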
Additional Links: PMID-40388282
@article {pmid40388282,
year = {2025},
author = {Ngo, AT and Heng, CS and Chattopadhyay, N and Chattopadhyay, A},
title = {Persistence of Backdoor-Based Watermarks for Neural Networks: A Comprehensive Evaluation.},
journal = {IEEE transactions on neural networks and learning systems},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TNNLS.2025.3565170},
pmid = {40388282},
issn = {2162-2388},
abstract = {Deep neural networks (DNNs) have gained considerable traction in recent years due to the unparalleled results they gathered. However, the cost behind training such sophisticated models is resource-intensive, resulting in many to consider DNNs to be intellectual property (IP) to model owners. In this era of cloud computing, high-performance DNNs are often deployed all over the Internet so that people can access them publicly. As such, DNN watermarking schemes, especially backdoor-based watermarks, have been actively developed in recent years to preserve proprietary rights. Nonetheless, there lies much uncertainty on the robustness of existing backdoor watermark schemes, toward both adversarial attacks and unintended means such as fine-tuning neural network models. One reason for this is that no complete guarantee of robustness can be assured in the context of backdoor-based watermark. In this article, we extensively evaluate the persistence of recent backdoor-based watermarks within neural networks in the scenario of fine-tuning, and we propose/develop a novel data-driven idea to restore watermark after fine-tuning without exposing the trigger set. Our empirical results show that by solely introducing training data after fine-tuning, the watermark can be restored if model parameters do not shift dramatically during fine-tuning. Depending on the types of trigger samples used, trigger accuracy can be reinstated to up to 100%. This study further explores how the restoration process works using loss landscape visualization, as well as the idea of introducing training data in the fine-tuning stage to alleviate watermark vanishing.},
}
RevDate: 2025-05-19
Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides.
Nano-micro letters, 17(1):261.
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (< 10 ns), and minimal energy consumption (< 0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Additional Links: PMID-40387963
@article {pmid40387963,
year = {2025},
author = {Ren, Z and Zhang, Z and Zhuge, Y and Xiao, Z and Xu, S and Zhou, J and Lee, C},
title = {Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides.},
journal = {Nano-micro letters},
volume = {17},
number = {1},
pages = {261},
pmid = {40387963},
issn = {2150-5551},
abstract = {The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (< 10 ns), and minimal energy consumption (< 0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.},
}
RevDate: 2025-05-19
CmpDate: 2025-05-19
AD Workbench: Transforming Alzheimer's research with secure, global, and collaborative data sharing and analysis.
Alzheimer's & dementia : the journal of the Alzheimer's Association, 21(5):e70278.
INTRODUCTION: The Alzheimer's Disease Data Initiative (AD Data Initiative) is a global coalition of partners accelerating scientific discoveries in Alzheimer's disease (AD) and related dementias (ADRD) by breaking down data silos, eliminating barriers to research, and fostering collaboration among scientists studying these issues.
METHODS: The flagship product of the AD Data Initiative technical suite is AD Workbench, a secure, cloud-based environment that enables global access, analysis, and sharing of datasets, as well as interoperability with other key data platforms.
RESULTS: As of April 7, 2025, AD Workbench has 6178 registered users from 115 countries, including 886 users from 60 low- and middle-income countries. On average, more than 500 users, including over 100 new users, log in each month to discover data and conduct integrative analyses.
DISCUSSION: By prioritizing interoperability and robust security within a collaborative framework, AD Workbench is well positioned to drive advancements in AD treatments and diagnostic tools.
HIGHLIGHTS: Data sharing; Interoperability; Cloud-based analytics; Collaborative workspace.
Additional Links: PMID-40387289
@article {pmid40387289,
year = {2025},
author = {McHugh, CP and Clement, MHS and Phatak, M},
title = {AD Workbench: Transforming Alzheimer's research with secure, global, and collaborative data sharing and analysis.},
journal = {Alzheimer's & dementia : the journal of the Alzheimer's Association},
volume = {21},
number = {5},
pages = {e70278},
doi = {10.1002/alz.70278},
pmid = {40387289},
issn = {1552-5279},
mesh = {*Alzheimer Disease ; Humans ; *Information Dissemination/methods ; Cooperative Behavior ; *Biomedical Research ; *Computer Security ; Cloud Computing ; },
abstract = {INTRODUCTION: The Alzheimer's Disease Data Initiative (AD Data Initiative) is a global coalition of partners accelerating scientific discoveries in Alzheimer's disease (AD) and related dementias (ADRD) by breaking down data silos, eliminating barriers to research, and fostering collaboration among scientists studying these issues.
METHODS: The flagship product of the AD Data Initiative technical suite is AD Workbench, a secure, cloud-based environment that enables global access, analysis, and sharing of datasets, as well as interoperability with other key data platforms.
RESULTS: As of April 7, 2025, AD Workbench has 6178 registered users from 115 countries, including 886 users from 60 low- and middle-income countries. On average, more than 500 users, including over 100 new users, log in each month to discover data and conduct integrative analyses.
DISCUSSION: By prioritizing interoperability and robust security within a collaborative framework, AD Workbench is well positioned to drive advancements in AD treatments and diagnostic tools.
HIGHLIGHTS: Data sharing Interoperability Cloud-based analytics Collaborative workspace.},
}
MeSH Terms:
*Alzheimer Disease
Humans
*Information Dissemination/methods
Cooperative Behavior
*Biomedical Research
*Computer Security
Cloud Computing
RevDate: 2025-05-18
CmpDate: 2025-05-18
RABiTPy: an open-source Python software for rapid, AI-powered bacterial tracking and analysis.
BMC bioinformatics, 26(1):127.
Bacterial tracking is crucial for understanding the mechanisms governing motility, chemotaxis, cell division, biofilm formation, and pathogenesis. Although modern microscopy and computing have enabled the collection of large datasets, many existing tools struggle with big data processing or with accurately detecting, segmenting, and tracking bacteria of various shapes. To address these issues, we developed RABiTPy, an open-source Python software pipeline that integrates traditional and artificial intelligence-based segmentation with tracking tools within a user-friendly framework. RABiTPy runs interactively in Jupyter notebooks and supports numerous image and video formats. Users can select from adaptive, automated thresholding, or AI-based segmentation methods, fine-tuning parameters to fit their needs. The software offers customizable parameters to enhance tracking efficiency, and its streamlined handling of large datasets offers an alternative to existing tracking software by emphasizing usability and modular integration. RABiTPy supports GPU and CPU processing as well as cloud computing. It offers comprehensive spatiotemporal analyses that include trajectories, motile speeds, mean squared displacement, and turning angles, while providing a variety of visualization options. With its scalable and accessible platform, RABiTPy empowers researchers, even those with limited coding experience, to analyze bacterial physiology and behavior more effectively. By reducing technical barriers, this tool has the potential to accelerate discoveries in microbiology.
Additional Links: PMID-40383775
@article {pmid40383775,
year = {2025},
author = {Sen, S and Vairagare, I and Gosai, J and Shrivastava, A},
title = {RABiTPy: an open-source Python software for rapid, AI-powered bacterial tracking and analysis.},
journal = {BMC bioinformatics},
volume = {26},
number = {1},
pages = {127},
pmid = {40383775},
issn = {1471-2105},
support = {R35GM147131/GM/NIGMS NIH HHS/United States ; },
mesh = {*Software ; *Artificial Intelligence ; *Image Processing, Computer-Assisted/methods ; *Bacteria ; },
abstract = {Bacterial tracking is crucial for understanding the mechanisms governing motility, chemotaxis, cell division, biofilm formation, and pathogenesis. Although modern microscopy and computing have enabled the collection of large datasets, many existing tools struggle with big data processing or with accurately detecting, segmenting, and tracking bacteria of various shapes. To address these issues, we developed RABiTPy, an open-source Python software pipeline that integrates traditional and artificial intelligence-based segmentation with tracking tools within a user-friendly framework. RABiTPy runs interactively in Jupyter notebooks and supports numerous image and video formats. Users can select from adaptive, automated thresholding, or AI-based segmentation methods, fine-tuning parameters to fit their needs. The software offers customizable parameters to enhance tracking efficiency, and its streamlined handling of large datasets offers an alternative to existing tracking software by emphasizing usability and modular integration. RABiTPy supports GPU and CPU processing as well as cloud computing. It offers comprehensive spatiotemporal analyses that includes trajectories, motile speeds, mean squared displacement, and turning angles-while providing a variety of visualization options. With its scalable and accessible platform, RABiTPy empowers researchers, even those with limited coding experience, to analyze bacterial physiology and behavior more effectively. By reducing technical barriers, this tool has the potential to accelerate discoveries in microbiology.},
}
MeSH Terms:
*Software
*Artificial Intelligence
*Image Processing, Computer-Assisted/methods
*Bacteria
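As a point of reference for the spatiotemporal metrics named in the RABiTPy abstract above (mean squared displacement and turning angles), the short Python sketch below computes both from a 2-D trajectory array. It is a generic illustration, not RABiTPy's API: the function names, the time-averaged MSD definition, and the synthetic trajectory are assumptions made here for clarity.

import numpy as np

def mean_squared_displacement(traj):
    """Time-averaged MSD for a single 2-D trajectory of shape (T, 2)."""
    T = len(traj)
    msd = np.zeros(T - 1)
    for lag in range(1, T):
        disp = traj[lag:] - traj[:-lag]            # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

def turning_angles(traj):
    """Signed turning angle (radians) between consecutive displacement vectors."""
    v = np.diff(traj, axis=0)                      # step vectors, shape (T-1, 2)
    heading = np.arctan2(v[:, 1], v[:, 0])         # heading of each step
    return np.angle(np.exp(1j * np.diff(heading)))  # wrap differences to (-pi, pi]

# Example: a short synthetic random-walk trajectory
traj = np.cumsum(np.random.randn(100, 2), axis=0)
print(mean_squared_displacement(traj)[:5])
print(turning_angles(traj)[:5])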
RevDate: 2025-05-17
CmpDate: 2025-05-17
A Sustainable Future in Digital Health: Leveraging Environmentally Friendly Architectural Tactics for Sustainable Data Processing.
Studies in health technology and informatics, 327:713-717.
The rapid growth of big data in healthcare necessitates optimising data processing to reduce its environmental impact. This paper proposes a pilot architectural framework to evaluate the sustainability of a Big Healthcare Data (BHD) system using Microservices Architecture (MSA). The goal is to enhance MSA's architectural tactics by incorporating environmentally friendly metrics into healthcare systems. This is achieved by adopting energy and carbon efficiency models, alongside exploring innovative architectural strategies. The framework, based on recent research, manipulates cloud-native system architecture by using a controller to adjust microservice deployment through real-time monitoring and modelling. This approach demonstrates how sustainability-driven metrics can be applied at different abstraction levels to estimate environmental impact from multiple perspectives.
Additional Links: PMID-40380550
@article {pmid40380550,
year = {2025},
author = {Haddad, T and Kumarapeli, P and de Lusignan, S and Barman, S and Khaddaj, S},
title = {A Sustainable Future in Digital Health: Leveraging Environmentally Friendly Architectural Tactics for Sustainable Data Processing.},
journal = {Studies in health technology and informatics},
volume = {327},
number = {},
pages = {713-717},
doi = {10.3233/SHTI250441},
pmid = {40380550},
issn = {1879-8365},
mesh = {*Big Data ; Humans ; Pilot Projects ; Digital Health ; },
abstract = {The rapid growth of big data in healthcare necessitates optimising data processing to reduce its environmental impact. This paper proposes a pilot architectural framework to evaluate the sustainability of a Big Healthcare Data (BHD) system using Microservices Architecture (MSA). The goal is to enhance MSA's architectural tactics by incorporating environmentally friendly metrics into healthcare systems. This is achieved by adopting energy and carbon efficiency models, alongside exploring innovative architectural strategies. The framework, based on recent research, manipulates cloud-native system architecture by using a controller to adjust microservice deployment through real-time monitoring and modelling. This approach demonstrates how sustainability-driven metrics can be applied at different abstraction levels to estimate environmental impact from multiple perspectives.},
}
MeSH Terms:
*Big Data
Humans
Pilot Projects
Digital Health
RevDate: 2025-05-15
CHEst PHysical Examination integrated with UltraSound - Phase (CHEPHEUS1). A survey of Accademia di Ecografia Toracica (AdET).
Multidisciplinary respiratory medicine, 20:.
BACKGROUND: Chest physical exam (CPE) is based on the four pillars of classical semiotics. However, CPE's sensitivity and specificity are low and are affected by operators' skills. The aim of this work was to explore the contribution of chest ultrasound (US) to the traditional CPE.
METHODS: For this purpose, a survey was submitted to US users. They were asked to rate the usefulness of classical semiotics and chest US in evaluating each item of CPE pillars. The study was conducted and described according to the STROBE checklist. The study used the freely available online survey cloud-web application (Google Forms, Google Ireland Ltd, Mountain View, CA, USA).
RESULTS: The results showed a tendency to prefer chest US to palpation and percussion, suggesting a possible future approach based on inspection, auscultation and palpatory ultrasound evaluation.
CONCLUSION: The results of our survey introduce, for the first time, the role of ultrasound as a pillar of physical examination. Our project CHEPHEUS has the aim to study and propose a new way of performing the physical exam in the future.
Additional Links: PMID-40372277
@article {pmid40372277,
year = {2025},
author = {Radovanovic, D and Zanforlin, A and Smargiassi, A and Cinquini, S and Inchingolo, R and Tursi, F and Soldati, G and Carlucci, P},
title = {CHEst PHysical Examination integrated with UltraSound - Phase (CHEPHEUS1). A survey of Accademia di Ecografia Toracica (AdET).},
journal = {Multidisciplinary respiratory medicine},
volume = {20},
number = {},
pages = {},
doi = {10.5826/mrm.2025.1020},
pmid = {40372277},
issn = {1828-695X},
abstract = {BACKGROUND: Chest physical exam (CPE) is based on the four pillars of classical semiotics. However, CPE's sensitivity and specificity are low, and is affected by operators' skills. The aim of this work was to explore the contribution of chest ultrasound (US) to the traditional CPE.
METHODS: For this purpose, a survey was submitted to US users. They were asked to rate the usefulness of classical semiotics and chest US in evaluating each item of CPE pillars. The study was conducted and described according to the STROBE checklist. The study used the freely available online survey cloud-web application (Google Forms, Google Ireland Ltd, Mountain View, CA, USA).
RESULTS: The results showed a tendency to prefer chest US to palpation and percussion, suggesting a possible -future approach based on inspection, auscultation and palpatory ultrasound evaluation.
CONCLUSION: The results of our survey introduce, for the first time, the role of ultrasound as a pillar of physical examination. Our project CHEPHEUS has the aim to study and propose a new way of performing the physical exam in the future.},
}
RevDate: 2025-05-14
Data Privacy in Medical Informatics and Electronic Health Records: A Bibliometric Analysis.
Health care analysis : HCA : journal of health philosophy and policy [Epub ahead of print].
This study aims to evaluate scientific publications on "Medical Informatics" and "Data Privacy" using a bibliometric approach to identify research trends, the most studied topics, and the countries and institutions with the highest publication output. The search was carried out utilizing the WoS Clarivate Analytics tool across SCIE journals. Subsequently, text mining, keyword clustering, and data visualization were applied through the use of VOSviewer and Tableau Desktop software. Between 1975 and 2023, a total of 7,165 articles were published on the topic of data privacy. The number of articles has been increasing each year. The text mining and clustering analysis identified eight main clusters in the literature: (1) Mobile Health/Telemedicine/IOT, (2) Security/Encryption/Authentication, (3) Big Data/AI/Data Science, (4) Anonymization/Digital Phenotyping, (5) Genomics/Biobank, (6) Ethics, (7) Legal Issues, (8) Cloud Computing. On a country basis, the United States was identified as the most active country in this field, producing the most publications and receiving the highest number of citations. China, the United Kingdom, Canada, and Australia also emerged as significant countries. Among these clusters, "Mobile Health/Telemedicine/IOT," "Security/Encryption/Authentication," and "Cloud Computing" technologies stood out as the most prominent and extensively studied topics in the intersection of medical informatics and data privacy.
Additional Links: PMID-40366511
@article {pmid40366511,
year = {2025},
author = {Gulkesen, KH and Sonuvar, ET},
title = {Data Privacy in Medical Informatics and Electronic Health Records: A Bibliometric Analysis.},
journal = {Health care analysis : HCA : journal of health philosophy and policy},
volume = {},
number = {},
pages = {},
pmid = {40366511},
issn = {1573-3394},
abstract = {This study aims to evaluate scientific publications on "Medical Informatics" and "Data Privacy" using a bibliometric approach to identify research trends, the most studied topics, and the countries and institutions with the highest publication output. The search was carried out utilizing the WoS Clarivate Analytics tool across SCIE journals. Subsequently, text mining, keyword clustering, and data visualization were applied through the use of VOSviewer and Tableau Desktop software. Between 1975 and 2023, a total of 7,165 articles were published on the topic of data privacy. The number of articles has been increasing each year. The text mining and clustering analysis identified eight main clusters in the literature: (1) Mobile Health/Telemedicine/IOT, (2) Security/Encryption/Authentication, (3) Big Data/AI/Data Science, (4) Anonymization/Digital Phenotyping, (5) Genomics/Biobank, (6) Ethics, (7) Legal Issues, (8) Cloud Computing. On a country basis, the United States was identified as the most active country in this field, producing the most publications and receiving the highest number of citations. China, the United Kingdom, Canada, and Australia also emerged as significant countries. Among these clusters, "Mobile Health/Telemedicine/IOT," "Security/Encryption/Authentication," and "Cloud Computing" technologies stood out as the most prominent and extensively studied topics in the intersection of medical informatics and data privacy.},
}
RevDate: 2025-05-14
Exploring Smartphone-Based Edge AI Inferences Using Real Testbeds.
Sensors (Basel, Switzerland), 25(9): pii:s25092875.
The increasing availability of lightweight pre-trained models and AI execution frameworks is causing edge AI to become ubiquitous. Particularly, deep learning (DL) models are being used in computer vision (CV) for performing object recognition and image classification tasks in various application domains requiring prompt inferences. Regarding edge AI task execution platforms, some approaches show a strong dependency on cloud resources to complement the computing power offered by local nodes. Other approaches distribute workload horizontally, i.e., by harnessing the power of nearby edge nodes. Many of these efforts experiment with real settings comprising SBC (Single-Board Computer)-like edge nodes only, but few of these consider nomadic hardware such as smartphones. Given the huge popularity of smartphones worldwide and the unlimited scenarios where smartphone clusters could be exploited for providing computing power, this paper sheds some light in answering the following question: Is smartphone-based edge AI a competitive approach for real-time CV inferences? To empirically answer this, we use three pre-trained DL models and eight heterogeneous edge nodes including five low/mid-end smartphones and three SBCs, and compare the performance achieved using workloads from three image stream processing scenarios. Experiments were run with the help of a toolset designed for reproducing battery-driven edge computing tests. We compared latency and energy efficiency achieved by using either several smartphone clusters testbeds or SBCs only. Additionally, for battery-driven settings, we include metrics to measure how workload execution impacts smartphone battery levels. As per the computing capability shown in our experiments, we conclude that edge AI based on smartphone clusters can help in providing valuable resources to contribute to the expansion of edge AI in application scenarios requiring real-time performance.
Additional Links: PMID-40363312
@article {pmid40363312,
year = {2025},
author = {Hirsch, M and Mateos, C and Majchrzak, TA},
title = {Exploring Smartphone-Based Edge AI Inferences Using Real Testbeds.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {9},
pages = {},
doi = {10.3390/s25092875},
pmid = {40363312},
issn = {1424-8220},
support = {PIBAA-28720210101298CO//Centro Científico Tecnológico - Tandil/ ; PIP11220210100138CO//Centro Científico Tecnológico - Tandil/ ; },
abstract = {The increasing availability of lightweight pre-trained models and AI execution frameworks is causing edge AI to become ubiquitous. Particularly, deep learning (DL) models are being used in computer vision (CV) for performing object recognition and image classification tasks in various application domains requiring prompt inferences. Regarding edge AI task execution platforms, some approaches show a strong dependency on cloud resources to complement the computing power offered by local nodes. Other approaches distribute workload horizontally, i.e., by harnessing the power of nearby edge nodes. Many of these efforts experiment with real settings comprising SBC (Single-Board Computer)-like edge nodes only, but few of these consider nomadic hardware such as smartphones. Given the huge popularity of smartphones worldwide and the unlimited scenarios where smartphone clusters could be exploited for providing computing power, this paper sheds some light in answering the following question: Is smartphone-based edge AI a competitive approach for real-time CV inferences? To empirically answer this, we use three pre-trained DL models and eight heterogeneous edge nodes including five low/mid-end smartphones and three SBCs, and compare the performance achieved using workloads from three image stream processing scenarios. Experiments were run with the help of a toolset designed for reproducing battery-driven edge computing tests. We compared latency and energy efficiency achieved by using either several smartphone clusters testbeds or SBCs only. Additionally, for battery-driven settings, we include metrics to measure how workload execution impacts smartphone battery levels. As per the computing capability shown in our experiments, we conclude that edge AI based on smartphone clusters can help in providing valuable resources to contribute to the expansion of edge AI in application scenarios requiring real-time performance.},
}
RevDate: 2025-05-14
Enhanced Cloud Detection Using a Unified Multimodal Data Fusion Approach in Remote Images.
Sensors (Basel, Switzerland), 25(9): pii:s25092684.
Aiming at the complexity of network architecture design and the low computational efficiency caused by variations in the number of modalities in multimodal cloud detection tasks, this paper proposes an efficient and unified multimodal cloud detection model, M2Cloud, which can process any number of modal data. The core innovation of M2Cloud lies in its novel multimodal data fusion method. This method avoids architectural changes for new modalities, thereby significantly reducing incremental computing costs and enhancing overall efficiency. Furthermore, the designed multimodal data fusion module possesses strong generalization capabilities and can be seamlessly integrated into other network architectures in a plug-and-play manner, greatly enhancing the module's practicality and flexibility. To address the challenge of unified multimodal feature extraction, we adopt two key strategies: (1) constructing feature extraction modules with shared but independent weights for each modality to preserve the inherent features of each modality; (2) utilizing cosine similarity to adaptively learn complementary features between different modalities, thereby reducing redundant information. Experimental results demonstrate that M2Cloud achieves or even surpasses the state-of-the-art (SOTA) performance on the public multimodal datasets WHUS2-CD and WHUS2-CD+, verifying its effectiveness in the unified multimodal cloud detection task. The research presented in this paper offers new insights and technical support for the field of multimodal data fusion and cloud detection, and holds significant theoretical and practical value.
Additional Links: PMID-40363125
@article {pmid40363125,
year = {2025},
author = {Mo, Y and Chen, P and Zhou, W and Chen, W},
title = {Enhanced Cloud Detection Using a Unified Multimodal Data Fusion Approach in Remote Images.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {9},
pages = {},
doi = {10.3390/s25092684},
pmid = {40363125},
issn = {1424-8220},
support = {62261038//National Natural Science Foundation of China/ ; },
abstract = {Aiming at the complexity of network architecture design and the low computational efficiency caused by variations in the number of modalities in multimodal cloud detection tasks, this paper proposes an efficient and unified multimodal cloud detection model, M2Cloud, which can process any number of modal data. The core innovation of M2Cloud lies in its novel multimodal data fusion method. This method avoids architectural changes for new modalities, thereby significantly reducing incremental computing costs and enhancing overall efficiency. Furthermore, the designed multimodal data fusion module possesses strong generalization capabilities and can be seamlessly integrated into other network architectures in a plug-and-play manner, greatly enhancing the module's practicality and flexibility. To address the challenge of unified multimodal feature extraction, we adopt two key strategies: (1) constructing feature extraction modules with shared but independent weights for each modality to preserve the inherent features of each modality; (2) utilizing cosine similarity to adaptively learn complementary features between different modalities, thereby reducing redundant information. Experimental results demonstrate that M2Cloud achieves or even surpasses the state-of-the-art (SOTA) performance on the public multimodal datasets WHUS2-CD and WHUS2-CD+, verifying its effectiveness in the unified multimodal cloud detection task. The research presented in this paper offers new insights and technical support for the field of multimodal data fusion and cloud detection, and holds significant theoretical and practical value.},
}
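The M2Cloud abstract above describes using cosine similarity to adaptively learn complementary features between modalities while reducing redundancy. The Python sketch below shows one minimal way such a rule could look: pairs of feature vectors that are highly similar (redundant) contribute less extra weight during fusion. This is an illustrative toy, not the paper's module; the weighting rule, array shapes, and function names are assumptions made here.

import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Cosine similarity between matching feature vectors of two modalities."""
    num = np.sum(a * b, axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return num / den

def fuse(feat_a, feat_b):
    """Fuse two modality feature maps; the more similar (redundant) a pair of
    vectors is, the less additional weight the second modality contributes."""
    sim = cosine_similarity(feat_a, feat_b)        # shape (N,)
    w = 1.0 - np.abs(sim)                          # complementarity weight in [0, 1]
    return feat_a + w[:, None] * feat_b            # keep modality A, add what B adds

# Example: 4 spatial positions with 8-dimensional features per modality
a = np.random.randn(4, 8)
b = np.random.randn(4, 8)
print(fuse(a, b).shape)    # (4, 8)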
RevDate: 2025-05-14
Temporal Decay Loss for Adaptive Log Anomaly Detection in Cloud Environments.
Sensors (Basel, Switzerland), 25(9): pii:s25092649.
Log anomaly detection in cloud computing environments is essential for maintaining system reliability and security. While sequence modeling architectures such as LSTMs and Transformers have been widely employed to capture temporal dependencies in log messages, their effectiveness deteriorates in zero-shot transfer scenarios due to distributional shifts in log structures, terminology, and event frequencies, as well as minimal token overlap across datasets. To address these challenges, we propose an effective detection approach integrating a domain-specific pre-trained language model (PLM) fine-tuned on cybersecurity-adjacent data with a novel loss function, Loss with Decaying Factor (LDF). LDF introduces an exponential time decay mechanism into the training objective, ensuring a dynamic balance between historical context and real-time relevance. Unlike traditional sequence models that often overemphasize outdated information and impose high computational overhead, LDF constrains the training process by dynamically weighing log messages based on their temporal proximity, thereby aligning with the rapidly evolving nature of cloud computing environments. Additionally, the domain-specific PLM mitigates semantic discrepancies by improving the representation of log data across heterogeneous datasets. Extensive empirical evaluations on two supercomputing log datasets demonstrate that this approach substantially enhances cross-dataset anomaly detection performance. The main contributions of this study include: (1) the introduction of a Loss with Decaying Factor (LDF) to dynamically balance historical context with real-time relevance; and (2) the integration of a domain-specific PLM for enhancing generalization in zero-shot log anomaly detection across heterogeneous cloud environments.
Additional Links: PMID-40363089
@article {pmid40363089,
year = {2025},
author = {Jilcha, LA and Kim, DH and Kwak, J},
title = {Temporal Decay Loss for Adaptive Log Anomaly Detection in Cloud Environments.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {9},
pages = {},
doi = {10.3390/s25092649},
pmid = {40363089},
issn = {1424-8220},
support = {NRF: No. 2021R1A2C2011391 and IITP: No.2024-00400302//National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT), and Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT)/ ; },
abstract = {Log anomaly detection in cloud computing environments is essential for maintaining system reliability and security. While sequence modeling architectures such as LSTMs and Transformers have been widely employed to capture temporal dependencies in log messages, their effectiveness deteriorates in zero-shot transfer scenarios due to distributional shifts in log structures, terminology, and event frequencies, as well as minimal token overlap across datasets. To address these challenges, we propose an effective detection approach integrating a domain-specific pre-trained language model (PLM) fine-tuned on cybersecurity-adjacent data with a novel loss function, Loss with Decaying Factor (LDF). LDF introduces an exponential time decay mechanism into the training objective, ensuring a dynamic balance between historical context and real-time relevance. Unlike traditional sequence models that often overemphasize outdated information and impose high computational overhead, LDF constrains the training process by dynamically weighing log messages based on their temporal proximity, thereby aligning with the rapidly evolving nature of cloud computing environments. Additionally, the domain-specific PLM mitigates semantic discrepancies by improving the representation of log data across heterogeneous datasets. Extensive empirical evaluations on two supercomputing log datasets demonstrate that this approach substantially enhances cross-dataset anomaly detection performance. The main contributions of this study include: (1) the introduction of a Loss with Decaying Factor (LDF) to dynamically balance historical context with real-time relevance; and (2) the integration of a domain-specific PLM for enhancing generalization in zero-shot log anomaly detection across heterogeneous cloud environments.},
}
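The Loss with Decaying Factor (LDF) described above weights log messages by temporal proximity through an exponential time decay in the training objective. The abstract does not give the exact functional form, so the PyTorch sketch below is only a plausible minimal version: the decay rate, the definition of a message's "age", and the weighted-mean normalization are assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def time_decayed_loss(logits, labels, ages, decay_rate=0.1):
    """Cross-entropy in which each log message's contribution decays
    exponentially with its age (time elapsed since the most recent event).

    logits : (N, C) model outputs
    labels : (N,) integer class labels
    ages   : (N,) non-negative age of each log message (e.g., seconds)
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")   # (N,)
    weights = torch.exp(-decay_rate * ages)                          # newer => weight near 1
    return (weights * per_sample).sum() / weights.sum()              # weighted mean

# Example with random data: 16 log messages, 2 classes (normal / anomalous)
logits = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
ages = torch.rand(16) * 100.0
print(time_decayed_loss(logits, labels, ages))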
RevDate: 2025-05-13
CmpDate: 2025-05-14
Dual level dengue diagnosis using lightweight multilayer perceptron with XAI in fog computing environment and rule based inference.
Scientific reports, 15(1):16548.
Over the last fifty years, arboviral infections have made an unparalleled contribution to worldwide disability and morbidity. Globalization, population growth, and unplanned urbanization are the main causes. Dengue is regarded as the most significant arboviral illness among them due to its prior dominance in growth. The dengue virus is mostly transmitted to humans by Aedes mosquitoes. The human body infected with dengue virus (DenV) will experience certain adverse impacts. To keep the disease under control, some of the preventative measures implemented by different countries need to be updated. Manual diagnosis is typically employed, and the accuracy of the diagnosis is assessed based on the experience of the healthcare professionals. Because there are so many patients during an outbreak, incompetence also happens. Remote monitoring and massive data storage are required. Though cloud computing is one of the solutions, it has a significant latency, despite its potential for remote monitoring and storage. Also, the diagnosis should be made as quickly as possible. The aforementioned issue has been resolved with fog computing, which significantly lowers latency and facilitates remote diagnosis. This study especially focuses on incorporating machine learning and deep learning techniques in the fog computing environment to leverage the overall diagnostic efficiency of dengue by promoting remote diagnosis and speedy treatment. A dual-level dengue diagnosis framework has been proposed in this study. Level-1 diagnosis is based on the symptoms of the patients, which are sent from the edge layer to the fog. Level-1 diagnosis is done in the fog to manage the storage and computation issues. An optimized and normalized lightweight MLP has been proposed along with preprocessing and feature reduction techniques in this study for the Level-1 Diagnosis in the fog computing environment. Pearson Correlation coefficient has been calculated between independent and target features to aid in feature reduction. Techniques like K-fold cross-validation, batch normalization, and grid search optimization have been used for increasing the efficiency. A variety of metrics have been computed to assess the effectiveness of the model. Since the suggested model is a "black box," explainable artificial intelligence (XAI) tools such as SHAP and LIME have been used to help explain its predictions. An exceptional accuracy of 92% is attained with the small dataset using the proposed model. The fog layer sends the list of probable cases to the edge layer. Also, a precision of 100% and an F1 score of 90% have been attained using the proposed model. The list of probable cases is sent from the fog layer to the edge layer, where Level-2 Diagnosis is carried out. Level-2 diagnosis is based on the serological test report of the suspected patients of the Level-1 diagnosis. Level-2 diagnosis is done at the edge using the rule-based inference method. This study incorporates dual-level diagnosis, which is not seen in recent studies. The majority of investigations end at Level 1. However, this study minimizes incorrect treatment and fatality rates by using dual-level diagnosis and assisting in confirmation of the disease.
Additional Links: PMID-40360639
@article {pmid40360639,
year = {2025},
author = {R, D and T S, PK},
title = {Dual level dengue diagnosis using lightweight multilayer perceptron with XAI in fog computing environment and rule based inference.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {16548},
pmid = {40360639},
issn = {2045-2322},
mesh = {*Dengue/diagnosis ; Humans ; *Cloud Computing ; Dengue Virus ; *Neural Networks, Computer ; Machine Learning ; Deep Learning ; Algorithms ; Multilayer Perceptrons ; },
abstract = {Over the last fifty years, arboviral infections have made an unparalleled contribution to worldwide disability and morbidity. Globalization, population growth, and unplanned urbanization are the main causes. Dengue is regarded as the most significant arboviral illness among them due to its prior dominance in growth. The dengue virus is mostly transmitted to humans by Aedes mosquitoes. The human body infected with dengue virus (DenV) will experience certain adverse impacts. To keep the disease under control, some of the preventative measures implemented by different countries need to be updated. Manual diagnosis is typically employed, and the accuracy of the diagnosis is assessed based on the experience of the healthcare professionals. Because there are so many patients during an outbreak, incompetence also happens. Remote monitoring and massive data storage are required. Though cloud computing is one of the solutions, it has a significant latency, despite its potential for remote monitoring and storage. Also, the diagnosis should be made as quickly as possible. The aforementioned issue has been resolved with fog computing, which significantly lowers latency and facilitates remote diagnosis. This study especially focuses on incorporating machine learning and deep learning techniques in the fog computing environment to leverage the overall diagnostic efficiency of dengue by promoting remote diagnosis and speedy treatment. A dual-level dengue diagnosis framework has been proposed in this study. Level-1 diagnosis is based on the symptoms of the patients, which are sent from the edge layer to the fog. Level-1 diagnosis is done in the fog to manage the storage and computation issues. An optimized and normalized lightweight MLP has been proposed along with preprocessing and feature reduction techniques in this study for the Level-1 Diagnosis in the fog computing environment. Pearson Correlation coefficient has been calculated between independent and target features to aid in feature reduction. Techniques like K-fold cross-validation, batch normalization, and grid search optimization have been used for increasing the efficiency. A variety of metrics have been computed to assess the effectiveness of the model. Since the suggested model is a "black box," explainable artificial intelligence (XAI) tools such as SHAP and LIME have been used to help explain its predictions. An exceptional accuracy of 92% is attained with the small dataset using the proposed model. The fog layer sends the list of probable cases to the edge layer. Also, a precision of 100% and an F1 score of 90% have been attained using the proposed model. The list of probable cases is sent from the fog layer to the edge layer, where Level-2 Diagnosis is carried out. Level-2 diagnosis is based on the serological test report of the suspected patients of the Level-1 diagnosis. Level-2 diagnosis is done at the edge using the rule-based inference method. This study incorporates dual-level diagnosis, which is not seen in recent studies. The majority of investigations end at Level 1. However, this study minimizes incorrect treatment and fatality rates by using dual-level diagnosis and assisting in confirmation of the disease.},
}
MeSH Terms:
*Dengue/diagnosis
Humans
*Cloud Computing
Dengue Virus
*Neural Networks, Computer
Machine Learning
Deep Learning
Algorithms
Multilayer Perceptrons
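The dengue framework above screens features by computing the Pearson correlation coefficient between each independent feature and the target before training the lightweight MLP. A minimal pandas/numpy sketch of that screening step is shown below; the 0.2 threshold and the column names are hypothetical and are not taken from the paper.

import numpy as np
import pandas as pd

def select_by_pearson(df, target, threshold=0.2):
    """Keep features whose absolute Pearson correlation with the target
    meets the threshold. Assumes numeric, already-encoded columns."""
    corr = df.drop(columns=[target]).corrwith(df[target], method="pearson")
    keep = corr[corr.abs() >= threshold].index.tolist()
    return keep, corr

# Example with synthetic symptom data (hypothetical column names)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fever_days": rng.integers(0, 10, 200),
    "platelet_count": rng.normal(200, 50, 200),
    "age": rng.integers(1, 80, 200),
})
df["dengue"] = (df["fever_days"] > 5).astype(int)   # toy target for illustration
keep, corr = select_by_pearson(df, target="dengue", threshold=0.2)
print(keep)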
RevDate: 2025-05-13
CmpDate: 2025-05-13
A novel methodological approach to SaaS churn prediction using whale optimization algorithm.
PloS one, 20(5):e0319998 pii:PONE-D-24-24312.
Customer churn is a critical concern in the Software as a Service (SaaS) sector, potentially impacting long-term growth within the cloud computing industry. The scarcity of research on customer churn models in SaaS, particularly regarding diverse feature selection methods and predictive algorithms, highlights a significant gap. Addressing this would enhance academic discourse and provide essential insights for managerial decision-making. This study introduces a novel approach to SaaS churn prediction using the Whale Optimization Algorithm (WOA) for feature selection. Results show that WOA-reduced datasets improve processing efficiency and outperform full-variable datasets in predictive performance. The study encompasses a range of prediction techniques evaluated on three distinct datasets derived from over 1,000 users of a multinational SaaS company: the WOA-reduced dataset, the full-variable dataset, and the chi-squared-derived dataset. These three datasets were examined with the techniques most used in the literature (k-nearest neighbor, Decision Trees, Naïve Bayes, Random Forests, and Neural Networks), and performance metrics such as Area Under Curve, Accuracy, Precision, Recall, and F1 Score were used to measure classification success. The results demonstrate that the WOA-reduced dataset outperformed the full-variable and chi-squared-derived datasets regarding performance metrics.
Additional Links: PMID-40359310
@article {pmid40359310,
year = {2025},
author = {Kotan, M and Faruk Seymen, Ö and Çallı, L and Kasım, S and Çarklı Yavuz, B and Över Özçelik, T},
title = {A novel methodological approach to SaaS churn prediction using whale optimization algorithm.},
journal = {PloS one},
volume = {20},
number = {5},
pages = {e0319998},
doi = {10.1371/journal.pone.0319998},
pmid = {40359310},
issn = {1932-6203},
mesh = {*Algorithms ; *Software ; *Cloud Computing ; Bayes Theorem ; Neural Networks, Computer ; Decision Trees ; Whales ; },
abstract = {Customer churn is a critical concern in the Software as a Service (SaaS) sector, potentially impacting long-term growth within the cloud computing industry. The scarcity of research on customer churn models in SaaS, particularly regarding diverse feature selection methods and predictive algorithms, highlights a significant gap. Addressing this would enhance academic discourse and provide essential insights for managerial decision-making. This study introduces a novel approach to SaaS churn prediction using the Whale Optimization Algorithm (WOA) for feature selection. Results show that WOA-reduced datasets improve processing efficiency and outperform full-variable datasets in predictive performance. The study encompasses a range of prediction techniques with three distinct datasets evaluated derived from over 1,000 users of a multinational SaaS company: the WOA-reduced dataset, the full-variable dataset, and the chi-squared-derived dataset. These three datasets were examined with the most used in literature, k-nearest neighbor, Decision Trees, Naïve Bayes, Random Forests, and Neural Network techniques, and the performance metrics such as Area Under Curve, Accuracy, Precision, Recall, and F1 Score were used as classification success. The results demonstrate that the WOA-reduced dataset outperformed the full-variable and chi-squared-derived datasets regarding performance metrics.},
}
MeSH Terms:
*Algorithms
*Software
*Cloud Computing
Bayes Theorem
Neural Networks, Computer
Decision Trees
Whales
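For readers unfamiliar with WOA-based feature selection, the Python sketch below implements a generic binary Whale Optimization Algorithm wrapper: continuous whale positions are updated with the standard encircling, exploration, and spiral rules, mapped to feature masks with a sigmoid transfer, and scored by a user-supplied fitness function. The transfer function, the toy fitness, and the subset-size penalty are assumptions for illustration and are not the authors' exact configuration.

import numpy as np

def binary_woa_feature_selection(fitness, n_features, n_whales=20, n_iter=50, b=1.0, seed=0):
    """Minimal binary Whale Optimization Algorithm wrapper for feature selection.
    fitness(mask) must return a score to MAXIMIZE for a boolean feature mask."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n_whales, n_features))      # continuous whale positions

    def to_mask(x):
        # S-shaped transfer: map a continuous position to a random boolean mask
        return rng.random(n_features) < 1.0 / (1.0 + np.exp(-x))

    masks = [to_mask(x) for x in X]
    scores = np.array([fitness(m) for m in masks])
    best = int(scores.argmax())
    best_x, best_mask, best_score = X[best].copy(), masks[best].copy(), scores[best]

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                            # decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2, p = rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            if p < 0.5:
                if abs(A) < 1:                                # encircle the current best whale
                    X[i] = best_x - A * np.abs(C * best_x - X[i])
                else:                                         # explore toward a random whale
                    x_rand = X[rng.integers(n_whales)]
                    X[i] = x_rand - A * np.abs(C * x_rand - X[i])
            else:                                             # spiral (bubble-net) update
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best_x - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best_x
            m = to_mask(X[i])
            s = fitness(m)
            if s > best_score:
                best_x, best_mask, best_score = X[i].copy(), m.copy(), s
    return best_mask, best_score

# Toy usage: reward features correlated with a synthetic label, penalize subset size.
rng = np.random.default_rng(1)
data = rng.normal(size=(300, 15))
label = (data[:, 0] + data[:, 3] > 0).astype(int)

def toy_fitness(mask):
    if not mask.any():
        return -np.inf
    corrs = [abs(np.corrcoef(data[:, j], label)[0, 1]) for j in np.where(mask)[0]]
    return float(np.mean(corrs)) - 0.01 * mask.sum()          # stand-in for classifier AUC

selected, score = binary_woa_feature_selection(toy_fitness, n_features=15)
print(np.where(selected)[0], round(score, 3))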
RevDate: 2025-05-12
From Cadavers to Codes: The Evolution of Anatomy Education Through Digital Technologies.
Medical science educator, 35(2):1101-1109 pii:2268.
This review examines the shift from traditional anatomy education to the integration of advanced digital technologies. With rapid advancements in digital tools, such as 3D models, virtual dissections, augmented reality (AR) and virtual reality (VR), anatomy education is increasingly adopting digital environments to enhance learning. These tools offer immersive, interactive experiences, supporting active learning and knowledge retention. Mobile technology and cloud computing have further increased accessibility, allowing flexible, self-paced learning. Despite challenges like educator resistance and institutional barriers, the continued innovation and integration of digital tools have the potential to transform anatomy education and improve medical outcomes.
Additional Links: PMID-40353020
@article {pmid40353020,
year = {2025},
author = {Al-Rubaie, A},
title = {From Cadavers to Codes: The Evolution of Anatomy Education Through Digital Technologies.},
journal = {Medical science educator},
volume = {35},
number = {2},
pages = {1101-1109},
doi = {10.1007/s40670-024-02268-6},
pmid = {40353020},
issn = {2156-8650},
abstract = {This review examines the shift from traditional anatomy education to the integration of advanced digital technologies. With rapid advancements in digital tools, such as 3D models, virtual dissections, augmented reality (AR) and virtual reality (VR), anatomy education is increasingly adopting digital environments to enhance learning. These tools offer immersive, interactive experiences, supporting active learning and knowledge retention. Mobile technology and cloud computing have further increased accessibility, allowing flexible, self-paced learning. Despite challenges like educator resistance and institutional barriers, the continued innovation and integration of digital tools have the potential to transform anatomy education and improve medical outcomes.},
}
RevDate: 2025-05-12
FunDa: scalable serverless data analytics and in situ query processing.
Journal of big data, 12(1):116.
The pay-what-you-use model of serverless Cloud computing (or serverless, for short) offers significant benefits to users. This computing paradigm is ideal for short-running ephemeral tasks; however, it is not suitable for stateful, long-running tasks such as complex data analytics and query processing. We propose FunDa, an on-premises serverless data analytics framework, which extends our previously proposed system for unified data analytics and in situ SQL query processing called DaskDB. Unlike existing serverless solutions, which struggle with stateful and long-running data analytics tasks, FunDa overcomes their limitations. Our ongoing research focuses on developing a robust architecture for FunDa, enabling true serverless operation in on-premises environments while also being able to run on a public Cloud, such as AWS Cloud. We have evaluated our system on several benchmarks with different scale factors. Our experimental results in both on-premises and AWS Cloud settings demonstrate FunDa's ability to support automatic scaling, low-latency execution of data analytics workloads, and greater flexibility for serverless users.
Additional Links: PMID-40352432
@article {pmid40352432,
year = {2025},
author = {Lounissi, E and Das, SK and Peter, R and Zhang, X and Ray, S and Jia, L},
title = {FunDa: scalable serverless data analytics and in situ query processing.},
journal = {Journal of big data},
volume = {12},
number = {1},
pages = {116},
doi = {10.1186/s40537-025-01141-6},
pmid = {40352432},
issn = {2196-1115},
abstract = {The pay-what-you-use model of serverless Cloud computing (or serverless, for short) offers significant benefits to the users. This computing paradigm is ideal for short running ephemeral tasks, however, it is not suitable for stateful long running tasks, such as complex data analytics and query processing. We propose FunDa, an on-premises serverless data analytics framework, which extends our previously proposed system for unified data analytics and in situ SQL query processing called DaskDB. Unlike existing serverless solutions, which struggle with stateful and long running data analytics tasks, FunDa overcomes their limitations. Our ongoing research focuses on developing a robust architecture for FunDa, enabling true serverless in on-premises environments, while being able to operate on a public Cloud, such as AWS Cloud. We have evaluated our system on several benchmarks with different scale factors. Our experimental results in both on-premises and AWS Cloud settings demonstrate FunDa's ability to support automatic scaling, low-latency execution of data analytics workloads, and more flexibility to serverless users.},
}
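FunDa targets the core tension noted above: serverless functions are stateless and short-lived, so long-running analytics must be decomposed into independent invocations whose small intermediate states are merged elsewhere. The Python sketch below illustrates that general pattern with a hypothetical partial-aggregation handler; the handler signature follows the common AWS Lambda convention, and none of FunDa's or DaskDB's internals are shown.

# Hypothetical illustration of the stateless-function constraint: a query is
# split into independent partial aggregations whose results a coordinator merges.

def handler(event, context=None):
    """Aggregate one partition: event = {"rows": [{"region": ..., "amount": ...}, ...]}"""
    partial = {}
    for row in event["rows"]:
        partial[row["region"]] = partial.get(row["region"], 0) + row["amount"]
    return {"partial_sums": partial}

def merge(partials):
    """Coordinator-side reduce of the partial states returned by each invocation."""
    total = {}
    for p in partials:
        for k, v in p["partial_sums"].items():
            total[k] = total.get(k, 0) + v
    return total

# Local simulation of two invocations over two partitions
p1 = handler({"rows": [{"region": "eu", "amount": 3}, {"region": "us", "amount": 5}]})
p2 = handler({"rows": [{"region": "eu", "amount": 2}]})
print(merge([p1, p2]))    # {'eu': 5, 'us': 5}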
RevDate: 2025-05-10
Phylogeographic and genetic network assessment of COVID-19 mitigation protocols on SARS-CoV-2 transmission in university campus residences.
EBioMedicine, 116:105729 pii:S2352-3964(25)00173-2 [Epub ahead of print].
BACKGROUND: Congregate living provides an ideal setting for SARS-CoV-2 transmission in which many outbreaks and superspreading events occurred. To avoid large outbreaks, universities turned to remote operations during the initial COVID-19 pandemic waves in 2020 and 2021. In late-2021, the University of California San Diego (UC San Diego) facilitated the return of students to campus with comprehensive testing, vaccination, masking, wastewater surveillance, and isolation policies.
METHODS: We performed molecular epidemiological and phylogeographic analysis of 4418 SARS-CoV-2 genomes sampled from UC San Diego students during the Omicron waves between December 2021 and September 2022, representing 58% of students with confirmed SARS-CoV-2 infection. We overlaid these analyses across on-campus residential information to assess the spread and persistence of SARS-CoV-2 within university residences.
FINDINGS: Within campus residences, SARS-CoV-2 transmission was frequent among students residing in the same room or suite. However, a quarter of pairs of suitemates with concurrent infections had distantly related viruses, suggesting separate sources of infection during periods of high incidence in the surrounding community. Students with concurrent infections residing in the same building were not at substantial increased probability of being members of the same transmission cluster. Genetic network and phylogeographic inference indicated that only between 3.1 and 12.4% of infections among students could be associated with transmission within buildings outside of individual suites. The only super-spreading event we detected was related to a large event outside campus residences.
INTERPRETATION: We found little evidence for sustained SARS-CoV-2 transmission within individual buildings, aside from students who resided in the same suite. Even in the face of heightened community transmission during the 2021-2022 Omicron waves, congregate living did not result in a heightened risk for SARS-CoV-2 transmission in the context of the multi-pronged mitigation strategy.
FUNDING: SEARCH Alliance: Centers for Disease Control and Prevention (CDC) BAA (75D301-22-R-72097) and the Google Cloud Platform Research Credits Program. J.O.W.: NIH-NIAID (R01 AI135992). T.I.V.: Branco Weiss Fellowship and Newkirk Fellowship. L.L.: University of California San Diego.
Additional Links: PMID-40347833
@article {pmid40347833,
year = {2025},
author = {Wertheim, JO and Vasylyeva, TI and Wood, RJ and Cantrell, K and Contreras, SP and Feldheim, A and Goyal, R and Havens, JL and Knight, R and Laurent, LC and Moshiri, N and Neuhard, R and Sathe, S and Satterlund, A and Scioscia, A and Song, AY and , and Schooley, RT and Anderson, CM and Martin, NK},
title = {Phylogeographic and genetic network assessment of COVID-19 mitigation protocols on SARS-CoV-2 transmission in university campus residences.},
journal = {EBioMedicine},
volume = {116},
number = {},
pages = {105729},
doi = {10.1016/j.ebiom.2025.105729},
pmid = {40347833},
issn = {2352-3964},
abstract = {BACKGROUND: Congregate living provides an ideal setting for SARS-CoV-2 transmission in which many outbreaks and superspreading events occurred. To avoid large outbreaks, universities turned to remote operations during the initial COVID-19 pandemic waves in 2020 and 2021. In late-2021, the University of California San Diego (UC San Diego) facilitated the return of students to campus with comprehensive testing, vaccination, masking, wastewater surveillance, and isolation policies.
METHODS: We performed molecular epidemiological and phylogeographic analysis of 4418 SARS-CoV-2 genomes sampled from UC San Diego students during the Omicron waves between December 2021 and September 2022, representing 58% of students with confirmed SARS-CoV-2 infection. We overlaid these analyses across on-campus residential information to assess the spread and persistence of SARS-CoV-2 within university residences.
FINDINGS: Within campus residences, SARS-CoV-2 transmission was frequent among students residing in the same room or suite. However, a quarter of pairs of suitemates with concurrent infections had distantly related viruses, suggesting separate sources of infection during periods of high incidence in the surrounding community. Students with concurrent infections residing in the same building were not at substantial increased probability of being members of the same transmission cluster. Genetic network and phylogeographic inference indicated that only between 3.1 and 12.4% of infections among students could be associated with transmission within buildings outside of individual suites. The only super-spreading event we detected was related to a large event outside campus residences.
INTERPRETATION: We found little evidence for sustained SARS-CoV-2 transmission within individual buildings, aside from students who resided in the same suite. Even in the face of heightened community transmission during the 2021-2022 Omicron waves, congregate living did not result in a heightened risk for SARS-CoV-2 transmission in the context of the multi-pronged mitigation strategy.
FUNDING: SEARCH Alliance: Centers for Disease Control and Prevention (CDC) BAA (75D301-22-R-72097) and the Google Cloud Platform Research Credits Program. J.O.W.: NIH-NIAID (R01 AI135992). T.I.V.: Branco Weiss Fellowship and Newkirk Fellowship. L.L.: University of California San Diego.},
}
RevDate: 2025-05-09
Privacy-preserving and verifiable spectral graph analysis in the cloud.
Scientific reports, 15(1):16237.
Resorting to cloud computing for spectral graph analysis on large-scale graph data is becoming increasingly popular. However, given the intrusive and opaque nature of cloud services, privacy and the possibility of a misbehaving cloud that returns incorrect results have raised serious concerns. Current schemes address privacy alone under the semi-honest model, disregarding the realistic threat of a misbehaving cloud that might skip computationally intensive operations for economic gain. Additionally, existing verifiable computation techniques prove inadequate for the specialized requirements of spectral graph analysis, either due to compatibility issues with privacy-preserving protocols or the excessive computational burden they impose on resource-constrained users. To tackle these two issues in a holistic solution, we present, tailor, and evaluate PVG, a privacy-preserving and verifiable framework for spectral graph analytics in the cloud, for the first time. PVG concentrates on the eigendecomposition process and provides strong privacy for graph data while enabling users to validate the accuracy of the outcomes yielded by the cloud. For this, we first design a new additive publicly verifiable computation algorithm, APVC, that can verify the accuracy of the result of the core operation (matrix multiplication) in eigendecomposition returned by cloud servers. We then propose three secure and verifiable functions for eigendecomposition based on APVC and lightweight cryptography. Extensive experiments on three manually generated and two real-world social graph datasets indicate that PVG's accuracy is consistent with plaintext, with practically affordable performance superior to prior art.
Additional Links: PMID-40346106
@article {pmid40346106,
year = {2025},
author = {Song, Y},
title = {Privacy-preserving and verifiable spectral graph analysis in the cloud.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {16237},
pmid = {40346106},
issn = {2045-2322},
abstract = {Resorting to cloud computing for spectral graph analysis on large-scale graph data is becoming increasingly popular. However, given the intrusive and opaque natures of cloud services, privacy, and misbehaving cloud that returns incorrect results have raised serious concerns. Current schemes are proposed for privacy alone under the semi-honest model, while disregarding the realistic threat posed by the misbehaving cloud that might skip computationally intensive operations for economic gain. Additionally, existing verifiable computation techniques prove inadequate for the specialized requirements of spectral graph analysis, either due to compatibility issues with privacy-preserving protocols or the excessive computational burden they impose on resource-constrained users. To tackle the above two issues in a holistic solution, we present, tailor, and evaluate PVG, a privacy-preserving and verifiable framework for spectral graph analytics in the cloud for the first time. PVG concentrates on the eigendecomposition process, and provides strong privacy for graph data while enabling users to validate the accuracy of the outcomes yielded by the cloud. For this, we first design a new additive publicly verifiable computation algorithm, APVC, that can verify the accuracy of the result of the core operation (matrix multiplication) in eigendecomposition returned by cloud servers. We then propose three secure and verifiable functions for eigendecomposition based on APVC and lightweight cryptography. Extensive experiments on three manually generated and two real-world social graph datasets indicate that PVG's accuracy is consistent with plaintext, with practically affordable performance superior to prior art.},
}
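PVG's APVC algorithm lets users check the matrix multiplications a cloud performs during eigendecomposition. The paper's construction is not reproduced here; as background, the Python sketch below shows the classic Freivalds probabilistic check, a standard way for a client to verify an outsourced matrix product in O(n^2) time per round, which conveys the general idea of verifying a cloud result without redoing the work.

import numpy as np

def freivalds_check(A, B, C, rounds=10, rng=None):
    """Probabilistically verify that C == A @ B without recomputing the product.
    Each round costs three matrix-vector products (O(n^2)); an incorrect C
    escapes a single round with probability at most 1/2."""
    if rng is None:
        rng = np.random.default_rng()
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))        # random 0/1 column vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                           # definitely incorrect
    return True                                    # correct with high probability

# An honest result passes; a tampered result is almost certainly caught.
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (50, 50))
B = rng.integers(0, 10, (50, 50))
C = A @ B
print(freivalds_check(A, B, C))    # True
C[3, 7] += 1                       # simulate a lazy or misbehaving cloud
print(freivalds_check(A, B, C))    # False with overwhelming probability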
RevDate: 2025-05-09
Celeste: A cloud-based genomics infrastructure with variant-calling pipeline suited for population-scale sequencing projects.
medRxiv : the preprint server for health sciences pii:2025.04.29.25326690.
BACKGROUND: The All of Us Research Program (All of Us) is one of the world's largest sequencing efforts that will generate genetic data for over one million individuals from diverse backgrounds. This historic megaproject will create novel research platforms that integrate an unprecedented amount of genetic data with longitudinal health information. Here, we describe the design of Celeste, a resilient, open-source cloud architecture for implementing genomics workflows that has successfully analyzed petabytes of participant genomic information for All of Us, thereby enabling other large-scale sequencing efforts with a comprehensive set of tools to power analysis. The Celeste infrastructure is tremendously scalable and has routinely processed fluctuating workloads of up to 9,000 whole-genome sequencing (WGS) samples for All of Us, monthly. It also lends itself to multiple projects. Serverless technology and container orchestration form the basis of Celeste's system for managing this volume of data.
RESULTS: In 12 months of production (within a single Amazon Web Services (AWS) Region), around 200 million serverless functions and over 20 million messages coordinated the analysis of 1.8 million bioinformatics, quality control, and clinical reporting jobs. Adapting WGS analysis to clinical projects requires adaptation of variant-calling methods to enrich the reliable detection of variants with known clinical importance. Thus, we also share the process by which we tuned the variant-calling pipeline in use by the multiple genome centers supporting All of Us to maximize precision and accuracy for low fraction variant calls with clinical significance.
CONCLUSIONS: When combined with hardware-accelerated implementations for genomic analysis, Celeste had far-reaching, positive implications for turn-around time, dynamic scalability, security, and storage of analysis for one hundred-thousand whole-genome samples and counting. Other groups may align their sequencing workflows to this harmonized pipeline standard, included within the Celeste framework, to meet clinical requisites for population-scale sequencing efforts. Celeste is available as an Amazon Web Services (AWS) deployment in GitHub, and includes command-line parameters and software containers.
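The abstract describes serverless functions coordinated by millions of queue messages. As a rough illustration of that pattern (not Celeste's actual code), the sketch below enqueues one hypothetical WGS job on an AWS SQS queue with boto3; the queue URL, message fields, and sample identifiers are assumptions:

```python
import json
import boto3

# Hypothetical queue URL; a real deployment would point this at the
# project's own SQS queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/wgs-jobs"

def enqueue_wgs_job(sample_id: str, cram_uri: str, reference: str) -> str:
    """Send one whole-genome analysis job to the work queue.

    A downstream serverless worker (e.g., a Lambda function or a
    container task) would pull this message and launch the
    variant-calling step for the sample.
    """
    sqs = boto3.client("sqs")
    body = json.dumps(
        {
            "sample_id": sample_id,
            "input_cram": cram_uri,
            "reference": reference,
            "step": "variant_calling",
        }
    )
    response = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
    return response["MessageId"]

# Example (hypothetical S3 paths):
# enqueue_wgs_job("SAMPLE-0001", "s3://bucket/crams/SAMPLE-0001.cram", "GRCh38")
```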
Additional Links: PMID-40343041
@article {pmid40343041,
year = {2025},
author = {Siddiqui, N and Lee, B and Yi, V and Farek, J and Khan, Z and Kalla, SE and Wang, Q and Walker, K and Meldrim, J and Kachulis, C and Gatzen, M and Lennon, NJ and Mehtalia, S and Catreux, S and Mehio, R and Gibbs, RA and Venner, E},
title = {Celeste : A cloud-based genomics infrastructure with variant-calling pipeline suited for population-scale sequencing projects.},
journal = {medRxiv : the preprint server for health sciences},
volume = {},
number = {},
pages = {},
doi = {10.1101/2025.04.29.25326690},
pmid = {40343041},
abstract = {BACKGROUND: The All of Us Research Program (All of Us) is one of the world's largest sequencing efforts that will generate genetic data for over one million individuals from diverse backgrounds. This historic megaproject will create novel research platforms that integrate an unprecedented amount of genetic data with longitudinal health information. Here, we describe the design of Celeste , a resilient, open-source cloud architecture for implementing genomics workflows that has successfully analyzed petabytes of participant genomic information for All of Us - thereby enabling other large-scale sequencing efforts with a comprehensive set of tools to power analysis. The Celeste infrastructure is tremendously scalable and has routinely processed fluctuating workloads of up to 9,000 whole-genome sequencing (WGS) samples for All of Us , monthly. It also lends itself to multiple projects. Serverless technology and container orchestration form the basis of Celeste 's system for managing this volume of data.
RESULTS: In 12 months of production (within a single Amazon Web Services (AWS) Region), around 200 million serverless functions and over 20 million messages coordinated the analysis of 1.8 million bioinformatics, quality control, and clinical reporting jobs. Adapting WGS analysis to clinical projects requires adaptation of variant-calling methods to enrich the reliable detection of variants with known clinical importance. Thus, we also share the process by which we tuned the variant-calling pipeline in use by the multiple genome centers supporting All of Us to maximize precision and accuracy for low fraction variant calls with clinical significance.
CONCLUSIONS: When combined with hardware-accelerated implementations for genomic analysis, Celeste had far-reaching, positive implications for turn-around time, dynamic scalability, security, and storage of analysis for one hundred-thousand whole-genome samples and counting. Other groups may align their sequencing workflows to this harmonized pipeline standard, included within the Celeste framework, to meet clinical requisites for population-scale sequencing efforts. Celeste is available as an Amazon Web Services (AWS) deployment in GitHub, and includes command-line parameters and software containers.},
}
RevDate: 2025-05-09
rMATS-cloud: Large-scale Alternative Splicing Analysis in the Cloud.
Genomics, proteomics & bioinformatics pii:8127209 [Epub ahead of print].
Although gene expression analysis pipelines are often a standard part of bioinformatics analysis, with many publicly available cloud workflows, cloud-based alternative splicing analysis tools remain limited. Our lab released rMATS in 2014 and has continuously maintained it, providing a fast and versatile solution for quantifying alternative splicing from RNA sequencing (RNA-seq) data. Here, we present rMATS-cloud, a portable version of the rMATS workflow that can be run in virtually any cloud environment suited for biomedical research. We compared the time and cost of running rMATS-cloud with two RNA-seq datasets on three different platforms (Cavatica, Terra, and Seqera). Our findings demonstrate that rMATS-cloud handles RNA-seq datasets with thousands of samples, and therefore is ideally suited for the storage capacities of many cloud data repositories. rMATS-cloud is available at https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-cwl, https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-wdl, and https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-nextflow.
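rMATS quantifies alternative splicing as an exon inclusion level (PSI). As a minimal sketch of that standard quantity (not rMATS-cloud's implementation), the inclusion level of an exon-skipping event can be estimated from length-normalized inclusion- and skipping-junction read counts:

```python
def inclusion_level(inc_reads: int, skip_reads: int,
                    inc_len: int, skip_len: int) -> float:
    """Estimate PSI = (I/lI) / (I/lI + S/lS) for one sample.

    inc_reads / skip_reads are reads supporting inclusion / skipping,
    and inc_len / skip_len are the effective lengths of the inclusion
    and skipping isoforms used for normalization.
    """
    inc_norm = inc_reads / inc_len
    skip_norm = skip_reads / skip_len
    total = inc_norm + skip_norm
    return float("nan") if total == 0 else inc_norm / total

# Example: 80 inclusion reads over effective length 200,
# 20 skipping reads over effective length 100 -> PSI = 2/3.
print(round(inclusion_level(80, 20, 200, 100), 3))  # 0.667
```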
Additional Links: PMID-40341961
@article {pmid40341961,
year = {2025},
author = {Adams, JI and Kutschera, E and Hu, Q and Liu, CJ and Liu, Q and Kadash-Edmondson, K and Liu, S and Xing, Y},
title = {rMATS-cloud: Large-scale Alternative Splicing Analysis in the Cloud.},
journal = {Genomics, proteomics & bioinformatics},
volume = {},
number = {},
pages = {},
doi = {10.1093/gpbjnl/qzaf036},
pmid = {40341961},
issn = {2210-3244},
abstract = {Although gene expression analysis pipelines are often a standard part of bioinformatics analysis, with many publicly available cloud workflows, cloud-based alternative splicing analysis tools remain limited. Our lab released rMATS in 2014 and has continuously maintained it, providing a fast and versatile solution for quantifying alternative splicing from RNA sequencing (RNA-seq) data. Here, we present rMATS-cloud, a portable version of the rMATS workflow that can be run in virtually any cloud environment suited for biomedical research. We compared the time and cost of running rMATS-cloud with two RNA-seq datasets on three different platforms (Cavatica, Terra, and Seqera). Our findings demonstrate that rMATS-cloud handles RNA-seq datasets with thousands of samples, and therefore is ideally suited for the storage capacities of many cloud data repositories. rMATS-cloud is available at https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-cwl, https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-wdl, and https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-nextflow.},
}
RevDate: 2025-05-09
Visibility-Aware Multi-View Stereo by Surface Normal Weighting for Occlusion Robustness.
IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].
Recent learning-based multi-view stereo (MVS) still exhibits insufficient accuracy in large occlusion cases, such as environments with significant inter-camera distance or when capturing objects with complex shapes. This is because incorrect image features extracted from occluded areas serve as significant noise in the cost volume construction. To address this, we propose a visibility-aware MVS using surface normal weighting (SnowMVSNet) based on explicit 3D geometry. It selectively suppresses mismatched features in the cost volume construction by computing inter-view visibility. Additionally, we present a geometry-guided cost volume regularization that enhances true depth among depth hypotheses using a surface normal prior. We also propose intra-view visibility that distinguishes geometrically more visible pixels within a reference view. Using intra-view visibility, we introduce the visibility-weighted training and depth estimation methods. These methods enable the network to achieve accurate 3D point cloud reconstruction by focusing on visible regions. Based on simple inter-view and intra-view visibility computations, SnowMVSNet accomplishes substantial performance improvements relative to computational complexity, particularly in terms of occlusion robustness. To evaluate occlusion robustness, we constructed a multi-view human (MVHuman) dataset containing general human body shapes prone to self-occlusion. Extensive experiments demonstrated that SnowMVSNet significantly outperformed state-of-the-art methods in both low- and high-occlusion scenarios.
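As a toy illustration of the geometry-based visibility weighting the abstract describes (not the SnowMVSNet implementation), matching costs from a source view can be down-weighted when the surface normal at a reference pixel faces away from that view; the cosine weighting below is an assumption made for illustration:

```python
import numpy as np

def normal_visibility_weights(normals: np.ndarray,
                              view_dirs: np.ndarray) -> np.ndarray:
    """Per-pixel, per-view visibility weights from surface normals.

    normals:   (H, W, 3) unit surface normals in the reference frame.
    view_dirs: (V, 3)    unit vectors pointing from the surface toward
                         each source camera.
    Returns (V, H, W) weights in [0, 1]; back-facing (likely occluded)
    pixels get weight 0 and are suppressed when fusing cost volumes.
    """
    cos = np.einsum("hwc,vc->vhw", normals, view_dirs)
    return np.clip(cos, 0.0, 1.0)

def fuse_costs(costs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted average of per-view cost volumes with shape (V, D, H, W)."""
    w = weights[:, None, :, :]                 # broadcast over depth bins
    return (costs * w).sum(0) / (w.sum(0) + 1e-6)
```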
Additional Links: PMID-40338714
@article {pmid40338714,
year = {2025},
author = {Lee, H and Lee, S and Lee, S},
title = {Visibility-Aware Multi-View Stereo by Surface Normal Weighting for Occlusion Robustness.},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TPAMI.2025.3568447},
pmid = {40338714},
issn = {1939-3539},
abstract = {Recent learning-based multi-view stereo (MVS) still exhibits insufficient accuracy in large occlusion cases, such as environments with significant inter-camera distance or when capturing objects with complex shapes. This is because incorrect image features extracted from occluded areas serve as significant noise in the cost volume construction. To address this, we propose a visibility-aware MVS using surface normal weighting (SnowMVSNet) based on explicit 3D geometry. It selectively suppresses mismatched features in the cost volume construction by computing inter-view visibility. Additionally, we present a geometry-guided cost volume regularization that enhances true depth among depth hypotheses using a surface normal prior. We also propose intra-view visibility that distinguishes geometrically more visible pixels within a reference view. Using intra-view visibility, we introduce the visibility-weighted training and depth estimation methods. These methods enable the network to achieve accurate 3D point cloud reconstruction by focusing on visible regions. Based on simple inter-view and intra-view visibility computations, SnowMVSNet accomplishes substantial performance improvements relative to computational complexity, particularly in terms of occlusion robustness. To evaluate occlusion robustness, we constructed a multi-view human (MVHuman) dataset containing general human body shapes prone to self-occlusion. Extensive experiments demonstrated that SnowMVSNet significantly outperformed state-of-the-art methods in both low- and high-occlusion scenarios.},
}
RevDate: 2025-05-08
InDeepNet: a web platform for predicting functional binding sites in proteins using InDeep.
Nucleic acids research pii:8126900 [Epub ahead of print].
Predicting functional binding sites in proteins is crucial for understanding protein-protein interactions (PPIs) and identifying drug targets. While various computational approaches exist, many fail to assess PPI ligandability, which often involves conformational changes. We introduce InDeepNet, a web-based platform integrating InDeep, a deep-learning model for binding site prediction, with InDeepHolo, which evaluates a site's propensity to adopt a ligand-bound (holo) conformation. InDeepNet provides an intuitive interface for researchers to upload protein structures from in-house data, the Protein Data Bank (PDB), or AlphaFold, predicting potential binding sites for proteins or small molecules. Results are presented as interactive 3D visualizations via Mol*, facilitating structural analysis. With InDeepHolo, the platform helps select conformations optimal for small-molecule binding, improving structure-based drug design. Accessible at https://indeep-net.gpu.pasteur.cloud/, InDeepNet removes the need for specialized coding skills or high-performance computing, making advanced predictive models widely available. By streamlining PPI target assessment and ligandability prediction, it assists research and supports therapeutic development targeting PPIs.
Additional Links: PMID-40337922
@article {pmid40337922,
year = {2025},
author = {Mareuil, F and Torchet, R and Ruano, LC and Mallet, V and Nilges, M and Bouvier, G and Sperandio, O},
title = {InDeepNet: a web platform for predicting functional binding sites in proteins using InDeep.},
journal = {Nucleic acids research},
volume = {},
number = {},
pages = {},
doi = {10.1093/nar/gkaf403},
pmid = {40337922},
issn = {1362-4962},
support = {//Dassault Systèmes La Fondation/ ; PFR7//Fondation de France/ ; },
abstract = {Predicting functional binding sites in proteins is crucial for understanding protein-protein interactions (PPIs) and identifying drug targets. While various computational approaches exist, many fail to assess PPI ligandability, which often involves conformational changes. We introduce InDeepNet, a web-based platform integrating InDeep, a deep-learning model for binding site prediction, with InDeepHolo, which evaluates a site's propensity to adopt a ligand-bound (holo) conformation. InDeepNet provides an intuitive interface for researchers to upload protein structures from in-house data, the Protein Data Bank (PDB), or AlphaFold, predicting potential binding sites for proteins or small molecules. Results are presented as interactive 3D visualizations via Mol*, facilitating structural analysis. With InDeepHolo, the platform helps select conformations optimal for small-molecule binding, improving structure-based drug design. Accessible at https://indeep-net.gpu.pasteur.cloud/, InDeepNet removes the need for specialized coding skills or high-performance computing, making advanced predictive models widely available. By streamlining PPI target assessment and ligandability prediction, it assists research and supports therapeutic development targeting PPIs.},
}
RevDate: 2025-05-08
SenseRisc: An instrumented smart shirt for risk prevention in the workplace.
Wearable technologies, 6:e20 pii:S2631717625000106.
The integration of multiple sensors into wearable smart garments has gained momentum, enabling real-time monitoring of users' vital parameters across various domains. This study presents the development and validation of an instrumented smart shirt for risk prevention in the workplace, designed to enhance worker safety and well-being in occupational settings. The proposed smart shirt is equipped with sensors for collecting electrocardiogram, respiratory waveform, and acceleration data, with signal-conditioning electronics and Bluetooth transmission to a mobile application. The mobile application sends the data to the cloud platform for subsequent Preventive Risk Index (PRI) extraction. The SenseRisc system was validated with eight healthy participants performing different physically exerting activities to assess its capability to capture physiological parameters and estimate the worker's PRI, as well as the users' subjective perception of the instrumented smart shirt.
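The abstract does not define how the PRI is computed. Purely as a hypothetical sketch of fusing normalized physiological signals into a single risk score (the ranges, weights, and formula below are assumptions, not the SenseRisc method):

```python
import numpy as np

def toy_risk_index(heart_rate_bpm: float,
                   resp_rate_bpm: float,
                   accel_rms_g: float) -> float:
    """Hypothetical preventive-risk score in [0, 1].

    Each signal is min-max normalized against an assumed resting-to-
    strenuous range, then combined with assumed weights. This only
    illustrates sensor fusion, not the published PRI.
    """
    def norm(x, lo, hi):
        return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

    hr = norm(heart_rate_bpm, 60, 180)     # assumed range
    rr = norm(resp_rate_bpm, 12, 40)       # assumed range
    ac = norm(accel_rms_g, 0.0, 1.5)       # assumed range
    return 0.5 * hr + 0.3 * rr + 0.2 * ac  # assumed weights

print(round(toy_risk_index(150, 30, 0.8), 2))
```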
Additional Links: PMID-40336969
@article {pmid40336969,
year = {2025},
author = {Tamantini, C and Marra, F and Di Tocco, J and Di Modica, S and Lanata, A and Cordella, F and Ferrarin, M and Rizzo, F and Stefanelli, M and Papacchini, M and Delle Site, C and Tamburrano, A and Massaroni, C and Schena, E and Zollo, L and Sarto, MS},
title = {SenseRisc: An instrumented smart shirt for risk prevention in the workplace.},
journal = {Wearable technologies},
volume = {6},
number = {},
pages = {e20},
doi = {10.1017/wtc.2025.10},
pmid = {40336969},
issn = {2631-7176},
abstract = {The integration of wearable smart garments with multiple sensors has gained momentum, enabling real-time monitoring of users' vital parameters across various domains. This study presents the development and validation of an instrumented smart shirt for risk prevention in workplaces designed to enhance worker safety and well-being in occupational settings. The proposed smart shirt is equipped with sensors for collecting electrocardiogram, respiratory waveform, and acceleration data, with signal conditioning electronics and Bluetooth transmission to the mobile application. The mobile application sends the data to the cloud platform for subsequent Preventive Risk Index (PRI) extraction. The proposed SenseRisc system was validated with eight healthy participants during the execution of different physically exerting activities to assess the capability of the system to capture physiological parameters and estimate the PRI of the worker, and user subjective perception of the instrumented intelligent shirt.},
}
RevDate: 2025-05-07
Hybrid multi objective marine predators algorithm based clustering for lightweight resource scheduling and application placement in fog.
Scientific reports, 15(1):15953.
The Internet of Things (IoT) has boosted fog computing, which complements the cloud and is critical for applications that need close user proximity. Efficient allocation of IoT applications to the fog, together with fog device scheduling, enables the realistic deployment of IoT applications in the fog environment. The scheduling difficulties are multi-objective in nature, since they must avoid resource waste, limit network latency, and maximise Quality of Service (QoS) on fog nodes. In this research, the Hybrid Multi-Objective Marine Predators Algorithm-based Clustering and Fog Picker (HMMPACFP) technique is developed as a combinatorial model for tackling the fog node allocation problem, with the goal of achieving dynamic scheduling using lightweight characteristics. Fog Picker is used to allocate IoT components to fog nodes based on QoS parameters. Simulation trials of the proposed HMMPACFP scheme using iMetal and iFogSim, evaluated with Hypervolume (HV) and Inverted Generational Distance (IGD), demonstrated its superiority over the benchmarked methodologies. The combination of Fog Picker with the proposed HMMPACFP scheme resulted in 32.18% faster convergence, 26.92% more solution variety, and a better balance between exploration and exploitation rates.
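As a rough sketch of what evaluating a candidate fog placement against multiple objectives might look like (a toy formulation under assumed objectives, not the HMMPACFP algorithm itself):

```python
import numpy as np

def placement_objectives(assignment, app_load, node_capacity, node_latency_ms):
    """Evaluate one candidate placement of IoT applications onto fog nodes.

    assignment[i]      = index of the fog node hosting application i
    app_load[i]        = resource demand of application i
    node_capacity[j]   = capacity of fog node j
    node_latency_ms[j] = network latency to fog node j
    Returns (mean_latency, resource_waste, overload); all are to be
    minimized, and the objective set itself is an illustrative assumption.
    """
    assignment = np.asarray(assignment)
    app_load = np.asarray(app_load, dtype=float)
    node_capacity = np.asarray(node_capacity, dtype=float)
    node_latency_ms = np.asarray(node_latency_ms, dtype=float)

    used = np.bincount(assignment, weights=app_load,
                       minlength=node_capacity.size)
    mean_latency = float(node_latency_ms[assignment].mean())
    resource_waste = float(np.clip(node_capacity - used, 0, None).sum())
    overload = float(np.clip(used - node_capacity, 0, None).sum())
    return mean_latency, resource_waste, overload

def dominates(a, b):
    """Pareto dominance for minimization problems: does a dominate b?"""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

# Example: 5 applications placed on 3 fog nodes.
print(placement_objectives([0, 0, 1, 2, 2], [2, 3, 4, 1, 2],
                           [6, 5, 4], [10.0, 25.0, 40.0]))
```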
Additional Links: PMID-40335531
@article {pmid40335531,
year = {2025},
author = {Baskar, R and Mohanraj, E},
title = {Hybrid multi objective marine predators algorithm based clustering for lightweight resource scheduling and application placement in fog.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {15953},
pmid = {40335531},
issn = {2045-2322},
abstract = {The Internet of Things (IoT) has boosted fog computing, which complements the cloud. This is critical for applications that need close user proximity. Efficient allocation of IoT applications to the fog, as well as fog device scheduling, enabling the realistic execution of IoT application deployment in the fog environment. The scheduling difficulties are multi-objective in nature, since they must handle the issues of avoiding resource waste, network latency, and maximising Quality of Service (QoS) on fog nodes. In this research, the Hybrid Multi-Objective Marine Predators Algorithm-based Clustering and Fog Picker (HMMPACFP) Technique is developed as a combinatorial model for tackling the problem of fog node allocation, with the goal of achieving dynamic scheduling using lightweight characteristics. Utilised Fog Picker to allocate IoT components to fog nodes based on QoS parameters. Simulation trials of the proposed HMMPACFP scheme utilising iMetal and iFogSim with Hypervolume (HV) and Generational Distance (IGD) demonstrated its superiority over the benchmarked methodologies utilised for evaluation. The combination of Fog Picker with the suggested HMMPACFP scheme resulted in 32.18% faster convergence, 26.92% more solution variety, and a better balance between exploration and exploitation rates.},
}
RevDate: 2025-05-06
CmpDate: 2025-05-06
Spatiotemporal dynamics of Ramsar wetlands and freshwater resources: Technological innovations for ecosystem conservation.
Water environment research : a research publication of the Water Environment Federation, 97(5):e70072.
Aquatic ecosystems, particularly wetlands, are vulnerable to natural and anthropogenic influences. This study examines the Saman Bird Sanctuary and Keetham Lake, both Ramsar sites, using advanced remote sensing for water occurrence, land use and land cover (LULC), and water quality assessments. Sentinel data, processed in a cloud computing environment, enabled land-use classification, water boundary delineation, and seasonal water occurrence mapping. A combination of the Modified Normalized Difference Water Index (MNDWI), Otsu threshold segmentation, and Canny edge detection provided precise delineation of seasonal water boundaries. Sixteen water quality parameters, including pH, turbidity, dissolved oxygen (DO), chemical oxygen demand (COD), total hardness (TH), total alkalinity (TA), total dissolved solids (TDS), electrical conductivity (EC), phosphate (PO4), nitrate (NO3), chloride (Cl[-]), fluoride (F[-]), carbon dioxide (CO2), silica (Si), iodine (I[-]), and chromium (Cr[-]), were analyzed and compared for both sites. Results showed significant LULC changes, particularly at Saman, with scrub forest, built-up areas, and agriculture increasing while flooded vegetation and open water declined. Significant LULC changes were observed near the marsh wetland, where built-up area in the surrounding regions increased by up to 42.17%, rising to 5.43 ha in 2021 from 3.14 ha in 2017. Scrub forest increased by up to 21.02%, a rise of 2.18 ha. Vegetation in the marsh region, including seasonal grasses and hydrophytes, expanded by up to 0.39 ha, a rise of 7.12%. Spatiotemporal water occurrence was analyzed across pre-monsoon, monsoon, and post-monsoon seasons using Sentinel-1 data. The study highlights the role of remote sensing and field-based water quality monitoring in understanding ecological shifts and anthropogenic pressures on wetlands. By integrating land-use change and water quality analysis, this research provides critical information for planning and conservation efforts, advocating continued monitoring and adaptive management to sustain these critical ecosystems. PRACTITIONER POINTS: Spatiotemporal surface water occurrence at two geographically different wetlands (a lake and a marsh wetland); LULC change analysis to evaluate positive and negative impacts on the wetlands and their surroundings; boundary delineation to examine changes and identify low-lying areas during the pre- and post-monsoon seasons; comparative analysis of the water quality of the two wetlands; the insectivorous plant Utricularia stellaris was recorded from Northern India at the Saman Bird Sanctuary for the first time.
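A minimal sketch of the water-delineation steps named in the abstract (MNDWI, Otsu thresholding, Canny edges), assuming green and shortwave-infrared reflectance bands are already loaded as arrays; the band handling and the choice of scikit-image are assumptions, not the authors' workflow:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import canny

def delineate_water(green: np.ndarray, swir: np.ndarray):
    """Return (water_mask, boundary_edges) from reflectance bands.

    MNDWI = (green - SWIR) / (green + SWIR); Otsu picks a global
    threshold separating water from non-water, and Canny traces the
    water boundary on the binary mask.
    """
    mndwi = (green - swir) / (green + swir + 1e-6)
    water = mndwi > threshold_otsu(mndwi)
    edges = canny(water.astype(float), sigma=1.0)
    return water, edges

# Example with synthetic bands:
rng = np.random.default_rng(0)
green = rng.uniform(0.05, 0.4, (256, 256))
swir = rng.uniform(0.05, 0.4, (256, 256))
mask, boundary = delineate_water(green, swir)
```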
Additional Links: PMID-40325903
@article {pmid40325903,
year = {2025},
author = {Mohanty, S and Pandey, PC},
title = {Spatiotemporal dynamics of Ramsar wetlands and freshwater resources: Technological innovations for ecosystem conservation.},
journal = {Water environment research : a research publication of the Water Environment Federation},
volume = {97},
number = {5},
pages = {e70072},
doi = {10.1002/wer.70072},
pmid = {40325903},
issn = {1554-7531},
support = {NGP/TPN-30705/2019(G)//National Geospatial Program/ ; },
mesh = {*Wetlands ; *Environmental Monitoring ; *Fresh Water ; Water Quality ; *Conservation of Natural Resources/methods ; },
abstract = {Aquatic ecosystems, particularly wetlands, are vulnerable to natural and anthropogenic influences. This study examines the Saman Bird Sanctuary and Keetham Lake, both Ramsar sites, using advanced remote sensing for water occurrence, land use and land cover (LULC), and water quality assessments. Sentinel data, processed in cloud computing, enabled land-use classification, water boundary delineation, and seasonal water occurrence mapping. A combination of Modified Normalized Difference Water Index (MNDWI), OTSU threshold segmentation, and Canny edge detection provided precise seasonal water boundaries. Study utilized a combination of the MNDWI, OTSU threshold segmentation, and Canny edge detection methods. These approaches allowed for precise delineation of seasonal water boundaries. Sixteen water quality parameters including pH, turbidity, dissolved oxygen (DO), chemical oxygen demand (COD), total hardness (TH), total alkalinity (TA), total dissolved solid (TDS), electrical conductivity (EC), phosphates (PO4), nitrate (NO3), chloride (Cl[-]), fluoride (F[-]), carbon dioxide (CO2), silica (Si), iodine (I[-]), and chromium (Cr[-]) were analyzed and compared for both sites. Results showed significant LULC changes, particularly at Saman, with scrub forest, built-up areas, and agriculture increasing, while flooded vegetation and open water declined. Significant LULC changes were observed near Marsh wetland, where positive changes up to 42.17% were seen for built-up in surrounding regions, with an increase to 5.43 ha in 2021 from 3.14 ha in 2017. Positive change was observed for scrub forests up to 21.02%, with a rise of 2.18 ha. Vegetation in the marsh region, including seasonal grasses and hydrophytes, has shown an increase in extent up to 0.39 ha with a rise of 7.12%. Spatiotemporal water occurrence was analyzed across pre-monsoon, monsoon, and post-monsoon seasons using Sentinel-1 data. The study highlights the role of remote sensing and field-based water quality monitoring in understanding ecological shifts and anthropogenic pressures on wetlands. By integrating land-use changes and water quality analysis, this research provides critical information for planning and conservation efforts. It provides vital insights for conservation planning, advocating for continued monitoring and adaptive management to sustain these critical ecosystems. PRACTITIONER POINTS: Spatiotemporal surface water occurrence at two geographically different wetlands-lake and marsh wetland; LULC and its change analysis to evaluate the impact on wetlands and its surrounding environment-positive and negative changes; Boundary delineation to examine changes and identify low-lying areas during the pre- and post-monsoon; Comparative analysis of the water quality of two different wetlands; Insectivorous plant-Utricularia stellaris, was recorded from Northern India at the Saman Bird Sanctuary for the first time.},
}
RevDate: 2025-05-04
An integrated wearable fluorescence sensor for E. coli detection in catheter bags.
Biosensors & bioelectronics, 283:117539 pii:S0956-5663(25)00413-0 [Epub ahead of print].
Urinary tract infections (UTIs), including catheter-associated UTIs (CAUTIs), affect millions worldwide. Traditional diagnostic methods, such as urinalysis and urine culture, have limitations: urinalysis is fast but lacks sensitivity, while urine culture is accurate but takes up to two days. Here, we present an integrated wearable fluorescence sensor to detect UTI-related bacterial infections early at the point of care through on-body monitoring. The sensor features a hardware platform with a flexible PCB that attaches to a urine catheter bag, emitting excitation light and detecting the emission light of an E. coli-specific enzymatic reaction for continuous monitoring. Our custom-developed smartphone application allows remote control and data transfer via Bluetooth and performs in situ data analysis without cloud computing. The performance of the device was demonstrated by detecting E. coli at concentrations of 10[0]-10[5] CFU/mL within 9 to 3.5 h, respectively, with high sensitivity, and by testing specificity against Gram-positive (Staphylococcus epidermidis) and Gram-negative (Pseudomonas aeruginosa and Klebsiella pneumoniae) pathogens. In vitro bladder-model testing was performed using E. coli-spiked human urine samples to further evaluate the device's practicality. This portable, cost-effective device has the potential to transform the clinical practice of UTI diagnosis with automated and rapid bacterial detection at the point of care.
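The reported detection times (roughly 9 h at 10^0 CFU/mL down to 3.5 h at 10^5 CFU/mL) suggest an approximately log-linear calibration. A hedged sketch of fitting such a curve with NumPy, using only the two endpoint values given in the abstract, purely for illustration:

```python
import numpy as np

# Endpoint values taken from the abstract: ~9 h at 10^0 CFU/mL and
# ~3.5 h at 10^5 CFU/mL; intermediate points are not reported, so this
# two-point log-linear fit is an illustration, not the device's calibration.
log_conc = np.array([0.0, 5.0])          # log10(CFU/mL)
detect_h = np.array([9.0, 3.5])          # time to detection (hours)

slope, intercept = np.polyfit(log_conc, detect_h, 1)

def predicted_detection_time(cfu_per_ml: float) -> float:
    """Estimated time-to-detection (h) for a given E. coli load."""
    return slope * np.log10(cfu_per_ml) + intercept

print(round(predicted_detection_time(1e3), 2))  # ~5.7 h under this toy fit
```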
Additional Links: PMID-40319726
@article {pmid40319726,
year = {2025},
author = {Xu, W and Althumayri, M and Tarman, AY and Ceylan Koydemir, H},
title = {An integrated wearable fluorescence sensor for E. coli detection in catheter bags.},
journal = {Biosensors & bioelectronics},
volume = {283},
number = {},
pages = {117539},
doi = {10.1016/j.bios.2025.117539},
pmid = {40319726},
issn = {1873-4235},
abstract = {Urinary tract infections (UTIs), including catheter-associated UTIs (CAUTIs), affect millions worldwide. Traditional diagnostic methods, like urinalysis and urine culture, have limitations-urinalysis is fast but lacks sensitivity, while urine culture is accurate but takes up to two days. Here, we present an integrated wearable fluorescence sensor to detect UTI-related bacterial infections early at the point of care by on-body monitoring. The sensor features a hardware platform with a flexible PCB that attaches to a urine catheter bag, emitting excitation light and detecting emission light of E. coli-specific enzymatic reaction for continuous monitoring. Our custom-developed smartphone application allows remote control and data transfer via Bluetooth and performs in situ data analysis without cloud computing. The performance of the device was demonstrated by detecting E. coli at concentrations of 10[0]-10[5] CFU/mL within 9 to 3.5 h, respectively, with high sensitivity and by testing the specificity using Gram-positive (i.e., Staphylococcus epidermidis) and Gram-negative (i.e., Pseudomonas aeruginosa and Klebsiella pneumoniae) pathogens. An in vitro bladder model testing was performed using E.coli-spiked human urine samples to further evaluate the device's practicality. This portable, cost-effective device has the potential to transform the clinical practice of UTI diagnosis with automated and rapid bacterial detection at the point of care.},
}
RevDate: 2025-05-03
Transforming Military Healthcare Education and Training: AI Integration for Future Readiness.
Military medicine pii:8124498 [Epub ahead of print].
INTRODUCTION: Artificial intelligence (AI) technologies have spread throughout the world and changed the way many social functions are conducted, including health care. Future large-scale combat missions will likely require health care professionals to utilize AI tools, among other tools, in providing care for the Warfighter. Despite the need for an AI-capable health care force, medical education lacks integration of medical AI knowledge. The purpose of this manuscript was to review ways that military health care education can be improved through an understanding and use of AI technologies.
MATERIALS AND METHODS: This article is a review of the literature regarding the integration of AI technologies in medicine and medical education. We provide examples of quotes and images from a larger USU study on a Faculty Development program centered on learning about AI technologies in health care education. The study is not complete and is not the focus of this article, but it was approved by the USU IRB.
RESULTS: Effective integration of AI technologies in military health care education requires military health care educators who are willing to learn how to use AI technologies safely, effectively, and ethically in their own administrative, educational, research, and clinical roles. Together with health care trainees, these faculty can help build and co-create AI-integrated curricula that will accelerate and enhance the military health care curriculum of tomorrow. Trainees can begin to use generative AI tools, such as large language models, to develop their skills and practice the art of generating high-quality AI tools that will improve their studies and prepare them to improve military health care. Integration of AI technologies in the military health care environment requires close military-industry collaboration with AI and security experts to ensure the security of personal and health care information. Through secure cloud computing, blockchain technologies, and Application Programming Interfaces, among other technologies, military health care facilities and systems can safely integrate AI technologies to enhance patient care, clinical research, and health care education.
CONCLUSIONS: AI technologies are not a dream of the future; they are here, and they are being integrated and implemented in military health care systems. To best prepare the military health care professionals of the future for the reality of medical AI, we must reform military health care education through a combined effort of faculty, students, and industry partners.
Additional Links: PMID-40317230
@article {pmid40317230,
year = {2025},
author = {Peacock, JG and Cole, R and Duncan, J and Jensen, B and Snively, B and Samuel, A},
title = {Transforming Military Healthcare Education and Training: AI Integration for Future Readiness.},
journal = {Military medicine},
volume = {},
number = {},
pages = {},
doi = {10.1093/milmed/usaf169},
pmid = {40317230},
issn = {1930-613X},
abstract = {INTRODUCTION: Artificial intelligence (AI) technologies have spread throughout the world and changed the way that many social functions are conducted, including health care. Future large-scale combat missions will likely require health care professionals to utilize AI tools among other tools in providing care for the Warfighter. Despite the need for an AI-capable health care force, medical education lacks an integration of medical AI knowledge. The purpose of this manuscript was to review ways that military health care education can be improved with an understanding of and using AI technologies.
MATERIALS AND METHODS: This article is a review of the literature regarding the integration of AI technologies in medicine and medical education. We do provide examples of quotes and images from a larger USU study on a Faculty Development program centered on learning about AI technologies in health care education. The study is not complete and is not the focus of this article, but was approved by the USU IRB.
RESULTS: Effective integration of AI technologies in military health care education requires military health care educators that are willing to learn how to safely, effectively, and ethically use AI technologies in their own administrative, educational, research, and clinical roles. Together with health care trainees, these faculties can help to build and co-create AI-integrated curricula that will accelerate and enhance the military health care curriculum of tomorrow. Trainees can begin to use generative AI tools, like large language models, to begin to develop their skills and practice the art of generating high-quality AI tools that will improve their studies and prepare them to improve military health care. Integration of AI technologies in the military health care environment requires close military-industry collaborations with AI and security experts to ensure personal and health care information security. Through secure cloud computing, blockchain technologies, and Application Programming Interfaces, among other technologies, military health care facilities and systems can safely integrate AI technologies to enhance patient care, clinical research, and health care education.
CONCLUSIONS: AI technologies are not a dream of the future, they are here, and they are being integrated and implemented in military health care systems. To best prepare the military health care professionals of the future for the reality of medical AI, we must reform military health care education through a combined effort of faculty, students, and industry partners.},
}
RevDate: 2025-05-02
General 3D Vision-Language Model with Fast Rendering and Pre-training Vision-Language Alignment.
IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].
Deep neural network models have achieved remarkable progress in 3D scene understanding when trained in the closed-set setting with full labels. However, the major bottleneck of current 3D recognition approaches is that these models cannot recognize unseen novel classes beyond the training categories in diverse real-world applications. Meanwhile, current state-of-the-art 3D scene understanding approaches primarily require a large number of high-quality labels to train neural networks and thus perform well only in a fully supervised manner. A framework is therefore urgently needed that can be applied to both 3D point cloud segmentation and detection, particularly in circumstances where labels are scarce. This work presents a generalized and straightforward framework for 3D scene understanding when the labeled scenes are quite limited. To extract knowledge for novel categories from pre-trained vision-language models, we propose a hierarchical feature-aligned pre-training and knowledge distillation strategy that extracts and distills meaningful information from large-scale vision-language models, which benefits open-vocabulary scene understanding tasks. To leverage boundary information, we propose a novel energy-based loss with boundary awareness that benefits from region-level boundary predictions. To encourage latent instance discrimination and guarantee efficiency, we propose an unsupervised region-level semantic contrastive learning scheme for point clouds, using confident predictions of the neural network to discriminate intermediate feature embeddings at multiple stages. In the limited-reconstruction case, our proposed approach, termed WS3D++, ranks 1st on the large-scale ScanNet benchmark on both semantic segmentation and instance segmentation. WS3D++ also achieves state-of-the-art data-efficient learning performance on the other large-scale real-scene indoor and outdoor datasets, S3DIS and SemanticKITTI. Extensive experiments with both indoor and outdoor scenes demonstrate the effectiveness of our approach in both data-efficient learning and open-world few-shot learning. All code, models, and data are to be made publicly available at https://github.com/KangchengLiu, with code also at https://drive.google.com/drive/folders/1M58V-PtR8DBEwD296zJkNg_m2qq-MTAP.
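As an illustrative sketch of region-level contrastive learning of the kind the abstract mentions (a generic InfoNCE-style loss over paired region embeddings, not the WS3D++ code):

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(point_regions: torch.Tensor,
                            paired_regions: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss aligning N point-cloud region embeddings with N
    paired embeddings from another source (rows correspond).

    Both inputs have shape (N, D); matching rows are positives and all
    other rows in the batch act as negatives.
    """
    a = F.normalize(point_regions, dim=1)
    b = F.normalize(paired_regions, dim=1)
    logits = a @ b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric loss: point->paired and paired->point directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random embeddings:
loss = region_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```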
Additional Links: PMID-40315072
@article {pmid40315072,
year = {2025},
author = {Liu, K and Liu, YJ and Chen, B},
title = {General 3D Vision-Language Model with Fast Rendering and Pre-training Vision-Language Alignment.},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TPAMI.2025.3566593},
pmid = {40315072},
issn = {1939-3539},
abstract = {Deep neural network models have achieved remarkable progress in 3D scene understanding while trained in the closed-set setting and with full labels. However, the major bottleneck for the current 3D recognition approach is that these models do not have the capacity to recognize any unseen novel classes beyond the training categories in diverse real-world applications. In the meantime, current state-of-the-art 3D scene understanding approaches primarily require a large number of high-quality labels to train neural networks, which merely perform well in a fully supervised manner. Therefore, we are in urgent need of a framework that can simultaneously be applicable to both 3D point cloud segmentation and detection, particularly in the circumstances where the labels are rather scarce. This work presents a generalized and straightforward framework for dealing with 3D scene understanding when the labeled scenes are quite limited. To extract knowledge for novel categories from the pre-trained vision-language models, we propose a hierarchical feature-aligned pre-training and knowledge distillation strategy to extract and distill meaningful information from large-scale vision-language models, which helps benefit the open-vocabulary scene understanding tasks. To leverage the boundary information, we propose a novel energy-based loss with boundary awareness benefiting from the region-level boundary predictions. To encourage latent instance discrimination and to guarantee efficiency, we propose the unsupervised region-level semantic contrastive learning scheme for point clouds, using confident predictions of the neural network to discriminate the intermediate feature embeddings at multiple stages. In the limited reconstruction case, our proposed approach, termed WS3D++, ranks 1st on the large-scale ScanNet benchmark on both the task of semantic segmentation and instance segmentation. Also, our proposed WS3D++ achieves state-of-the-art data-efficient learning performance on the other large-scale real-scene indoor and outdoor datasets S3DIS and SemanticKITTI. Extensive experiments with both indoor and outdoor scenes demonstrated the effectiveness of our approach in both data-efficient learning and open-world few-shot learning. All codes, models, and data are to made publicly available at: https://github.com/KangchengLiu. The code is at: https://drive.google.com/drive/folders/1M58V-PtR8DBEwD296zJkNg_m2qq-MTAP Code link.},
}
RevDate: 2025-05-02
HUNHODRL: Energy efficient resource distribution in a cloud environment using hybrid optimized deep reinforcement model with HunterPlus scheduler.
Network (Bristol, England) [Epub ahead of print].
Resource optimization and workload balancing in cloud computing environments necessitate efficient management of resources to minimize energy wastage and SLA (Service Level Agreement) violations. Existing scheduling techniques often struggle with dynamic resource allocation, leading to inefficient job completion rates and container utilization. This work therefore proposes HUNHODRL, a deep reinforcement learning (DRL)-based framework that aims to improve container orchestration and workload allocation. The framework was evaluated comparatively against the HUNDRL, Bi-GGCN, and CNN methods under two sets of workloads, with datasets covering CPU, memory, and disk I/O utilization metrics. HUNHODRL optimizes scheduling choices through a combination of a destination host capacity vector and an active job utilization matrix. The experimental results show that HUNHODRL outperforms existing models in container creation rate, job completion rate, SLA violation reduction, and energy efficiency. It increases container creation efficiency without increasing the energy costs of VM deployments. The method dynamically adapts its scheduling strategy to optimize performance amid varying workloads, demonstrating its scalability and robustness. A comparative analysis shows higher job completion rates than CNN, Bi-GGCN, and HUNDRL, establishing the potential of DRL-based resource allocation. The significant gains in cloud resource utilization and energy-efficient task execution make HUNHODRL a suitable solution for next-generation cloud computing infrastructure.
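A hedged sketch of the scheduling decision the abstract describes: a state built from a host capacity vector plus a job utilization descriptor, scored by a small Q-network that picks a host. This is a generic DRL placement sketch with assumed dimensions, not the HUNHODRL model:

```python
import torch
import torch.nn as nn

class HostScorer(nn.Module):
    """Q-network scoring each candidate host for the next container."""

    def __init__(self, num_hosts: int, job_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_hosts + job_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_hosts),   # one Q-value per host
        )

    def forward(self, host_capacity: torch.Tensor,
                job_utilization: torch.Tensor) -> torch.Tensor:
        state = torch.cat([host_capacity, job_utilization], dim=-1)
        return self.net(state)

def pick_host(model: HostScorer, host_capacity, job_utilization,
              epsilon: float = 0.1) -> int:
    """Epsilon-greedy host selection for one scheduling step."""
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(host_capacity.numel(), (1,)).item())
    with torch.no_grad():
        q = model(host_capacity, job_utilization)
    return int(q.argmax().item())

# Example: 4 hosts (remaining capacity share) and a 3-feature job descriptor.
model = HostScorer(num_hosts=4, job_features=3)
host = pick_host(model, torch.tensor([0.6, 0.2, 0.9, 0.4]),
                 torch.tensor([0.3, 0.1, 0.2]))
```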
Additional Links: PMID-40126006
@article {pmid40126006,
year = {2025},
author = {Chellamuthu, S and Ramanathan, K and Arivanandhan, R},
title = {HUNHODRL: Energy efficient resource distribution in a cloud environment using hybrid optimized deep reinforcement model with HunterPlus scheduler.},
journal = {Network (Bristol, England)},
volume = {},
number = {},
pages = {1-26},
doi = {10.1080/0954898X.2025.2480294},
pmid = {40126006},
issn = {1361-6536},
abstract = {Resource optimization and workload balancing in cloud computing environments necessitate efficient management of resources to minimize energy wastage and SLA (Service Level Agreement) violations. The existing scheduling techniques often face challenges with dynamic resource allocations and lead to inefficient job completion rates and container utilizations. Hence, this framework has been proposed to establish HUNHODRL, a newly-minted DRL-based framework that aims to improve container orchestration and workload allocation. The evaluation of this framework was done against HUNDRL, Bi-GGCN, and CNN methods comparatively under two sets of workloads with datasets on CPU, Memory, and Disk I/O utilization metrics. The model optimizes scheduling choices in HUNHODRL through a combination of destination host capacity vector and active job utilization matrix. The experimental results show that HUNHODRL outperforms existing models in container creation rate, job completion rate, SLA violation reduction, and energy efficiency. It facilitates increased container creation efficiency without increasing the energy costs of VM deployments. This method dynamically adapts itself and modifies the scheduling strategy to optimize performance amid varying workloads, thus establishing its scalability and robustness. A comparative analysis has demonstrated higher job completion rates against CNN, Bi-GGCNN, and HUNDRL, establishing the potential of DRL-based resource allocation. The significant gain in cloud resource utilization and energy-efficient task execution makes HUNHODRL and its suitable solution for next-generation cloud computing infrastructure.},
}
RevDate: 2025-05-01
Novel load balancing mechanism for cloud networks using dilated and attention-based federated learning with Coati Optimization.
Scientific reports, 15(1):15268.
Load balancing (LB) is a critical aspect of Cloud Computing (CC), enabling efficient access to virtualized resources over the internet. It ensures optimal resource utilization and smooth system operation by distributing workloads across multiple servers, preventing any server from being overburdened or underutilized. This process enhances system reliability, resource efficiency, and overall performance. As cloud computing expands, effective resource management becomes increasingly important, particularly in distributed environments. This study proposes a novel approach to resource prediction for cloud network load balancing, incorporating federated learning within a blockchain framework for secure and distributed management. The model leverages Dilated and Attention-based 1-Dimensional Convolutional Neural Networks with bidirectional long short-term memory (DA-DBL) to predict resource needs based on factors such as processing time, reaction time, and resource availability. The integration of the Random Opposition Coati Optimization Algorithm (RO-COA) enables flexible and efficient load distribution in response to real-time network changes. The proposed method is evaluated on various metrics, including active servers, makespan, Quality of Service (QoS), resource utilization, and power consumption, outperforming existing approaches. The results demonstrate that the combination of federated learning and the RO-COA-based load balancing method offers a robust solution for enhancing cloud resource management.
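A minimal sketch of a dilated 1-D CNN plus bidirectional LSTM forecaster of the kind the abstract names for resource prediction (layer sizes are assumptions and the attention block is omitted for brevity; this is not the authors' DA-DBL model):

```python
import torch
import torch.nn as nn

class DilatedConvBiLSTM(nn.Module):
    """Dilated 1-D CNN followed by a bidirectional LSTM for predicting
    the next resource-demand value from a window of past utilization."""

    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_features, hidden, kernel_size=3,
                      dilation=2, padding=2),   # dilated temporal conv
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> Conv1d wants (batch, features, time)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])            # predict from last time step

# Example: batch of 8 windows, 24 time steps, 3 metrics
# (e.g., processing time, reaction time, resource availability).
model = DilatedConvBiLSTM(in_features=3)
pred = model(torch.randn(8, 24, 3))             # -> (8, 1)
```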
Additional Links: PMID-40312585
@article {pmid40312585,
year = {2025},
author = {Kathole, AB and Singh, VK and Goyal, A and Kant, S and Savyanavar, AS and Ubale, SA and Jain, P and Islam, MT},
title = {Novel load balancing mechanism for cloud networks using dilated and attention-based federated learning with Coati Optimization.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {15268},
pmid = {40312585},
issn = {2045-2322},
support = {DPK-2022-006//Dana Padanan Kolaborasi (DPK)/ ; },
abstract = {Load balancing (LB) is a critical aspect of Cloud Computing (CC), enabling efficient access to virtualized resources over the internet. It ensures optimal resource utilization and smooth system operation by distributing workloads across multiple servers, preventing any server from being overburdened or underutilized. This process enhances system reliability, resource efficiency, and overall performance. As cloud computing expands, effective resource management becomes increasingly important, particularly in distributed environments. This study proposes a novel approach to resource prediction for cloud network load balancing, incorporating federated learning within a blockchain framework for secure and distributed management. The model leverages Dilated and Attention-based 1-Dimensional Convolutional Neural Networks with bidirectional long short-term memory (DA-DBL) to predict resource needs based on factors such as processing time, reaction time, and resource availability. The integration of the Random Opposition Coati Optimization Algorithm (RO-COA) enables flexible and efficient load distribution in response to real-time network changes. The proposed method is evaluated on various metrics, including active servers, makespan, Quality of Service (QoS), resource utilization, and power consumption, outperforming existing approaches. The results demonstrate that the combination of federated learning and the RO-COA-based load balancing method offers a robust solution for enhancing cloud resource management.},
}
RevDate: 2025-04-29
Performance and energy optimization of ternary optical computers based on tandem queuing system.
Scientific reports, 15(1):15037.
As an emerging computer technology with numerous bits, bit-wise allocation, and extensive parallelism, the ternary optical computer (TOC) will play an important role in platforms such as cloud computing and big data. Previous studies on TOC in handling computational request tasks have mainly focused on performance enhancement while ignoring the impact of performance enhancement on power consumption. The main objective of this study is to investigate the optimization trade-off between performance and energy consumption in TOC systems. To this end, the service model of the TOC is constructed by introducing the M/M/1 and M/M/c models in queuing theory, combined with the framework of the tandem queueing system, and the optimization problem is studied by adjusting the processor partitioning strategy and the number of small TOC (STOC) in the service process. The results show that the value of increasing active STOCs is prominent when system performance significantly depends on response time. However, marginal gains decrease as the number of STOCs grows, accompanied by rising energy costs. Based on these findings, this paper constructs a bi-objective optimization model using response time and energy consumption. It proposes an optimization strategy to achieve bi-objective optimization of performance and energy consumption for TOC by identifying the optimal partitioning strategy and the number of active small optical processors for different load conditions.
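The trade-off the abstract describes, where adding active STOCs improves response time but raises energy cost, can be made concrete with the standard M/M/c mean-response-time formula (Erlang C). The sketch below uses assumed arrival/service rates and a hypothetical per-STOC power figure, not the paper's calibrated model:

```python
from math import factorial

def mmc_response_time(lam: float, mu: float, c: int) -> float:
    """Mean response time W of an M/M/c queue (Erlang C formula).

    lam: arrival rate, mu: service rate per server, c: number of servers.
    """
    rho = lam / (c * mu)
    if rho >= 1:
        return float("inf")                       # unstable system
    a = lam / mu
    waiting_term = a**c / (factorial(c) * (1 - rho))
    p_wait = waiting_term / (sum(a**k / factorial(k) for k in range(c))
                             + waiting_term)
    return p_wait / (c * mu - lam) + 1 / mu

# Toy trade-off: more active STOCs cut response time but add power.
LAM, MU, POWER_PER_STOC_W = 8.0, 3.0, 50.0        # assumed values
for c in range(3, 7):
    w = mmc_response_time(LAM, MU, c)
    print(f"STOCs={c}  W={w:.3f}  power={c * POWER_PER_STOC_W:.0f} W")
```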
Additional Links: PMID-40301430
@article {pmid40301430,
year = {2025},
author = {Zhang, H and Liu, M and Liu, W and Shi, W and Li, S and Zhang, J and Wang, X},
title = {Performance and energy optimization of ternary optical computers based on tandem queuing system.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {15037},
pmid = {40301430},
issn = {2045-2322},
abstract = {As an emerging computer technology with numerous bits, bit-wise allocation, and extensive parallelism, the ternary optical computer (TOC) will play an important role in platforms such as cloud computing and big data. Previous studies on TOC in handling computational request tasks have mainly focused on performance enhancement while ignoring the impact of performance enhancement on power consumption. The main objective of this study is to investigate the optimization trade-off between performance and energy consumption in TOC systems. To this end, the service model of the TOC is constructed by introducing the M/M/1 and M/M/c models in queuing theory, combined with the framework of the tandem queueing system, and the optimization problem is studied by adjusting the processor partitioning strategy and the number of small TOC (STOC) in the service process. The results show that the value of increasing active STOCs is prominent when system performance significantly depends on response time. However, marginal gains decrease as the number of STOCs grows, accompanied by rising energy costs. Based on these findings, this paper constructs a bi-objective optimization model using response time and energy consumption. It proposes an optimization strategy to achieve bi-objective optimization of performance and energy consumption for TOC by identifying the optimal partitioning strategy and the number of active small optical processors for different load conditions.},
}
RevDate: 2025-04-29
CmpDate: 2025-04-30
Spatiotemporal dataset of dengue influencing factors in Brazil based on geospatial big data cloud computing.
Scientific data, 12(1):712.
Dengue fever has been spreading rapidly worldwide, with a notably high prevalence in South American countries such as Brazil. Its transmission dynamics are governed by vector population dynamics and the interactions among humans, vectors, and pathogens, which are further shaped by environmental factors. Calculating these environmental indicators is challenging due to the limited spatial coverage of weather station observations and the time-consuming processes involved in downloading and processing local data, such as satellite imagery. This issue is exacerbated in large-scale studies, making it difficult to develop comprehensive and publicly accessible datasets of disease-influencing factors. Addressing this challenge necessitates efficient data integration methods and the assembly of multi-factorial datasets to aid public health authorities in understanding dengue transmission mechanisms and improving risk prediction models. In response, we developed a population-weighted dataset of 12 dengue risk factors, covering 558 microregions in Brazil over 1252 epidemiological weeks from 2001 to 2024. This dataset and the associated methodology streamline data processing for researchers and can be adapted for other vector-borne disease studies.
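A minimal sketch of the population-weighted aggregation such a dataset implies: averaging a gridded environmental variable over a region's pixels, weighted by a gridded population layer. This is an array-based illustration with assumed inputs, not the authors' cloud pipeline:

```python
import numpy as np

def population_weighted_mean(env: np.ndarray,
                             population: np.ndarray,
                             region_mask: np.ndarray) -> float:
    """Population-weighted mean of an environmental grid over one region.

    env, population: 2-D grids on the same raster (e.g., weekly mean
    temperature and gridded population counts).
    region_mask: boolean grid, True for pixels inside the microregion.
    """
    w = population[region_mask].astype(float)
    x = env[region_mask].astype(float)
    return float(np.average(x, weights=w)) if w.sum() > 0 else float("nan")

# Toy example on a 4x4 raster with a 2x2 region in the corner.
env = np.arange(16, dtype=float).reshape(4, 4)
pop = np.ones((4, 4)); pop[0, 0] = 3.0
mask = np.zeros((4, 4), bool); mask[:2, :2] = True
print(population_weighted_mean(env, pop, mask))
```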
Additional Links: PMID-40301332
@article {pmid40301332,
year = {2025},
author = {Zhu, Q and Li, Z and Dong, J and Fu, P and Cheng, Q and Cai, J and Gurgel, H and Yang, L},
title = {Spatiotemporal dataset of dengue influencing factors in Brazil based on geospatial big data cloud computing.},
journal = {Scientific data},
volume = {12},
number = {1},
pages = {712},
pmid = {40301332},
issn = {2052-4463},
mesh = {Brazil/epidemiology ; *Dengue/epidemiology/transmission ; Humans ; *Big Data ; *Cloud Computing ; Spatio-Temporal Analysis ; Risk Factors ; Animals ; },
abstract = {Dengue fever has been spreading rapidly worldwide, with a notably high prevalence in South American countries such as Brazil. Its transmission dynamics are governed by the vector population dynamics and the interactions among humans, vectors, and pathogens, which are further shaped by environmental factors. Calculating these environmental indicators is challenging due to the limited spatial coverage of weather station observations and the time-consuming processes involved in downloading and processing local data, such as satellite imagery. This issue is exacerbated in large-scale studies, making it difficult to develop comprehensive and publicly accessible datasets of disease-influencing factors. Addressing this challenge necessitates the efficient data integration methods and the assembly of multi-factorial datasets to aid public health authorities in understanding dengue transmission mechanisms and improving risk prediction models. In response, we developed a population-weighted dataset of 12 dengue risk factors, covering 558 microregions in Brazil over 1252 epidemiological weeks from 2001 to 2024. This dataset and the associated methodology streamline data processing for researchers and can be adapted for other vector-borne disease studies.},
}
MeSH Terms:
Brazil/epidemiology
*Dengue/epidemiology/transmission
Humans
*Big Data
*Cloud Computing
Spatio-Temporal Analysis
Risk Factors
Animals
RevDate: 2025-04-30
Privacy-Preserving Multi-User Graph Intersection Scheme for Wireless Communications in Cloud-Assisted Internet of Things.
Sensors (Basel, Switzerland), 25(6):.
Cloud-assisted Internet of Things (IoT) has become core infrastructure for the smart society because it resolves the computational power, storage, and collaboration bottlenecks of traditional IoT through resource decoupling and capability complementarity. The development of graph databases and cloud-assisted IoT has promoted research on privacy-preserving graph computation. In this article, we propose a secure graph intersection scheme that supports multi-user intersection queries in cloud-assisted IoT. Existing work on graph encryption for intersection queries is designed for a single user, which imposes high computational and communication costs on data owners, or risks leaking the secret key if applied directly to multi-user scenarios. To solve these problems, we employ proxy re-encryption (PRE), which transforms the encrypted graph data with a re-encryption key so that the graph intersection results can be decrypted by an authorized IoT user with their own private key, while data owners encrypt their graph data on IoT devices only once. In our scheme, different IoT users can flexibly query for the intersection of graphs, while data owners do not need to perform encryption operations every time an IoT user makes a query. Theoretical analysis and simulation results demonstrate that the proposed graph intersection scheme is secure and practical.
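The scheme above protects a simple underlying functionality: intersecting the edge sets of graphs held by different owners. The sketch below shows only that plaintext functionality, with all cryptography (proxy re-encryption, re-encryption keys) deliberately omitted; node names are hypothetical.

# Plaintext view of the protected functionality: intersection of two graphs' edge sets.
# In the paper's scheme the edges would be encrypted by their owners and transformed
# by the cloud with re-encryption keys; none of that is shown here.

def normalize(edge):
    """Treat edges as undirected: (u, v) and (v, u) are the same edge."""
    u, v = edge
    return (u, v) if u <= v else (v, u)

def graph_intersection(edges_a, edges_b):
    return {normalize(e) for e in edges_a} & {normalize(e) for e in edges_b}

g1 = [("sensor1", "gw1"), ("gw1", "cloud"), ("sensor2", "gw1")]
g2 = [("gw1", "sensor1"), ("gw1", "cloud"), ("sensor3", "gw2")]
print(graph_intersection(g1, g2))   # edges common to both graphs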
Additional Links: PMID-40292999
@article {pmid40292999,
year = {2025},
author = {Yang, S},
title = {Privacy-Preserving Multi-User Graph Intersection Scheme for Wireless Communications in Cloud-Assisted Internet of Things.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {6},
pages = {},
pmid = {40292999},
issn = {1424-8220},
abstract = {Cloud-assisted Internet of Things (IoT) has become the core infrastructure of smart society since it solves the computational power, storage, and collaboration bottlenecks of traditional IoT through resource decoupling and capability complementarity. The development of a graph database and cloud-assisted IoT promotes the research of privacy preserving graph computation. We propose a secure graph intersection scheme that supports multi-user intersection queries in cloud-assisted IoT in this article. The existing work on graph encryption for intersection queries is designed for a single user, which will bring high computational and communication costs for data owners, or cause the risk of secret key leaking if directly applied to multi-user scenarios. To solve these problems, we employ the proxy re-encryption (PRE) that transforms the encrypted graph data with a re-encryption key to enable the graph intersection results to be decrypted by an authorized IoT user using their own private key, while data owners only encrypt their graph data on IoT devices once. In our scheme, different IoT users can query for the intersection of graphs flexibly, while data owners do not need to perform encryption operations every time an IoT user makes a query. Theoretical analysis and simulation results demonstrate that the graph intersection scheme in this paper is secure and practical.},
}
RevDate: 2025-04-30
From Sensors to Data Intelligence: Leveraging IoT, Cloud, and Edge Computing with AI.
Sensors (Basel, Switzerland), 25(6):.
The exponential growth of connected devices and sensor networks has revolutionized data collection and monitoring across industries, from healthcare to smart cities. However, the true value of these systems lies not merely in gathering data but in transforming it into actionable intelligence. The integration of IoT, cloud computing, edge computing, and AI offers a robust pathway to achieve this transformation, enabling real-time decision-making and predictive insights. This paper explores innovative approaches to combine these technologies, emphasizing their role in enabling real-time decision-making, predictive analytics, and low-latency data processing. This work analyzes several integration approaches among IoT, cloud/edge computing, and AI through examples and applications, highlighting challenges and approaches to seamlessly integrate these techniques to achieve pervasive environmental intelligence. The findings contribute to advancing pervasive environmental intelligence, offering a roadmap for building smarter, more sustainable infrastructure.
Additional Links: PMID-40292910
@article {pmid40292910,
year = {2025},
author = {Ficili, I and Giacobbe, M and Tricomi, G and Puliafito, A},
title = {From Sensors to Data Intelligence: Leveraging IoT, Cloud, and Edge Computing with AI.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {6},
pages = {},
pmid = {40292910},
issn = {1424-8220},
abstract = {The exponential growth of connected devices and sensor networks has revolutionized data collection and monitoring across industries, from healthcare to smart cities. However, the true value of these systems lies not merely in gathering data but in transforming it into actionable intelligence. The integration of IoT, cloud computing, edge computing, and AI offers a robust pathway to achieve this transformation, enabling real-time decision-making and predictive insights. This paper explores innovative approaches to combine these technologies, emphasizing their role in enabling real-time decision-making, predictive analytics, and low-latency data processing. This work analyzes several integration approaches among IoT, cloud/edge computing, and AI through examples and applications, highlighting challenges and approaches to seamlessly integrate these techniques to achieve pervasive environmental intelligence. The findings contribute to advancing pervasive environmental intelligence, offering a roadmap for building smarter, more sustainable infrastructure.},
}
RevDate: 2025-04-30
CmpDate: 2025-04-28
Real-Time Acoustic Scene Recognition for Elderly Daily Routines Using Edge-Based Deep Learning.
Sensors (Basel, Switzerland), 25(6):.
The demand for intelligent monitoring systems tailored to elderly living environments is rapidly increasing worldwide with population aging. Traditional acoustic scene monitoring systems that rely on cloud computing are limited by data transmission delays and privacy concerns. Hence, this study proposes an acoustic scene recognition system that integrates edge computing with deep learning to enable real-time monitoring of elderly individuals' daily activities. The system consists of low-power edge devices equipped with multiple microphones, portable wearable components, and compact power modules, ensuring its seamless integration into the daily lives of the elderly. We developed four deep learning models-convolutional neural network, long short-term memory, bidirectional long short-term memory, and deep neural network-and used model quantization techniques to reduce the computational complexity and memory usage, thereby optimizing them to meet edge device constraints. The CNN model demonstrated superior performance compared to the other models, achieving 98.5% accuracy, an inference time of 2.4 ms, and low memory requirements (25.63 KB allocated for Flash and 5.15 KB for RAM). This architecture provides an efficient, reliable, and user-friendly solution for real-time acoustic scene monitoring in elderly care.
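The deployment described above depends on model quantization to fit microcontroller-class flash and RAM budgets. A generic sketch of that workflow (not the authors' networks or data): a tiny Keras CNN over spectrogram-like input is converted with TensorFlow Lite post-training quantization; the input shape and class count are made up.

import tensorflow as tf

# Tiny 2-D CNN over log-mel spectrogram patches (hypothetical 40x101x1 input, 6 scenes).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40, 101, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Post-training dynamic-range quantization to shrink the model for an MCU-class device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
print(f"quantized model size: {len(tflite_bytes) / 1024:.1f} KiB")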
Additional Links: PMID-40292891
@article {pmid40292891,
year = {2025},
author = {Yang, H and Dong, R and Guo, R and Che, Y and Xie, X and Yang, J and Zhang, J},
title = {Real-Time Acoustic Scene Recognition for Elderly Daily Routines Using Edge-Based Deep Learning.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {6},
pages = {},
pmid = {40292891},
issn = {1424-8220},
support = {202301BD070001-114//Yunnan Agricultural University/ ; 2024-55 and 2021YLKC126//Yunnan Agricultural University/ ; },
mesh = {Humans ; *Deep Learning ; Aged ; *Acoustics ; Neural Networks, Computer ; Activities of Daily Living ; Wearable Electronic Devices ; },
abstract = {The demand for intelligent monitoring systems tailored to elderly living environments is rapidly increasing worldwide with population aging. Traditional acoustic scene monitoring systems that rely on cloud computing are limited by data transmission delays and privacy concerns. Hence, this study proposes an acoustic scene recognition system that integrates edge computing with deep learning to enable real-time monitoring of elderly individuals' daily activities. The system consists of low-power edge devices equipped with multiple microphones, portable wearable components, and compact power modules, ensuring its seamless integration into the daily lives of the elderly. We developed four deep learning models-convolutional neural network, long short-term memory, bidirectional long short-term memory, and deep neural network-and used model quantization techniques to reduce the computational complexity and memory usage, thereby optimizing them to meet edge device constraints. The CNN model demonstrated superior performance compared to the other models, achieving 98.5% accuracy, an inference time of 2.4 ms, and low memory requirements (25.63 KB allocated for Flash and 5.15 KB for RAM). This architecture provides an efficient, reliable, and user-friendly solution for real-time acoustic scene monitoring in elderly care.},
}
MeSH Terms:
Humans
*Deep Learning
Aged
*Acoustics
Neural Networks, Computer
Activities of Daily Living
Wearable Electronic Devices
RevDate: 2025-04-30
Application of Cloud Simulation Techniques for Robotic Software Validation.
Sensors (Basel, Switzerland), 25(6):.
Continuous Integration and Continuous Deployment are known methodologies for software development that increase the overall quality of the development process. Several robotic software repositories make use of CI/CD tools as an aid to development. However, very few CI pipelines take advantage of using cloud computing to run simulations. Here, a CI pipeline is proposed that takes advantage of such features, applied to the development of ATOM, a ROS-based application capable of carrying out the calibration of generalized robotic systems. The proposed pipeline uses GitHub Actions as a CI/CD engine, AWS RoboMaker as a service for running simulations on the cloud and Rigel as a tool to both containerize ATOM and execute the tests. In addition, a static analysis and unit testing component is implemented with the use of Codacy. The creation of the pipeline was successful, and it was concluded that it constitutes a valuable tool for the development of ATOM and a blueprint for the creation of similar pipelines for other robotic systems.
Additional Links: PMID-40292792
@article {pmid40292792,
year = {2025},
author = {Vieira, D and Oliveira, M and Arrais, R and Melo, P},
title = {Application of Cloud Simulation Techniques for Robotic Software Validation.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {6},
pages = {},
pmid = {40292792},
issn = {1424-8220},
support = {00127-IEETA//This work is funded by FCT (Foundation for Science and Technology) under unit 00127-IEETA./ ; 101120406.//This project has received funding from the European Union's Horizon Europe research and innovation programme under the Grant Agreement 101120406./ ; },
abstract = {Continuous Integration and Continuous Deployment are known methodologies for software development that increase the overall quality of the development process. Several robotic software repositories make use of CI/CD tools as an aid to development. However, very few CI pipelines take advantage of using cloud computing to run simulations. Here, a CI pipeline is proposed that takes advantage of such features, applied to the development of ATOM, a ROS-based application capable of carrying out the calibration of generalized robotic systems. The proposed pipeline uses GitHub Actions as a CI/CD engine, AWS RoboMaker as a service for running simulations on the cloud and Rigel as a tool to both containerize ATOM and execute the tests. In addition, a static analysis and unit testing component is implemented with the use of Codacy. The creation of the pipeline was successful, and it was concluded that it constitutes a valuable tool for the development of ATOM and a blueprint for the creation of similar pipelines for other robotic systems.},
}
RevDate: 2025-04-30
Design and Implementation of ESP32-Based Edge Computing for Object Detection.
Sensors (Basel, Switzerland), 25(6):.
This paper explores the application of the ESP32 microcontroller in edge computing, focusing on the design and implementation of an edge server system to evaluate performance improvements achieved by integrating edge and cloud computing. Responding to the growing need to reduce cloud burdens and latency, this research develops an edge server, detailing the ESP32 hardware architecture, software environment, communication protocols, and server framework. A complementary cloud server software framework is also designed to support edge processing. A deep learning model for object recognition is selected, trained, and deployed on the edge server. Performance evaluation metrics, classification time, MQTT (Message Queuing Telemetry Transport) transmission time, and data from various MQTT brokers are used to assess system performance, with particular attention to the impact of image size adjustments. Experimental results demonstrate that the edge server significantly reduces bandwidth usage and latency, effectively alleviating the load on the cloud server. This study discusses the system's strengths and limitations, interprets experimental findings, and suggests potential improvements and future applications. By integrating AI and IoT, the edge server design and object recognition system demonstrates the benefits of localized edge processing in enhancing efficiency and reducing cloud dependency.
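A central measurement in the study above is how image size affects MQTT transmission time from the edge to the broker. The sketch below resizes a frame with Pillow and times a single publish with the paho-mqtt client, assuming the paho-mqtt 2.x API; the broker address, topic, and file name are placeholders.

import io
import time
from PIL import Image
import paho.mqtt.client as mqtt   # assumes paho-mqtt >= 2.0

# Downscale a camera frame before publishing, mimicking the paper's study of how
# image size affects MQTT transmission time.
frame = Image.open("frame.jpg").convert("RGB").resize((160, 120))
buf = io.BytesIO()
frame.save(buf, format="JPEG", quality=80)
payload = buf.getvalue()

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.local", 1883)      # placeholder broker address
client.loop_start()

t0 = time.perf_counter()
info = client.publish("edge/objects/frame", payload, qos=1)
info.wait_for_publish(timeout=5.0)        # block until the broker acknowledges
print(f"{len(payload)} bytes published in {time.perf_counter() - t0:.3f} s")

client.loop_stop()
client.disconnect()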
Additional Links: PMID-40292726
@article {pmid40292726,
year = {2025},
author = {Chang, YH and Wu, FC and Lin, HW},
title = {Design and Implementation of ESP32-Based Edge Computing for Object Detection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {6},
pages = {},
pmid = {40292726},
issn = {1424-8220},
abstract = {This paper explores the application of the ESP32 microcontroller in edge computing, focusing on the design and implementation of an edge server system to evaluate performance improvements achieved by integrating edge and cloud computing. Responding to the growing need to reduce cloud burdens and latency, this research develops an edge server, detailing the ESP32 hardware architecture, software environment, communication protocols, and server framework. A complementary cloud server software framework is also designed to support edge processing. A deep learning model for object recognition is selected, trained, and deployed on the edge server. Performance evaluation metrics, classification time, MQTT (Message Queuing Telemetry Transport) transmission time, and data from various MQTT brokers are used to assess system performance, with particular attention to the impact of image size adjustments. Experimental results demonstrate that the edge server significantly reduces bandwidth usage and latency, effectively alleviating the load on the cloud server. This study discusses the system's strengths and limitations, interprets experimental findings, and suggests potential improvements and future applications. By integrating AI and IoT, the edge server design and object recognition system demonstrates the benefits of localized edge processing in enhancing efficiency and reducing cloud dependency.},
}
RevDate: 2025-04-30
Modified grey wolf optimization for energy-efficient internet of things task scheduling in fog computing.
Scientific reports, 15(1):14730.
Fog-cloud computing has emerged as a transformative paradigm for managing the growing demands of Internet of Things (IoT) applications, where efficient task scheduling is crucial for optimizing system performance. However, existing task scheduling methods often struggle to balance makespan minimization and energy efficiency in dynamic and resource-constrained fog-cloud environments. Addressing this gap, this paper introduces a novel Task Scheduling algorithm based on a modified Grey Wolf Optimization approach (TS-GWO), tailored specifically for IoT requests in fog-cloud systems. The proposed TS-GWO incorporates innovative operators to enhance exploration and exploitation capabilities, enabling the identification of optimal scheduling solutions. Extensive evaluations using both synthetic and real-world datasets, such as NASA Ames iPSC and HPC2N workloads, demonstrate the superior performance of TS-GWO over established metaheuristic methods. Notably, TS-GWO achieves improvements in makespan by up to 46.15% and reductions in energy consumption by up to 28.57%. These results highlight the potential of TS-GWO to effectively address task scheduling challenges in fog-cloud environments, paving the way for its application in broader optimization tasks.
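TS-GWO adds problem-specific operators that are not reproduced here; the sketch below is only a generic grey wolf optimizer searching task-to-node assignments for makespan, to show the basic alpha/beta/delta update the paper builds on. Task lengths and node speeds are invented.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical workload: task lengths (million instructions) and node speeds (MIPS).
task_len = rng.uniform(200, 2000, size=30)
node_speed = np.array([500.0, 750.0, 1000.0, 1500.0])
n_tasks, n_nodes = len(task_len), len(node_speed)

def makespan(position):
    """Decode a continuous wolf position into a task->node assignment and score it."""
    assign = np.clip(position.astype(int), 0, n_nodes - 1)
    finish = np.zeros(n_nodes)
    for t, n in enumerate(assign):
        finish[n] += task_len[t] / node_speed[n]
    return finish.max()

def grey_wolf_optimize(n_wolves=20, iters=200):
    wolves = rng.uniform(0, n_nodes, size=(n_wolves, n_tasks))
    for it in range(iters):
        fitness = np.array([makespan(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * it / iters                     # exploration factor decays to 0
        for i in range(n_wolves):
            new = np.zeros(n_tasks)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_tasks), rng.random(n_tasks)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, 0, n_nodes - 1e-9)
    best = min(wolves, key=makespan)
    return np.clip(best.astype(int), 0, n_nodes - 1), makespan(best)

assignment, span = grey_wolf_optimize()
print("best makespan (s):", round(span, 2))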
Additional Links: PMID-40289232
@article {pmid40289232,
year = {2025},
author = {Alsadie, D and Alsulami, M},
title = {Modified grey wolf optimization for energy-efficient internet of things task scheduling in fog computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {14730},
pmid = {40289232},
issn = {2045-2322},
abstract = {Fog-cloud computing has emerged as a transformative paradigm for managing the growing demands of Internet of Things (IoT) applications, where efficient task scheduling is crucial for optimizing system performance. However, existing task scheduling methods often struggle to balance makespan minimization and energy efficiency in dynamic and resource-constrained fog-cloud environments. Addressing this gap, this paper introduces a novel Task Scheduling algorithm based on a modified Grey Wolf Optimization approach (TS-GWO), tailored specifically for IoT requests in fog-cloud systems. The proposed TS-GWO incorporates innovative operators to enhance exploration and exploitation capabilities, enabling the identification of optimal scheduling solutions. Extensive evaluations using both synthetic and real-world datasets, such as NASA Ames iPSC and HPC2N workloads, demonstrate the superior performance of TS-GWO over established metaheuristic methods. Notably, TS-GWO achieves improvements in makespan by up to 46.15% and reductions in energy consumption by up to 28.57%. These results highlight the potential of TS-GWO to effectively address task scheduling challenges in fog-cloud environments, paving the way for its application in broader optimization tasks.},
}
RevDate: 2025-04-28
CmpDate: 2025-04-26
MODIS-Based Spatiotemporal Inversion and Driving-Factor Analysis of Cloud-Free Vegetation Cover in Xinjiang from 2000 to 2024.
Sensors (Basel, Switzerland), 25(8):.
The Xinjiang Uygur Autonomous Region, characterized by its complex and fragile ecosystems, has faced ongoing ecological degradation in recent years, challenging national ecological security and sustainable development. To promote the sustainable development of regional ecological and landscape conservation, this study investigates Fractional Vegetation Cover (FVC) dynamics in Xinjiang. Existing studies often lack recent data and exhibit limitations in the selection of driving factors. To mitigate the issues, this study utilized Google Earth Engine (GEE) and cloud-free MOD13A2.061 data to systematically generate comprehensive FVC products for Xinjiang from 2000 to 2024. Additionally, a comprehensive and quantitative analysis of up to 15 potential driving factors was conducted, providing an updated and more robust understanding of vegetation dynamics in the region. This study integrated advanced methodologies, including spatiotemporal statistical analysis, optimized spatial scaling, trend analysis, and Geographical Detector (GeoDetector). Notably, we propose a novel approach combining a Theil-Sen Median trend analysis with a Hurst index to predict future vegetation trends, which to some extent enhances the persuasiveness of the Hurst index alone. The following are the key experimental results: (1) Over the 25-year study period, Xinjiang's vegetation cover exhibited a pronounced north-south gradient, with significantly higher FVC in the northern regions compared to the southern regions. (2) A time series analysis revealed an overall fluctuating upward trend in the FVC, accompanied by increasing volatility and decreasing stability over time. (3) Identification of 15 km as the optimal spatial scale for FVC analysis through spatial statistical analysis using Moran's I and the coefficient of variation. (4) Land use type, vegetation type, and soil type emerged as critical factors, with each contributing over 20% to the explanatory power of FVC variations. (5) To elucidate spatial heterogeneity mechanisms, this study conducted ecological subzone-based analyses of vegetation dynamics and drivers.
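The trend methodology above pairs a Theil-Sen median slope with a Hurst exponent to judge whether an observed vegetation trend is likely to persist. A compact, generic illustration on a synthetic FVC series (not the paper's data; the rescaled-range Hurst estimate here is deliberately crude).

import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(1)
years = np.arange(2000, 2025)
fvc = 0.35 + 0.004 * (years - 2000) + rng.normal(0, 0.02, years.size)  # synthetic series

# Theil-Sen median slope: robust estimate of trend direction and magnitude.
slope, intercept, lo, hi = theilslopes(fvc, years)

def hurst_rs(x):
    """Crude rescaled-range Hurst exponent estimate for a short (~25-point) series."""
    x = np.asarray(x, dtype=float)
    sizes = [8, 12, 16, 20, x.size]
    rs = []
    for n in sizes:
        seg = x[:n]
        dev = np.cumsum(seg - seg.mean())
        rs.append((dev.max() - dev.min()) / seg.std(ddof=1))
    h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return h

print(f"Theil-Sen slope: {slope:.4f} FVC/yr (95% CI {lo:.4f}..{hi:.4f})")
print(f"Hurst exponent:  {hurst_rs(fvc):.2f}  (>0.5 suggests a persistent trend)")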
Additional Links: PMID-40285084
@article {pmid40285084,
year = {2025},
author = {Yang, H and Xiong, M and Yao, Y},
title = {MODIS-Based Spatiotemporal Inversion and Driving-Factor Analysis of Cloud-Free Vegetation Cover in Xinjiang from 2000 to 2024.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {8},
pages = {},
pmid = {40285084},
issn = {1424-8220},
support = {42401534//the National Natural Science Foundation of China/ ; 2024KTSCX052//Research Projects of Department of Education of Guangdong Province/ ; 6023310030K//Shenzhen Polytechnic University Research Fund/ ; 6023240118K//Shenzhen Polytechnic University Research Fund/ ; 6024310045K//Shenzhen Polytechnic University Research Fund/ ; },
mesh = {China ; Ecosystem ; Spatio-Temporal Analysis ; Conservation of Natural Resources ; *Environmental Monitoring/methods ; Remote Sensing Technology ; },
abstract = {The Xinjiang Uygur Autonomous Region, characterized by its complex and fragile ecosystems, has faced ongoing ecological degradation in recent years, challenging national ecological security and sustainable development. To promote the sustainable development of regional ecological and landscape conservation, this study investigates Fractional Vegetation Cover (FVC) dynamics in Xinjiang. Existing studies often lack recent data and exhibit limitations in the selection of driving factors. To mitigate the issues, this study utilized Google Earth Engine (GEE) and cloud-free MOD13A2.061 data to systematically generate comprehensive FVC products for Xinjiang from 2000 to 2024. Additionally, a comprehensive and quantitative analysis of up to 15 potential driving factors was conducted, providing an updated and more robust understanding of vegetation dynamics in the region. This study integrated advanced methodologies, including spatiotemporal statistical analysis, optimized spatial scaling, trend analysis, and Geographical Detector (GeoDetector). Notably, we propose a novel approach combining a Theil-Sen Median trend analysis with a Hurst index to predict future vegetation trends, which to some extent enhances the persuasiveness of the Hurst index alone. The following are the key experimental results: (1) Over the 25-year study period, Xinjiang's vegetation cover exhibited a pronounced north-south gradient, with significantly higher FVC in the northern regions compared to the southern regions. (2) A time series analysis revealed an overall fluctuating upward trend in the FVC, accompanied by increasing volatility and decreasing stability over time. (3) Identification of 15 km as the optimal spatial scale for FVC analysis through spatial statistical analysis using Moran's I and the coefficient of variation. (4) Land use type, vegetation type, and soil type emerged as critical factors, with each contributing over 20% to the explanatory power of FVC variations. (5) To elucidate spatial heterogeneity mechanisms, this study conducted ecological subzone-based analyses of vegetation dynamics and drivers.},
}
MeSH Terms:
China
Ecosystem
Spatio-Temporal Analysis
Conservation of Natural Resources
*Environmental Monitoring/methods
Remote Sensing Technology
RevDate: 2025-04-28
CmpDate: 2025-04-26
Challenges and Solution Directions for the Integration of Smart Information Systems in the Agri-Food Sector.
Sensors (Basel, Switzerland), 25(8):.
Traditional farming has evolved from standalone computing systems to smart farming, driven by advancements in digitalization. This has led to the proliferation of diverse information systems (IS), such as IoT and sensor systems, decision support systems, and farm management information systems (FMISs). These systems often operate in isolation, limiting their overall impact. The integration of IS into connected smart systems is widely addressed as a key driver to tackle these issues. However, it is a complex, multi-faceted issue that is not easily achievable. Previous studies have offered valuable insights, but they often focus on specific cases, such as individual IS and certain integration aspects, lacking a comprehensive overview of various integration dimensions. This systematic review of 74 scientific papers on IS integration addresses this gap by providing an overview of the digital technologies involved, integration levels and types, barriers hindering integration, and available approaches to overcoming these challenges. The findings indicate that integration primarily relies on a point-to-point approach, followed by cloud-based integration. Enterprise service bus, hub-and-spoke, and semantic web approaches are mentioned less frequently but are gaining interest. The study identifies and discusses 27 integration challenges into three main areas: organizational, technological, and data governance-related challenges. Technologies such as blockchain, data spaces, AI, edge computing and microservices, and service-oriented architecture methods are addressed as solutions for data governance and interoperability issues. The insights from the study can help enhance interoperability, leading to data-driven smart farming that increases food production, mitigates climate change, and optimizes resource usage.
Additional Links: PMID-40285052
@article {pmid40285052,
year = {2025},
author = {Ahoa, E and Kassahun, A and Verdouw, C and Tekinerdogan, B},
title = {Challenges and Solution Directions for the Integration of Smart Information Systems in the Agri-Food Sector.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {8},
pages = {},
pmid = {40285052},
issn = {1424-8220},
mesh = {*Agriculture/methods ; *Information Systems ; Humans ; Artificial Intelligence ; },
abstract = {Traditional farming has evolved from standalone computing systems to smart farming, driven by advancements in digitalization. This has led to the proliferation of diverse information systems (IS), such as IoT and sensor systems, decision support systems, and farm management information systems (FMISs). These systems often operate in isolation, limiting their overall impact. The integration of IS into connected smart systems is widely addressed as a key driver to tackle these issues. However, it is a complex, multi-faceted issue that is not easily achievable. Previous studies have offered valuable insights, but they often focus on specific cases, such as individual IS and certain integration aspects, lacking a comprehensive overview of various integration dimensions. This systematic review of 74 scientific papers on IS integration addresses this gap by providing an overview of the digital technologies involved, integration levels and types, barriers hindering integration, and available approaches to overcoming these challenges. The findings indicate that integration primarily relies on a point-to-point approach, followed by cloud-based integration. Enterprise service bus, hub-and-spoke, and semantic web approaches are mentioned less frequently but are gaining interest. The study identifies and discusses 27 integration challenges into three main areas: organizational, technological, and data governance-related challenges. Technologies such as blockchain, data spaces, AI, edge computing and microservices, and service-oriented architecture methods are addressed as solutions for data governance and interoperability issues. The insights from the study can help enhance interoperability, leading to data-driven smart farming that increases food production, mitigates climate change, and optimizes resource usage.},
}
MeSH Terms:
*Agriculture/methods
*Information Systems
Humans
Artificial Intelligence
RevDate: 2025-04-25
Cataract Surgery Registries: History, Utility, Barriers and Future.
Journal of cataract and refractive surgery pii:02158034-990000000-00604 [Epub ahead of print].
Cataract surgery databases have become indispensable tools in ophthalmology, providing extensive data that enhance surgical practices and patient care. This narrative review traces the development of these databases and summarises some of their significant contributions, such as improved surgical outcomes, informed clinical guidelines, and enhanced quality assurance. There are significant barriers to establishing and maintaining cataract surgery databases, including data protection and management challenges, economic constraints, technological hurdles, and ethical considerations. These obstacles complicate efforts to ensure data accuracy, standardisation, and interoperability across diverse healthcare settings. Large language models and artificial intelligence have the potential to streamline data collection and analysis for the future of these databases. Innovations such as blockchain for data security and cloud computing for scalability are examined as solutions to current limitations. Addressing the existing challenges and leveraging technological advancements will be crucial for the continued evolution and utility of these databases, ensuring they remain pivotal in advancing cataract surgery and patient care.
Additional Links: PMID-40277407
@article {pmid40277407,
year = {2025},
author = {Pietris, J and Bahrami, B and LaHood, B and Goggin, M and Chan, WO},
title = {Cataract Surgery Registries: History, Utility, Barriers and Future.},
journal = {Journal of cataract and refractive surgery},
volume = {},
number = {},
pages = {},
doi = {10.1097/j.jcrs.0000000000001680},
pmid = {40277407},
issn = {1873-4502},
abstract = {Cataract surgery databases have become indispensable tools in ophthalmology, providing extensive data that enhance surgical practices and patient care. This narrative review traces the development of these databases, and summarises some of the significant contributions of these databases, such as improved surgical outcomes, informed clinical guidelines, and enhanced quality assurance. There are significant barriers to establishing and maintaining cataract surgery databases, including data protection and management challenges, economic constraints, technological hurdles, and ethical considerations. These obstacles complicate efforts to ensure data accuracy, standardisation, and interoperability across diverse healthcare settings. Large language models, and artificial intelligence has potential in streamlining data collection and analysis for the future of these databases. Innovations like blockchain for data security and cloud computing for scalability are examined as solutions to current limitations. Addressing the existing challenges and leveraging technological advancements will be crucial for the continued evolution and utility of these databases, ensuring they remain pivotal in advancing cataract surgery and patient care.},
}
RevDate: 2025-04-24
Bakta Web - rapid and standardized genome annotation on scalable infrastructures.
Nucleic acids research pii:8118971 [Epub ahead of print].
The Bakta command line application is widely used and one of the most established tools for bacterial genome annotation. It balances comprehensive annotation with computational efficiency via alignment-free sequence identifications. However, the usage of command line software tools and the interpretation of result files in various formats might be challenging and pose technical barriers. Here, we present the recent updates on the Bakta web server, a user-friendly web interface for conducting and visualizing annotations using Bakta without requiring command line expertise or local computing resources. Key features include interactive visualizations through circular genome plots, linear genome browsers, and searchable data tables facilitating the interpretation of complex annotation results. The web server generates standard bioinformatics outputs (GFF3, GenBank, EMBL) and annotates diverse genomic features, including coding sequences, non-coding RNAs, small open reading frames (sORFs), and many more. The development of an auto-scaling cloud-native architecture and improved database integration led to substantially faster processing times and higher throughputs. The system supports FAIR principles via extensive cross-reference links to external databases, including RefSeq, UniRef, and Gene Ontology. Also, novel features have been implemented to foster sharing and collaborative interpretation of results. The web server is freely available at https://bakta.computational.bio.
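Because the server exports standard formats (GFF3, GenBank, EMBL), downstream summaries need no special API. A stdlib-only sketch that tallies feature types in a downloaded Bakta GFF3 file; the file name is a placeholder.

from collections import Counter
import gzip

def count_gff3_features(path):
    """Tally feature types (CDS, tRNA, sORF, ...) in a GFF3 file, gzipped or not."""
    opener = gzip.open if path.endswith(".gz") else open
    counts = Counter()
    with opener(path, "rt") as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue
            cols = line.rstrip("\n").split("\t")
            if len(cols) >= 3:
                counts[cols[2]] += 1     # column 3 of GFF3 is the feature type
    return counts

# Hypothetical file name; any GFF3 exported by the web server should work.
for feature, n in count_gff3_features("my_genome.gff3").most_common():
    print(f"{feature:12s} {n}")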
Additional Links: PMID-40271661
@article {pmid40271661,
year = {2025},
author = {Beyvers, S and Jelonek, L and Goesmann, A and Schwengers, O},
title = {Bakta Web - rapid and standardized genome annotation on scalable infrastructures.},
journal = {Nucleic acids research},
volume = {},
number = {},
pages = {},
doi = {10.1093/nar/gkaf335},
pmid = {40271661},
issn = {1362-4962},
support = {FAIRDS08//Federal Ministry of Education and Research/ ; 031L0288B//Deep-Legion/ ; W-de.NBI-010//German Network for Bioinformatics Infrastructure/ ; 031A533//BiGi Service Center/ ; //Justus Liebig University Giessen/ ; },
abstract = {The Bakta command line application is widely used and one of the most established tools for bacterial genome annotation. It balances comprehensive annotation with computational efficiency via alignment-free sequence identifications. However, the usage of command line software tools and the interpretation of result files in various formats might be challenging and pose technical barriers. Here, we present the recent updates on the Bakta web server, a user-friendly web interface for conducting and visualizing annotations using Bakta without requiring command line expertise or local computing resources. Key features include interactive visualizations through circular genome plots, linear genome browsers, and searchable data tables facilitating the interpretation of complex annotation results. The web server generates standard bioinformatics outputs (GFF3, GenBank, EMBL) and annotates diverse genomic features, including coding sequences, non-coding RNAs, small open reading frames (sORFs), and many more. The development of an auto-scaling cloud-native architecture and improved database integration led to substantially faster processing times and higher throughputs. The system supports FAIR principles via extensive cross-reference links to external databases, including RefSeq, UniRef, and Gene Ontology. Also, novel features have been implemented to foster sharing and collaborative interpretation of results. The web server is freely available at https://bakta.computational.bio.},
}
RevDate: 2025-05-01
CmpDate: 2025-05-01
Improved Pine Wood Nematode Disease Diagnosis System Based on Deep Learning.
Plant disease, 109(4):862-874.
Pine wilt disease caused by the pine wood nematode, Bursaphelenchus xylophilus, has profound implications for global forestry ecology. Conventional PCR methods need long operating time and are complicated to perform. The need for rapid and effective detection methodologies to curtail its dissemination and reduce pine felling has become more apparent. This study initially proposed the use of fluorescence recognition for the detection of pine wood nematode disease, accompanied by the development of a dedicated fluorescence detection system based on deep learning. This system possesses the capability to perform excitation, detection, as well as data analysis and transmission of test samples. In exploring fluorescence recognition methodologies, the efficacy of five conventional machine learning algorithms was juxtaposed with that of You Only Look Once version 5 and You Only Look Once version 10, both in the pre- and post-image processing stages. Moreover, enhancements were introduced to the You Only Look Once version 5 model. The network's aptitude for discerning features across varied scales and resolutions was bolstered through the integration of Res2Net. Meanwhile, a SimAM attention mechanism was incorporated into the backbone network, and the original PANet structure was replaced by the Bi-FPN within the Head network to amplify feature fusion capabilities. The enhanced YOLOv5 model demonstrates significant improvements, particularly in the recognition of large-size images, achieving an accuracy improvement of 39.98%. The research presents a novel detection system for pine nematode detection, capable of detecting samples with DNA concentrations as low as 1 fg/μl within 20 min. This system integrates detection instruments, laptops, cloud computing, and smartphones, holding tremendous potential for field application.
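The detection model above is a modified YOLOv5; the Res2Net, SimAM, and Bi-FPN changes are not reproduced here. For orientation only, a stock YOLOv5 inference sketch via torch.hub on a hypothetical fluorescence image, assuming the publicly documented Ultralytics hub interface.

import torch

# Load a stock YOLOv5 model from the Ultralytics hub (the paper's architectural
# modifications are not included in this sketch).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Hypothetical fluorescence image captured by the detection instrument.
results = model("fluorescence_well_plate.jpg")
results.print()                         # per-class counts and confidences
detections = results.pandas().xyxy[0]   # bounding boxes as a DataFrame
print(detections[["name", "confidence"]].head())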
Additional Links: PMID-40267359
@article {pmid40267359,
year = {2025},
author = {Xiao, J and Wu, J and Liu, D and Li, X and Liu, J and Su, X and Wang, Y},
title = {Improved Pine Wood Nematode Disease Diagnosis System Based on Deep Learning.},
journal = {Plant disease},
volume = {109},
number = {4},
pages = {862-874},
doi = {10.1094/PDIS-06-24-1221-RE},
pmid = {40267359},
issn = {0191-2917},
mesh = {*Deep Learning ; Animals ; *Pinus/parasitology ; *Plant Diseases/parasitology ; *Nematoda/isolation & purification/physiology ; *Tylenchida/isolation & purification ; },
abstract = {Pine wilt disease caused by the pine wood nematode, Bursaphelenchus xylophilus, has profound implications for global forestry ecology. Conventional PCR methods need long operating time and are complicated to perform. The need for rapid and effective detection methodologies to curtail its dissemination and reduce pine felling has become more apparent. This study initially proposed the use of fluorescence recognition for the detection of pine wood nematode disease, accompanied by the development of a dedicated fluorescence detection system based on deep learning. This system possesses the capability to perform excitation, detection, as well as data analysis and transmission of test samples. In exploring fluorescence recognition methodologies, the efficacy of five conventional machine learning algorithms was juxtaposed with that of You Only Look Once version 5 and You Only Look Once version 10, both in the pre- and post-image processing stages. Moreover, enhancements were introduced to the You Only Look Once version 5 model. The network's aptitude for discerning features across varied scales and resolutions was bolstered through the integration of Res2Net. Meanwhile, a SimAM attention mechanism was incorporated into the backbone network, and the original PANet structure was replaced by the Bi-FPN within the Head network to amplify feature fusion capabilities. The enhanced YOLOv5 model demonstrates significant improvements, particularly in the recognition of large-size images, achieving an accuracy improvement of 39.98%. The research presents a novel detection system for pine nematode detection, capable of detecting samples with DNA concentrations as low as 1 fg/μl within 20 min. This system integrates detection instruments, laptops, cloud computing, and smartphones, holding tremendous potential for field application.},
}
MeSH Terms:
*Deep Learning
Animals
*Pinus/parasitology
*Plant Diseases/parasitology
*Nematoda/isolation & purification/physiology
*Tylenchida/isolation & purification
RevDate: 2025-04-23
Wafer-Scale Nanoprinting of 3D Interconnects beyond Cu.
ACS nano [Epub ahead of print].
Cloud operations and services, as well as many other modern computing tasks, require hardware that is run by very densely packed integrated circuits (ICs) and heterogenous ICs. The performance of these ICs is determined by the stability and properties of the interconnects between the semiconductor devices and ICs. Although some ICs with 3D interconnects are commercially available, there has been limited progress on 3D printing utilizing emerging nanomaterials. Moreover, laying out reliable 3D metal interconnects in ICs with the appropriate electrical and physical properties remains challenging. Here, we propose high-throughput 3D interconnection with nanoscale precision by leveraging lines of forces. We successfully nanoprinted multiscale and multilevel Au, Ir, and Ru 3D interconnects on the wafer scale in non-vacuum conditions using a pulsed electric field. The ON phase of the pulsed field initiates in situ printing of nanoparticle (NP) deposition into interconnects, whereas the OFF phase allows the gas flow to evenly distribute the NPs over an entire wafer. Characterization of the 3D interconnects confirms their excellent uniformity, electrical properties, and free-form geometries, far exceeding those of any 3D-printed interconnects. Importantly, their measured resistances approach the theoretical values calculated here. The results demonstrate that 3D nanoprinting can be used to fabricate thinner and faster interconnects, which can enhance the performance of dense ICs; therefore, 3D nanoprinting can complement lithography and resolve the challenges encountered in the fabrication of critical device features.
Additional Links: PMID-40265605
@article {pmid40265605,
year = {2025},
author = {Yin, Y and Liu, B and Zhang, Y and Han, Y and Liu, Q and Feng, J},
title = {Wafer-Scale Nanoprinting of 3D Interconnects beyond Cu.},
journal = {ACS nano},
volume = {},
number = {},
pages = {},
doi = {10.1021/acsnano.5c00720},
pmid = {40265605},
issn = {1936-086X},
abstract = {Cloud operations and services, as well as many other modern computing tasks, require hardware that is run by very densely packed integrated circuits (ICs) and heterogenous ICs. The performance of these ICs is determined by the stability and properties of the interconnects between the semiconductor devices and ICs. Although some ICs with 3D interconnects are commercially available, there has been limited progress on 3D printing utilizing emerging nanomaterials. Moreover, laying out reliable 3D metal interconnects in ICs with the appropriate electrical and physical properties remains challenging. Here, we propose high-throughput 3D interconnection with nanoscale precision by leveraging lines of forces. We successfully nanoprinted multiscale and multilevel Au, Ir, and Ru 3D interconnects on the wafer scale in non-vacuum conditions using a pulsed electric field. The ON phase of the pulsed field initiates in situ printing of nanoparticle (NP) deposition into interconnects, whereas the OFF phase allows the gas flow to evenly distribute the NPs over an entire wafer. Characterization of the 3D interconnects confirms their excellent uniformity, electrical properties, and free-form geometries, far exceeding those of any 3D-printed interconnects. Importantly, their measured resistances approach the theoretical values calculated here. The results demonstrate that 3D nanoprinting can be used to fabricate thinner and faster interconnects, which can enhance the performance of dense ICs; therefore, 3D nanoprinting can complement lithography and resolve the challenges encountered in the fabrication of critical device features.},
}
RevDate: 2025-04-23
Transforming Medical Imaging: The Role of Artificial Intelligence Integration in PACS for Enhanced Diagnostic Accuracy and Workflow Efficiency.
Current medical imaging pii:CMIR-EPUB-147831 [Epub ahead of print].
INTRODUCTION: To examine the integration of artificial intelligence (AI) into Picture Archiving and Communication Systems (PACS) and assess its impact on medical imaging, diagnostic workflows, and patient outcomes. This review explores the technological evolution, key advancements, and challenges associated with AI-enhanced PACS in healthcare settings.
METHODS: A comprehensive literature search was conducted in PubMed, Scopus, and Web of Science databases, covering articles from January 2000 to October 2024. Search terms included "artificial intelligence," "machine learning," "deep learning," and "PACS," combined with keywords related to diagnostic accuracy and workflow optimization. Articles were selected based on predefined inclusion and exclusion criteria, focusing on peer-reviewed studies that discussed AI applications in PACS, innovations in medical imaging, and workflow improvements. A total of 183 studies met the inclusion criteria, comprising original research, systematic reviews, and meta-analyses.
RESULTS: AI integration in PACS has significantly enhanced diagnostic accuracy, achieving improvements of up to 93.2% in some imaging modalities, such as early tumor detection and anomaly identification. Workflow efficiency has been transformed, with diagnostic times reduced by up to 90% for critical conditions like intracranial hemorrhages. Convolutional neural networks (CNNs) have demonstrated exceptional performance in image segmentation, achieving up to 94% accuracy, and in motion artifact correction, further enhancing diagnostic precision. Natural language processing (NLP) tools have expedited radiology workflows, reducing reporting times by 30-50% and improving consistency in report generation. Cloud-based solutions have also improved accessibility, enabling real-time collaboration and remote diagnostics. However, challenges in data privacy, regulatory compliance, and interoperability persist, emphasizing the need for standardized frameworks and robust security protocols.
CONCLUSIONS: The integration of AI into PACS represents a pivotal transformation in medical imaging, offering improved diagnostic workflows and potential for personalized patient care. Addressing existing challenges and enhancing interoperability will be essential for maximizing the benefits of AI-powered PACS in healthcare.
Additional Links: PMID-40265427
@article {pmid40265427,
year = {2025},
author = {Pérez-Sanpablo, AI and Quinzaños-Fresnedo, J and Gutiérrez-Martínez, J and Lozano-Rodríguez, IG and Roldan-Valadez, E},
title = {Transforming Medical Imaging: The Role of Artificial Intelligence Integration in PACS for Enhanced Diagnostic Accuracy and Workflow Efficiency.},
journal = {Current medical imaging},
volume = {},
number = {},
pages = {},
doi = {10.2174/0115734056370620250403030638},
pmid = {40265427},
issn = {1573-4056},
abstract = {INTRODUCTION: To examine the integration of artificial intelligence (AI) into Picture Archiving and Communication Systems (PACS) and assess its impact on medical imaging, diagnostic workflows, and patient outcomes. This review explores the technological evolution, key advancements, and challenges associated with AI-enhanced PACS in healthcare settings.
METHODS: A comprehensive literature search was conducted in PubMed, Scopus, and Web of Science databases, covering articles from January 2000 to October 2024. Search terms included "artificial intelligence," "machine learning," "deep learning," and "PACS," combined with keywords related to diagnostic accuracy and workflow optimization. Articles were selected based on predefined inclusion and exclusion criteria, focusing on peerreviewed studies that discussed AI applications in PACS, innovations in medical imaging, and workflow improvements. A total of 183 studies met the inclusion criteria, comprising original research, systematic reviews, and meta-analyses.
RESULTS: AI integration in PACS has significantly enhanced diagnostic accuracy, achieving improvements of up to 93.2% in some imaging modalities, such as early tumor detection and anomaly identification. Workflow efficiency has been transformed, with diagnostic times reduced by up to 90% for critical conditions like intracranial hemorrhages. Convolutional neural networks (CNNs) have demonstrated exceptional performance in image segmentation, achieving up to 94% accuracy, and in motion artifact correction, further enhancing diagnostic precision. Natural language processing (NLP) tools have expedited radiology workflows, reducing reporting times by 30-50% and improving consistency in report generation. Cloudbased solutions have also improved accessibility, enabling real-time collaboration and remote diagnostics. However, challenges in data privacy, regulatory compliance, and interoperability persist, emphasizing the need for standardized frameworks and robust security protocols. Conclusions The integration of AI into PACS represents a pivotal transformation in medical imaging, offering improved diagnostic workflows and potential for personalized patient care. Addressing existing challenges and enhancing interoperability will be essential for maximizing the benefits of AIpowered PACS in healthcare.},
}
RevDate: 2025-04-22
Smart IoT-driven biosensors for EEG-based driving fatigue detection: A CNN-XGBoost model enhancing healthcare quality.
BioImpacts : BI, 15:30586.
INTRODUCTION: Drowsy driving is a significant contributor to accidents, accounting for 35 to 45% of all crashes. Implementation of an internet of things (IoT) system capable of alerting fatigued drivers has the potential to substantially reduce road fatalities and associated issues. Often referred to as the internet of medical things (IoMT), this system leverages a combination of biosensors, actuators, detectors, cloud-based and edge computing, machine intelligence, and communication networks to deliver reliable performance and enhance quality of life in smart societies.
METHODS: Electroencephalogram (EEG) signals offer potential insights into fatigue detection. However, accurately identifying fatigue from brain signals is challenging due to inter-individual EEG variability and the difficulty of collecting sufficient data during periods of exhaustion. To address these challenges, a novel evolutionary optimization method combining convolutional neural networks (CNNs) and XGBoost, termed CNN-XGBoost Evolutionary Learning, was proposed to improve fatigue identification accuracy. The research explored various subbands of decomposed EEG data and introduced an innovative approach of transforming EEG recordings into RGB scalograms. These scalogram images were processed using a 2D Convolutional Neural Network (2DCNN) to extract essential features, which were subsequently fed into a dense layer for training.
RESULTS: The resulting model achieved a noteworthy accuracy of 99.80% on a substantial driver fatigue dataset, surpassing existing methods.
CONCLUSION: By integrating this approach into an IoT framework, researchers effectively addressed previous challenges and established an artificial intelligence of things (AIoT) infrastructure for critical driving conditions. This IoT-based system optimizes data processing, reduces computational complexity, and enhances overall system performance, enabling accurate and timely detection of fatigue in extreme driving environments.
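The methods above hinge on converting EEG segments into scalogram images before the 2D CNN. A minimal sketch of that transformation with PyWavelets and matplotlib on a synthetic signal (the sampling rate, scales, and wavelet choice are assumptions, not the authors' preprocessing).

import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 128                                    # hypothetical EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel

# Continuous wavelet transform -> time-frequency coefficients (the "scalogram").
scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)

# Render as an RGB image, the representation fed to the 2-D CNN in the paper.
plt.imshow(np.abs(coefs), aspect="auto", cmap="jet",
           extent=[t[0], t[-1], freqs[-1], freqs[0]])
plt.xlabel("time (s)"); plt.ylabel("frequency (Hz)")
plt.savefig("scalogram.png", dpi=150, bbox_inches="tight")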
Additional Links: PMID-40256223
@article {pmid40256223,
year = {2025},
author = {Rezaee, K and Nazerian, A and Ghayoumi Zadeh, H and Attar, H and Khosravi, M and Kanan, M},
title = {Smart IoT-driven biosensors for EEG-based driving fatigue detection: A CNN-XGBoost model enhancing healthcare quality.},
journal = {BioImpacts : BI},
volume = {15},
number = {},
pages = {30586},
pmid = {40256223},
issn = {2228-5652},
abstract = {INTRODUCTION: Drowsy driving is a significant contributor to accidents, accounting for 35 to 45% of all crashes. Implementation of an internet of things (IoT) system capable of alerting fatigued drivers has the potential to substantially reduce road fatalities and associated issues. Often referred to as the internet of medical things (IoMT), this system leverages a combination of biosensors, actuators, detectors, cloud-based and edge computing, machine intelligence, and communication networks to deliver reliable performance and enhance quality of life in smart societies.
METHODS: Electroencephalogram (EEG) signals offer potential insights into fatigue detection. However, accurately identifying fatigue from brain signals is challenging due to inter-individual EEG variability and the difficulty of collecting sufficient data during periods of exhaustion. To address these challenges, a novel evolutionary optimization method combining convolutional neural networks (CNNs) and XGBoost, termed CNN-XGBoost Evolutionary Learning, was proposed to improve fatigue identification accuracy. The research explored various subbands of decomposed EEG data and introduced an innovative approach of transforming EEG recordings into RGB scalograms. These scalogram images were processed using a 2D Convolutional Neural Network (2DCNN) to extract essential features, which were subsequently fed into a dense layer for training.
RESULTS: The resulting model achieved a noteworthy accuracy of 99.80% on a substantial driver fatigue dataset, surpassing existing methods.
CONCLUSION: By integrating this approach into an IoT framework, researchers effectively addressed previous challenges and established an artificial intelligence of things (AIoT) infrastructure for critical driving conditions. This IoT-based system optimizes data processing, reduces computational complexity, and enhances overall system performance, enabling accurate and timely detection of fatigue in extreme driving environments.},
}
RevDate: 2025-04-22
CmpDate: 2025-04-19
Heuristically enhanced multi-head attention based recurrent neural network for denial of wallet attacks detection on serverless computing environment.
Scientific reports, 15(1):13538.
Denial of Wallet (DoW) attacks are a cyber threat designed to exhaust an organization's financial resources by generating excessive charges on its cloud computing (CC) and serverless computing platforms. These threats are especially relevant to serverless deployments because of features such as auto-scaling, pay-as-you-go billing, restricted control, and cost growth. Serverless computing, frequently referred to as Function-as-a-Service (FaaS), is a CC model that lets developers build and run applications without managing conventional server infrastructure. Detecting DoW threats involves monitoring and analyzing the system-level resource consumption of specific bare-metal machines. Efficient and precise detection of internal DoW threats remains a crucial challenge. Timely recognition is important for preventing potential damage, as DoW attacks exploit the financial model of serverless environments, impacting the cost structure and operational integrity of services. In this study, a Multi-Head Attention-based Recurrent Neural Network for Denial of Wallet Attacks Detection (MHARNN-DoWAD) technique is developed. The MHARNN-DoWAD method enables the detection of DoW attacks in serverless computing environments. First, the MHARNN-DoWAD model performs data preprocessing using min-max normalization to convert the input data into a consistent format. Next, the wolf pack predation (WPP) method is employed for feature selection. For the detection and classification of DoW attacks, the multi-head attention-based bi-directional gated recurrent unit (MHA-BiGRU) model is utilized. Finally, an improved secretary bird optimizer algorithm (ISBOA)-based hyperparameter selection process is applied to optimize the detection results of the MHA-BiGRU model. A comprehensive set of simulations was conducted to demonstrate the promising results of the MHARNN-DoWAD method. The experimental validation of the MHARNN-DoWAD technique showed a superior accuracy of 98.30% over existing models.
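Two of the pipeline's stages, min-max normalization and a bidirectional GRU with multi-head attention, are standard building blocks. The Keras sketch below mirrors only that general shape with invented dimensions and synthetic data; the WPP feature selection and ISBOA hyperparameter tuning are omitted.

import numpy as np
import tensorflow as tf

# Synthetic stand-in for request/billing telemetry: (samples, time steps, features).
X = np.random.rand(1000, 20, 8).astype("float32")
y = np.random.randint(0, 2, size=1000).astype("float32")

# Min-max normalization per feature.
X = (X - X.min(axis=(0, 1))) / (X.max(axis=(0, 1)) - X.min(axis=(0, 1)) + 1e-9)

inputs = tf.keras.Input(shape=(20, 8))
h = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32, return_sequences=True))(inputs)
h = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(h, h)  # self-attention
h = tf.keras.layers.GlobalAveragePooling1D()(h)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(h)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))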
Additional Links: PMID-40253394
@article {pmid40253394,
year = {2025},
author = {Alzakari, SA and Alamgeer, M and Alashjaee, AM and Abdullah, M and Abdul Sattar, KN and Alshuhail, A and Alzahrani, AA and Alkharashi, A},
title = {Heuristically enhanced multi-head attention based recurrent neural network for denial of wallet attacks detection on serverless computing environment.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {13538},
pmid = {40253394},
issn = {2045-2322},
mesh = {*Neural Networks, Computer ; *Computer Security ; *Cloud Computing ; Algorithms ; Heuristics ; Recurrent Neural Networks ; },
abstract = {Denial of Wallet (DoW) attacks are a cyber threat designed to utilize and deplete an organization's financial resources by generating excessive prices or charges in their cloud computing (CC) and serverless computing platforms. These threats are primarily appropriate in serverless manners because of features such as auto-scaling, pay-as-you-go, restricted control, and cost growth. Serverless computing, frequently recognized as Function-as-a-Service (FaaS), is a CC method that permits designers to construct and run uses without the requirement to accomplish typical server structure. Detecting DoW threats involves monitoring and analyzing the system-level resource consumption of specific bare-metal mechanisms. Efficient and precise detection of internal DoW threats remains a crucial challenge. Timely recognition is significant in preventing potential damage, as DoW attacks exploit the financial model of serverless environments, impacting the cost structure and operational integrity of services. In this study, a Multi-Head Attention-based Recurrent Neural Network for Denial of Wallet Attacks Detection (MHARNN-DoWAD) technique is developed. The MHARNN-DoWAD method enables the detection of DoW attacks on serverless computing environments. At first, the presented MHARNN-DoWAD model performs data preprocessing by using min-max normalization to convert input data into constant format. Next, the wolf pack predation (WPP) method is employed for feature selection. The detection and classification of DoW attacks, the multi-head attention-based bi-directional gated recurrent unit (MHA-BiGRU) model is utilized. Eventually, the improved secretary bird optimizer algorithm (ISBOA)-based hyperparameter choice process is accomplished to optimize the detection results of the MHA-BiGRU model. A comprehensive set of simulations was conducted to demonstrate the promising results of the MHARNN-DoWAD method. The experimental validation of the MHARNN-DoWAD technique portrayed a superior accuracy value of 98.30% over existing models.},
}
MeSH Terms:
*Neural Networks, Computer
*Computer Security
*Cloud Computing
Algorithms
Heuristics
Recurrent Neural Networks
RevDate: 2025-04-21
Exploiting Trusted Execution Environments and Distributed Computation for Genomic Association Tests.
IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].
Breakthroughs in sequencing technologies led to an exponential growth of genomic data, providing novel biological insights and therapeutic applications. However, analyzing large amounts of sensitive data raises key data privacy concerns, specifically when the information is outsourced to untrusted third-party infrastructures for data storage and processing (e.g., cloud computing). We introduce Gyosa, a secure and privacy-preserving distributed genomic analysis solution. By leveraging trusted execution environments (TEEs), Gyosa allows users to confidentially delegate their GWAS analysis to untrusted infrastructures. Gyosa implements a computation partitioning scheme that reduces the computation done inside the TEEs while safeguarding the users' genomic data privacy. By integrating this security scheme in Glow, Gyosa provides a secure and distributed environment that facilitates diverse GWAS studies. The experimental evaluation validates the applicability and scalability of Gyosa, reinforcing its ability to provide enhanced security guarantees.
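The computation-partitioning idea can be illustrated with a single-variant allelic association test: only an aggregation step that touches raw genotypes would run inside the trusted environment, while the statistical test itself needs nothing more than the resulting 2x2 allele-count table. This is a simplified sketch under that assumption, not Gyosa's actual partitioning scheme or its Glow integration.

import numpy as np
from scipy.stats import chi2_contingency

def aggregate_counts(genotypes, phenotypes):
    # Intended to run inside the trusted execution environment: reduces raw
    # per-individual genotypes (0/1/2 minor-allele counts) to a 2x2
    # allele-count table, which is all that leaves the enclave.
    genotypes = np.asarray(genotypes)
    phenotypes = np.asarray(phenotypes)
    cases = genotypes[phenotypes == 1]
    controls = genotypes[phenotypes == 0]
    return np.array([
        [cases.sum(), 2 * len(cases) - cases.sum()],
        [controls.sum(), 2 * len(controls) - controls.sum()],
    ])

def allelic_test(table):
    # Runs on untrusted hardware: only aggregate counts are required.
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p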
Additional Links: PMID-40249680
@article {pmid40249680,
year = {2025},
author = {Brito, CV and Ferreira, PG and Paulo, JT},
title = {Exploiting Trusted Execution Environments and Distributed Computation for Genomic Association Tests.},
journal = {IEEE journal of biomedical and health informatics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/JBHI.2025.3562364},
pmid = {40249680},
issn = {2168-2208},
abstract = {Breakthroughs in sequencing technologies led to an exponential growth of genomic data, providing novel biological insights and therapeutic applications. However, analyzing large amounts of sensitive data raises key data privacy concerns, specifically when the information is outsourced to untrusted third-party infrastructures for data storage and processing (e.g., cloud computing). We introduce Gyosa, a secure and privacy-preserving distributed genomic analysis solution. By leveraging trusted execution environments (TEEs), Gyosa allows users to confidentially delegate their GWAS analysis to untrusted infrastructures. Gyosa implements a computation partitioning scheme that reduces the computation done inside the TEEs while safeguarding the users' genomic data privacy. By integrating this security scheme in Glow, Gyosa provides a secure and distributed environment that facilitates diverse GWAS studies. The experimental evaluation validates the applicability and scalability of Gyosa, reinforcing its ability to provide enhanced security guarantees.},
}
RevDate: 2025-04-20
Radiology AI and sustainability paradox: environmental, economic, and social dimensions.
Insights into imaging, 16(1):88.
Artificial intelligence (AI) is transforming radiology by improving diagnostic accuracy, streamlining workflows, and enhancing operational efficiency. However, these advancements come with significant sustainability challenges across environmental, economic, and social dimensions. AI systems, particularly deep learning models, require substantial computational resources, leading to high energy consumption, increased carbon emissions, and hardware waste. Data storage and cloud computing further exacerbate the environmental impact. Economically, the high costs of implementing AI tools often outweigh the demonstrated clinical benefits, raising concerns about their long-term viability and equity in healthcare systems. Socially, AI risks perpetuating healthcare disparities through biases in algorithms and unequal access to technology. On the other hand, AI has the potential to improve sustainability in healthcare by reducing low-value imaging, optimizing resource allocation, and improving energy efficiency in radiology departments. This review addresses the sustainability paradox of AI from a radiological perspective, exploring its environmental footprint, economic feasibility, and social implications. Strategies to mitigate these challenges are also discussed, alongside a call for action and directions for future research. CRITICAL RELEVANCE STATEMENT: By adopting an informed and holistic approach, the radiology community can ensure that AI's benefits are realized responsibly, balancing innovation with sustainability. This effort is essential to align technological advancements with environmental preservation, economic sustainability, and social equity. KEY POINTS: AI has an ambivalent potential, capable of both exacerbating global sustainability issues and offering increased productivity and accessibility. Addressing AI sustainability requires a broad perspective accounting for environmental impact, economic feasibility, and social implications. By embracing the duality of AI, the radiology community can adopt informed strategies at individual, institutional, and collective levels to maximize its benefits while minimizing negative impacts.
Additional Links: PMID-40244301
@article {pmid40244301,
year = {2025},
author = {Kocak, B and Ponsiglione, A and Romeo, V and Ugga, L and Huisman, M and Cuocolo, R},
title = {Radiology AI and sustainability paradox: environmental, economic, and social dimensions.},
journal = {Insights into imaging},
volume = {16},
number = {1},
pages = {88},
pmid = {40244301},
issn = {1869-4101},
abstract = {Artificial intelligence (AI) is transforming radiology by improving diagnostic accuracy, streamlining workflows, and enhancing operational efficiency. However, these advancements come with significant sustainability challenges across environmental, economic, and social dimensions. AI systems, particularly deep learning models, require substantial computational resources, leading to high energy consumption, increased carbon emissions, and hardware waste. Data storage and cloud computing further exacerbate the environmental impact. Economically, the high costs of implementing AI tools often outweigh the demonstrated clinical benefits, raising concerns about their long-term viability and equity in healthcare systems. Socially, AI risks perpetuating healthcare disparities through biases in algorithms and unequal access to technology. On the other hand, AI has the potential to improve sustainability in healthcare by reducing low-value imaging, optimizing resource allocation, and improving energy efficiency in radiology departments. This review addresses the sustainability paradox of AI from a radiological perspective, exploring its environmental footprint, economic feasibility, and social implications. Strategies to mitigate these challenges are also discussed, alongside a call for action and directions for future research. CRITICAL RELEVANCE STATEMENT: By adopting an informed and holistic approach, the radiology community can ensure that AI's benefits are realized responsibly, balancing innovation with sustainability. This effort is essential to align technological advancements with environmental preservation, economic sustainability, and social equity. KEY POINTS: AI has an ambivalent potential, capable of both exacerbating global sustainability issues and offering increased productivity and accessibility. Addressing AI sustainability requires a broad perspective accounting for environmental impact, economic feasibility, and social implications. By embracing the duality of AI, the radiology community can adopt informed strategies at individual, institutional, and collective levels to maximize its benefits while minimizing negative impacts.},
}
RevDate: 2025-04-16
CmpDate: 2025-04-16
Seasonal patterns of air pollution in Delhi: interplay between meteorological conditions and emission sources.
Environmental geochemistry and health, 47(5):175.
Air pollution (AP) poses a significant public health risk, particularly in developing countries, where it contributes to a growing prevalence of health issues. This study investigates seasonal variations in key air pollutants, including particulate matter, nitrogen dioxide (NO2), sulfur dioxide (SO2), carbon monoxide (CO), and ozone (O3), in New Delhi during 2024. Utilizing Sentinel-5 satellite data processed through the Google Earth Engine (GEE), a cloud-based geospatial analysis platform, the study evaluates pollutant dynamics during pre-monsoon and post-monsoon seasons. The methodology involved programming in JavaScript to extract pollution parameters, applying cloud filters to eliminate contaminated data, and generating average pollution maps at monthly, seasonal, and annual intervals. The results revealed distinct seasonal pollution patterns. Pre-monsoon root mean square error (RMSE) values for CO, NO2, SO2, and O3 were 0.13, 2.58, 4.62, and 2.36, respectively, while post-monsoon values were 0.17, 2.41, 4.31, and 4.60. Winter months exhibited the highest pollution levels due to increased emissions from biomass burning, vehicular activity, and industrial operations, coupled with atmospheric inversions. Conversely, monsoon months saw a substantial reduction in pollutant levels due to wet deposition and improved dispersion driven by stronger winds. Additionally, post-monsoon crop residue burning emerged as a major episodic pollution source. This study underscores the utility of Sentinel-5 products in monitoring urban air pollution and provides valuable insights for policymakers to develop targeted mitigation strategies, particularly for urban megacities like Delhi, where seasonal and source-specific interventions are crucial for reducing air pollution and its associated health risks.
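The abstract describes extracting Sentinel-5P pollutant fields in GEE with the JavaScript API; the sketch below performs the analogous seasonal aggregation for NO2 with the GEE Python client, which exposes the same catalogue. The dataset and band names reflect the public GEE catalogue, while the Delhi bounding box and the date windows are illustrative assumptions rather than the study's exact configuration.

import ee

ee.Initialize()  # assumes prior Earth Engine authentication

# Approximate bounding box for the Delhi region (assumption for illustration).
delhi = ee.Geometry.Rectangle([76.8, 28.3, 77.5, 28.9])

def seasonal_mean_no2(start, end):
    # Mean tropospheric NO2 column over the region for a given date window.
    col = (ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
           .select('tropospheric_NO2_column_number_density')
           .filterDate(start, end)
           .filterBounds(delhi))
    return col.mean().clip(delhi)

pre_monsoon = seasonal_mean_no2('2024-03-01', '2024-06-01')
post_monsoon = seasonal_mean_no2('2024-10-01', '2024-12-01')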
Additional Links: PMID-40237923
@article {pmid40237923,
year = {2025},
author = {Ansari, N and Kumari, P and Kumar, R and Kumar, P and Shamshad, A and Hossain, S and Sharma, A and Singh, Y and Kumari, M and Mishra, VN and Rukhsana, and Javed, A},
title = {Seasonal patterns of air pollution in Delhi: interplay between meteorological conditions and emission sources.},
journal = {Environmental geochemistry and health},
volume = {47},
number = {5},
pages = {175},
pmid = {40237923},
issn = {1573-2983},
mesh = {*Seasons ; India ; *Air Pollutants/analysis ; *Air Pollution/analysis/statistics & numerical data ; *Environmental Monitoring ; Carbon Monoxide/analysis ; Sulfur Dioxide/analysis ; Nitrogen Dioxide/analysis ; Particulate Matter/analysis ; Ozone/analysis ; Cities ; Meteorological Concepts ; },
abstract = {Air pollution (AP) poses a significant public health risk, particularly in developing countries, where it contributes to a growing prevalence of health issues. This study investigates seasonal variations in key air pollutants, including particulate matter, nitrogen dioxide (NO2), sulfur dioxide (SO2), carbon monoxide (CO), and ozone (O3), in New Delhi during 2024. Utilizing Sentinel-5 satellite data processed through the Google earth engine (GEE), a cloud-based geospatial analysis platform, the study evaluates pollutant dynamics during pre-monsoon and post-monsoon seasons. The methodology involved programming in JavaScript to extract pollution parameters, applying cloud filters to eliminate contaminated data, and generating average pollution maps at monthly, seasonal, and annual intervals. The results revealed distinct seasonal pollution patterns. Pre-monsoon root mean square error (RMSE) values for CO, NO2, SO2, and O3 were 0.13, 2.58, 4.62, and 2.36, respectively, while post-monsoon values were 0.17, 2.41, 4.31, and 4.60. Winter months exhibited the highest pollution levels due to increased emissions from biomass burning, vehicular activity, and industrial operations, coupled with atmospheric inversions. Conversely, monsoon months saw a substantial reduction in pollutant levels due to wet deposition and improved dispersion driven by stronger winds. Additionally, post-monsoon crop residue burning emerged as a major episodic pollution source. This study underscores the utility of Sentinel-5 products in monitoring urban air pollution and provides valuable insights for policymakers to develop targeted mitigation strategies, particularly for urban megacities like Delhi, where seasonal and source-specific interventions are crucial for reducing air pollution and its associated health risks.},
}
MeSH Terms:
*Seasons
India
*Air Pollutants/analysis
*Air Pollution/analysis/statistics & numerical data
*Environmental Monitoring
Carbon Monoxide/analysis
Sulfur Dioxide/analysis
Nitrogen Dioxide/analysis
Particulate Matter/analysis
Ozone/analysis
Cities
Meteorological Concepts
RevDate: 2025-04-16
Design of a Trustworthy Cloud-Native National Digital Health Information Infrastructure for Secure Data Management and Use.
Oxford open digital health, 2:oqae043.
Since 2022, the Malawi Ministry of Health (MoH) has designated the development of a National Digital Health Information System (NDHIS) as one of the most important pillars of its national health strategy. This system is built upon a distributed computing infrastructure employing the following state-of-the-art technologies: (i) digital healthcare devices to capture medical data; (ii) a Kubernetes-based Cloud-Native Computing architecture to simplify system management and service deployment; (iii) Zero-Trust Secure Communication to protect the confidentiality, integrity and access rights of medical data transported over the Internet; (iv) Trusted Computing to allow medical data to be processed by certified software without compromising data privacy and sovereignty. Trustworthiness of this system, including reliability, security, privacy and business integrity, was ensured by a peer-to-peer network of trusted medical information guards deployed as the gatekeepers of its computing facility. This NDHIS can help Malawi attain universal health coverage by 2030 through its scalability and operational efficiency. It will improve medical data quality and security by adopting a paperless approach. It will also enable the MoH to offer data rental services to healthcare researchers and AI model developers around the world. This project is spearheaded by the Digital Health Division (DHD) under the MoH. The trustworthy computing infrastructure was designed by a taskforce assembled by the DHD in collaboration with Luke International in Norway and a consortium of hardware and software solution providers in Taiwan. A prototype that can connect community clinics with a district hospital has been tested at Taiwan Pingtung Christian Hospital.
Additional Links: PMID-40230982
@article {pmid40230982,
year = {2024},
author = {Zao, JK and Wu, JT and Kanyimbo, K and Delizy, F and Gan, TT and Kuo, HI and Hsia, CH and Lo, CH and Yang, SH and Richard, CJA and Rajab, B and Monawe, M and Kamanga, B and Mtambalika, N and Yu, KJ and Chou, CF and Neoh, CA and Gallagher, J and O'Donoghue, J and Mtegha, R and Lee, HY and Mbewe, A},
title = {Design of a Trustworthy Cloud-Native National Digital Health Information Infrastructure for Secure Data Management and Use.},
journal = {Oxford open digital health},
volume = {2},
number = {},
pages = {oqae043},
pmid = {40230982},
issn = {2754-4591},
abstract = {Since 2022, Malawi Ministry of Health (MoH) designated the development of a National Digital Health Information System (NDHIS) as one of the most important pillars of its national health strategy. This system is built upon a distributed computing infrastructure employing the following state-of-art technologies: (i) digital healthcare devices to capture medical data; (ii) Kubernetes-based Cloud-Native Computing architecture to simplify system management and service deployment; (iii) Zero-Trust Secure Communication to protect confidentiality, integrity and access rights of medical data transported over the Internet; (iv) Trusted Computing to allow medical data to be processed by certified software without compromising data privacy and sovereignty. Trustworthiness, including reliability, security, privacy and business integrity, of this system was ensured by a peer-to-peer network of trusted medical information guards deployed as the gatekeepers of the computing facility on this system. This NDHIS can facilitate Malawi to attain universal health coverage by 2030 through its scalability and operation efficiency. It shall improve medical data quality and security by adopting a paperless approach. It will also enable MoH to offer data rental services to healthcare researchers and AI model developers around the world. This project is spearheaded by the Digital Health Division (DHD) under MoH. The trustworthy computing infrastructure was designed by a taskforce assembled by the DHD in collaboration with Luke International in Norway, and a consortium of hardware and software solution providers in Taiwan. A prototype that can connect community clinics with a district hospital has been tested at Taiwan Pingtung Christian Hospital.},
}
RevDate: 2025-04-12
Artificial intelligence for the detection of interictal epileptiform discharges in EEG signals.
Revue neurologique pii:S0035-3787(25)00492-8 [Epub ahead of print].
INTRODUCTION: Over the past decades, the integration of modern technologies - such as electronic health records, cloud computing, and artificial intelligence (AI) - has revolutionized the collection, storage, and analysis of medical data in neurology. In epilepsy, Interictal Epileptiform Discharges (IEDs) are the most established biomarker, indicating an increased likelihood of seizures. Their detection traditionally relies on visual EEG assessment, a time-consuming and subjective process contributing to a high misdiagnosis rate. These limitations have spurred the development of automated AI-driven approaches aimed at improving accuracy and efficiency in IED detection.
METHODS: Research on automated IED detection began 45 years ago, spanning from morphological methods to deep learning techniques. In this review, we examine various IED detection approaches, evaluating their performance and limitations.
RESULTS: Traditional machine learning and deep learning methods have produced the most promising results to date, and their application in IED detection continues to grow. Today, AI-driven tools are increasingly integrated into clinical workflows, assisting clinicians in identifying abnormalities while reducing false-positive rates.
DISCUSSION: To optimize the clinical implementation of automated AI-based IED detection, it is essential to render the codes publicly available and to standardize the datasets and metrics. Establishing uniform benchmarks will enable objective model comparisons and help determine which approaches are best suited for clinical use.
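In the spirit of the standardized metrics the authors call for, the sketch below computes two common event-level figures for IED detectors, sensitivity and false positives per hour, by matching predicted and annotated spike times within a tolerance window. The tolerance value and the matching rule are illustrative choices, not a benchmark defined by this review.

import numpy as np

def event_metrics(true_events, pred_events, record_hours, tol=0.2):
    # true_events / pred_events: detection times in seconds; a prediction
    # within `tol` seconds of an annotated IED counts as a hit.
    true_events = np.asarray(true_events, dtype=float)
    pred_events = np.asarray(pred_events, dtype=float)
    hits = sum(np.any(np.abs(pred_events - t) <= tol) for t in true_events)
    false_pos = sum(not np.any(np.abs(true_events - p) <= tol) for p in pred_events)
    sensitivity = hits / max(len(true_events), 1)
    fp_per_hour = false_pos / max(record_hours, 1e-9)
    return sensitivity, fp_per_hour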
Additional Links: PMID-40221359
@article {pmid40221359,
year = {2025},
author = {Dessevres, E and Valderrama, M and Le Van Quyen, M},
title = {Artificial intelligence for the detection of interictal epileptiform discharges in EEG signals.},
journal = {Revue neurologique},
volume = {},
number = {},
pages = {},
doi = {10.1016/j.neurol.2025.04.001},
pmid = {40221359},
issn = {0035-3787},
abstract = {INTRODUCTION: Over the past decades, the integration of modern technologies - such as electronic health records, cloud computing, and artificial intelligence (AI) - has revolutionized the collection, storage, and analysis of medical data in neurology. In epilepsy, Interictal Epileptiform Discharges (IEDs) are the most established biomarker, indicating an increased likelihood of seizures. Their detection traditionally relies on visual EEG assessment, a time-consuming and subjective process contributing to a high misdiagnosis rate. These limitations have spurred the development of automated AI-driven approaches aimed at improving accuracy and efficiency in IED detection.
METHODS: Research on automated IED detection began 45 years ago, spanning from morphological methods to deep learning techniques. In this review, we examine various IED detection approaches, evaluating their performance and limitations.
RESULTS: Traditional machine learning and deep learning methods have produced the most promising results to date, and their application in IED detection continues to grow. Today, AI-driven tools are increasingly integrated into clinical workflows, assisting clinicians in identifying abnormalities while reducing false-positive rates.
DISCUSSION: To optimize the clinical implementation of automated AI-based IED detection, it is essential to render the codes publicly available and to standardize the datasets and metrics. Establishing uniform benchmarks will enable objective model comparisons and help determine which approaches are best suited for clinical use.},
}
RevDate: 2025-04-14
CmpDate: 2025-04-12
Enhancing Connected Health Ecosystems Through IoT-Enabled Monitoring Technologies: A Case Study of the Monit4Healthy System.
Sensors (Basel, Switzerland), 25(7):.
The Monit4Healthy system is an IoT-enabled health monitoring solution designed to address critical challenges in real-time biomedical signal processing, energy efficiency, and data transmission. The system's modular design combines wireless communication components with a number of physiological sensors, including galvanic skin response, electromyography, photoplethysmography, and EKG, to allow for the remote gathering and evaluation of health information. To decrease network load and enable the quick identification of abnormalities, edge computing is used for real-time signal filtering and feature extraction. Flexible data transmission based on context and available bandwidth is provided through a hybrid communication approach that includes Bluetooth Low Energy and Wi-Fi. Under typical monitoring scenarios, laboratory testing shows reliable wireless connectivity and continuous battery-powered operation. The Monit4Healthy system is appropriate for scalable deployment in connected health ecosystems and portable health monitoring due to its responsive power management approaches and structured data transmission, which improve the resiliency of the system. The system ensures signal reliability whilst lowering latency and data volume in comparison to conventional cloud-only systems. Limitations include the need for detailed energy profiling, further hardware miniaturization, and sustained real-world validation. By integrating context-aware processing, flexible design, and effective communication, the Monit4Healthy system complements existing IoT health solutions and promotes better integration in clinical and smart city healthcare environments.
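As a hedged example of the kind of edge-side filtering and feature extraction described above, the sketch below band-pass filters a PPG segment and reduces it to a couple of summary features so that only those values, rather than the raw waveform, would need to be transmitted. The filter band, peak-detection settings, and chosen features are assumptions, not the Monit4Healthy firmware.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bandpass(x, fs, lo=0.5, hi=8.0, order=3):
    # Butterworth band-pass covering roughly the band where PPG pulse energy lies.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def edge_features(ppg, fs):
    # Minimal on-device feature extraction: heart rate and pulse variability,
    # so only a few numbers leave the edge node instead of the raw signal.
    clean = bandpass(np.asarray(ppg, dtype=float), fs)
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs))
    ibi = np.diff(peaks) / fs                      # inter-beat intervals (s)
    hr = 60.0 / ibi.mean() if ibi.size else float('nan')
    return {"heart_rate_bpm": hr,
            "ibi_std_s": float(ibi.std()) if ibi.size else float('nan')}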
Additional Links: PMID-40218804
@article {pmid40218804,
year = {2025},
author = {Ianculescu, M and Constantin, VȘ and Gușatu, AM and Petrache, MC and Mihăescu, AG and Bica, O and Alexandru, A},
title = {Enhancing Connected Health Ecosystems Through IoT-Enabled Monitoring Technologies: A Case Study of the Monit4Healthy System.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {7},
pages = {},
pmid = {40218804},
issn = {1424-8220},
mesh = {Humans ; Monitoring, Physiologic/methods ; Wireless Technology ; *Internet of Things ; Signal Processing, Computer-Assisted ; Photoplethysmography ; Electromyography ; Galvanic Skin Response/physiology ; },
abstract = {The Monit4Healthy system is an IoT-enabled health monitoring solution designed to address critical challenges in real-time biomedical signal processing, energy efficiency, and data transmission. The system's modular design merges wireless communication components alongside a number of physiological sensors, including galvanic skin response, electromyography, photoplethysmography, and EKG, to allow for the remote gathering and evaluation of health information. In order to decrease network load and enable the quick identification of abnormalities, edge computing is used for real-time signal filtering and feature extraction. Flexible data transmission based on context and available bandwidth is provided through a hybrid communication approach that includes Bluetooth Low Energy and Wi-Fi. Under typical monitoring scenarios, laboratory testing shows reliable wireless connectivity and ongoing battery-powered operation. The Monit4Healthy system is appropriate for scalable deployment in connected health ecosystems and portable health monitoring due to its responsive power management approaches and structured data transmission, which improve the resiliency of the system. The system ensures the reliability of signals whilst lowering latency and data volume in comparison to conventional cloud-only systems. Limitations include the requirement for energy profiling, distinctive hardware miniaturizing, and sustained real-world validation. By integrating context-aware processing, flexible design, and effective communication, the Monit4Healthy system complements existing IoT health solutions and promotes better integration in clinical and smart city healthcare environments.},
}
MeSH Terms:
Humans
Monitoring, Physiologic/methods
Wireless Technology
*Internet of Things
Signal Processing, Computer-Assisted
Photoplethysmography
Electromyography
Galvanic Skin Response/physiology
RevDate: 2025-04-14
Deep Reinforcement Learning-Enabled Computation Offloading: A Novel Framework to Energy Optimization and Security-Aware in Vehicular Edge-Cloud Computing Networks.
Sensors (Basel, Switzerland), 25(7):.
The Vehicular Edge-Cloud Computing (VECC) paradigm has gained traction as a promising solution that mitigates vehicles' computational constraints by offloading resource-intensive tasks to distributed edge and cloud networks. However, conventional computation offloading mechanisms frequently induce network congestion and service delays, stemming from uneven workload distribution across spatial Roadside Units (RSUs). Moreover, ensuring data security and optimizing energy usage within this framework remain significant challenges. To this end, this study introduces a deep reinforcement learning-enabled computation offloading framework for multi-tier VECC networks. First, a dynamic load-balancing algorithm is developed to optimize the balance among RSUs, incorporating real-time analysis of heterogeneous network parameters, including RSU computational load, channel capacity, and proximity-based latency. Additionally, to alleviate congestion in static RSU deployments, the framework proposes deploying UAVs in high-density zones, dynamically augmenting both storage and processing resources. Moreover, an Advanced Encryption Standard (AES)-based mechanism, secured with dynamic one-time encryption key generation, is implemented to fortify data confidentiality during transmissions. Further, a context-aware edge caching strategy is implemented to preemptively store processed tasks, reducing redundant computations and associated energy overheads. Subsequently, a mixed-integer optimization model is formulated that simultaneously minimizes energy consumption and guarantees latency constraints. Given the combinatorial complexity of large-scale vehicular networks, an equivalent reinforcement learning formulation is derived, and a deep learning-based algorithm is designed to learn near-optimal offloading solutions under dynamic conditions. Empirical evaluations demonstrate that the proposed framework significantly outperforms existing benchmark techniques in terms of energy savings. These results underscore the framework's efficacy in advancing sustainable, secure, and scalable intelligent transportation systems.
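The per-task encryption idea can be sketched with AES-GCM and a freshly generated key and nonce for every offloaded payload, using the Python cryptography package. This only illustrates the "dynamic one-time key" notion; how keys are exchanged between vehicle, RSU or UAV, and cloud is outside the sketch and is not taken from the paper.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_offloaded_task(payload: bytes):
    # Fresh 256-bit key and 96-bit nonce per task, mirroring the idea of
    # one-time key generation; key distribution is out of scope here.
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, associated_data=None)
    return key, nonce, ciphertext

def decrypt_offloaded_task(key, nonce, ciphertext):
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data=None)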
Additional Links: PMID-40218550
@article {pmid40218550,
year = {2025},
author = {Almuseelem, W},
title = {Deep Reinforcement Learning-Enabled Computation Offloading: A Novel Framework to Energy Optimization and Security-Aware in Vehicular Edge-Cloud Computing Networks.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {7},
pages = {},
pmid = {40218550},
issn = {1424-8220},
abstract = {The Vehicular Edge-Cloud Computing (VECC) paradigm has gained traction as a promising solution to mitigate the computational constraints through offloading resource-intensive tasks to distributed edge and cloud networks. However, conventional computation offloading mechanisms frequently induce network congestion and service delays, stemming from uneven workload distribution across spatial Roadside Units (RSUs). Moreover, ensuring data security and optimizing energy usage within this framework remain significant challenges. To this end, this study introduces a deep reinforcement learning-enabled computation offloading framework for multi-tier VECC networks. First, a dynamic load-balancing algorithm is developed to optimize the balance among RSUs, incorporating real-time analysis of heterogeneous network parameters, including RSU computational load, channel capacity, and proximity-based latency. Additionally, to alleviate congestion in static RSU deployments, the framework proposes deploying UAVs in high-density zones, dynamically augmenting both storage and processing resources. Moreover, an Advanced Encryption Standard (AES)-based mechanism, secured with dynamic one-time encryption key generation, is implemented to fortify data confidentiality during transmissions. Further, a context-aware edge caching strategy is implemented to preemptively store processed tasks, reducing redundant computations and associated energy overheads. Subsequently, a mixed-integer optimization model is formulated that simultaneously minimizes energy consumption and guarantees latency constraint. Given the combinatorial complexity of large-scale vehicular networks, an equivalent reinforcement learning form is given. Then a deep learning-based algorithm is designed to learn close-optimal offloading solutions under dynamic conditions. Empirical evaluations demonstrate that the proposed framework significantly outperforms existing benchmark techniques in terms of energy savings. These results underscore the framework's efficacy in advancing sustainable, secure, and scalable intelligent transportation systems.},
}
RevDate: 2025-04-14
EcoTaskSched: a hybrid machine learning approach for energy-efficient task scheduling in IoT-based fog-cloud environments.
Scientific reports, 15(1):12296.
The widespread adoption of cloud services has posed several challenges, primarily revolving around energy and resource efficiency. Integrating cloud and fog resources can help address these challenges by improving fog-cloud computing environments. Nevertheless, the search for optimal task allocation and energy management in such environments continues. Existing studies have introduced notable solutions; however, efficiently utilizing these heterogeneous cloud resources and achieving energy-efficient task scheduling in fog-cloud-of-things environments remains a challenging issue. To tackle these challenges, we propose a novel ML-based EcoTaskSched model, which leverages deep learning for energy-efficient task scheduling in fog-cloud networks. The proposed hybrid model integrates Convolutional Neural Networks (CNNs) with Bidirectional Long Short-Term Memory (BiLSTM) to enhance energy-efficient schedulability and reduce energy usage while ensuring QoS provisioning. The CNN model efficiently extracts workload features from tasks and resources, while the BiLSTM captures complex sequential information, predicting optimal task placement sequences. A real fog-cloud environment is implemented using the COSCO framework for the simulation setup, together with four physical nodes from the Azure B2s plan, to test the proposed model. The DeFog benchmark is used to develop task workloads, and data collection was conducted for both normal and intense workload scenarios. During preprocessing, the data were normalized, treated with feature engineering and augmentation, and then split into training and test sets. In the performance evaluation, the proposed EcoTaskSched model demonstrated superiority by significantly reducing energy consumption and improving job completion rates compared to baseline models. The EcoTaskSched model maintained a high job completion rate of 85%, outperforming GGCN and BiGGCN. It also achieved a lower average response time and SLA violation rate, as well as increased throughput and reduced execution cost, compared to other baseline models. In its optimal configuration, the EcoTaskSched model is successfully applied to fog-cloud computing environments, increasing task handling efficiency and reducing energy consumption while maintaining the required QoS parameters. Our future studies will focus on long-term testing of the EcoTaskSched model in real-world IoT environments. We will also assess its applicability by integrating other ML models, which could provide enhanced insights for optimizing scheduling algorithms across diverse fog-cloud settings.
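A minimal Keras sketch of a CNN-BiLSTM scheduler head is given below: 1D convolutions summarize workload features, a bidirectional LSTM captures the task sequence, and a softmax layer scores candidate placement nodes. Layer sizes, input encoding, and the number of nodes are placeholders rather than the EcoTaskSched configuration.

from tensorflow.keras import layers, models

def build_cnn_bilstm(timesteps, n_features, n_nodes):
    # Convolutions extract local workload patterns; the BiLSTM models the
    # task sequence; the softmax head scores candidate placement nodes.
    inp = layers.Input(shape=(timesteps, n_features))
    x = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(n_nodes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model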
Additional Links: PMID-40211053
@article {pmid40211053,
year = {2025},
author = {Khan, A and Ullah, F and Shah, D and Khan, MH and Ali, S and Tahir, M},
title = {EcoTaskSched: a hybrid machine learning approach for energy-efficient task scheduling in IoT-based fog-cloud environments.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {12296},
pmid = {40211053},
issn = {2045-2322},
abstract = {The widespread adoption of cloud services has posed several challenges, primarily revolving around energy and resource efficiency. Integrating cloud and fog resources can help address these challenges by improving fog-cloud computing environments. Nevertheless, the search for optimal task allocation and energy management in such environments continues. Existing studies have introduced notable solutions; however, it is still a challenging issue to efficiently utilize these heterogeneous cloud resources and achieve energy-efficient task scheduling in fog-cloud of things environment. To tackle these challenges, we propose a novel ML-based EcoTaskSched model, which leverages deep learning for energy-efficient task scheduling in fog-cloud networks. The proposed hybrid model integrates Convolutional Neural Networks (CNNs) with Bidirectional Log-Short Term Memory (BiLSTM) to enhance energy-efficient schedulability and reduce energy usage while ensuring QoS provisioning. The CNN model efficiently extracts workload features from tasks and resources, while the BiLSTM captures complex sequential information, predicting optimal task placement sequences. A real fog-cloud environment is implemented using the COSCO framework for the simulation setup together with four physical nodes from the Azure B2s plan to test the proposed model. The DeFog benchmark is used to develop task workloads, and data collection was conducted for both normal and intense workload scenarios. Before preprocessing the data was normalized, treated with feature engineering and augmentation, and then split into training and test sets. To evaluate performance, the proposed EcoTaskSched model demonstrated superiority by significantly reducing energy consumption and improving job completion rates compared to baseline models. Additionally, the EcoTaskSched model maintained a high job completion rate of 85%, outperforming GGCN and BiGGCN. It also achieved a lower average response time, and SLA violation rates, as well as increased throughput, and reduced execution cost compared to other baseline models. In its optimal configuration, the EcoTaskSched model is successfully applied to fog-cloud computing environments, increasing task handling efficiency and reducing energy consumption while maintaining the required QoS parameters. Our future studies will focus on long-term testing of the EcoTaskSched model in real-world IoT environments. We will also assess its applicability by integrating other ML models, which could provide enhanced insights for optimizing scheduling algorithms across diverse fog-cloud settings.},
}
RevDate: 2025-04-13
D2D assisted cooperative computational offloading strategy in edge cloud computing networks.
Scientific reports, 15(1):12303.
In the computational offloading problem of edge cloud computing (ECC), most studies develop the offloading strategy by optimizing the user cost, but they typically consider only delay and energy consumption and seldom account for the task waiting delay. This is very unfavorable for latency-sensitive tasks in today's intelligent applications. In this paper, using D2D (Device-to-Device) technology, we propose a D2D-assisted cooperative computational offloading strategy (D-CCO) based on user cost optimization to obtain the offloading decision and the number of tasks that can be offloaded. Specifically, we first build a task queue system with multiple local devices, peer devices, and edge processors, and compare the execution performance of computing tasks on different devices, taking into account user costs such as task delay, power consumption, and waiting delay. Then, a stochastic optimization algorithm and the back-pressure algorithm are used to develop the offloading strategy, which ensures the stability of the system and reduces the computing cost to the greatest extent, so as to obtain the optimal offloading decision. In addition, the stability of the proposed algorithm is analyzed theoretically; that is, the upper bounds of all queues in the system are derived. The simulation results confirm the stability of the proposed algorithm and demonstrate that the D-CCO algorithm is superior to alternative approaches. Compared with other algorithms, this algorithm can effectively reduce the user cost.
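The back-pressure flavour of the offloading decision can be illustrated with a toy rule that weighs queue-backlog differentials by link rates and keeps a task local when no destination offers a positive gain. The sketch below uses made-up queue lengths and rates and omits the stochastic (Lyapunov-style) optimization and the derived queue bounds.

def backpressure_offload(local_queue, peer_queues, edge_queue, link_rates):
    # Score each destination by its backlog differential weighted by the
    # achievable transmission rate; offload to the best positive option,
    # otherwise execute locally. Queue lengths are counts of pending tasks.
    candidates = {"local": 0.0}
    for name, q in {**peer_queues, "edge": edge_queue}.items():
        candidates[name] = (local_queue - q) * link_rates.get(name, 0.0)
    best = max(candidates, key=candidates.get)
    return best if candidates[best] > 0 else "local"

# Example with hypothetical queue states and link rates.
decision = backpressure_offload(
    local_queue=12,
    peer_queues={"peer_1": 3, "peer_2": 9},
    edge_queue=5,
    link_rates={"peer_1": 1.0, "peer_2": 0.8, "edge": 2.5},
)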
Additional Links: PMID-40210938
@article {pmid40210938,
year = {2025},
author = {Wang, Y and Kong, D and Chai, H and Qiu, H and Xue, R and Li, S},
title = {D2D assisted cooperative computational offloading strategy in edge cloud computing networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {12303},
pmid = {40210938},
issn = {2045-2322},
support = {242102210050//Science and Technology Project of Henan Province/ ; 252102210114//Science and Technology Project of Henan Province/ ; 202410467035//National College Students' Innovative Training Program/ ; },
abstract = {In the computational offloading problem of edge cloud computing (ECC), almost all researches develop the offloading strategy by optimizing the user cost, but most of them only consider the delay and energy consumption, and seldom consider the task waiting delay. This is very unfavorable for tasks with high sensitive latency requirements in the current era of intelligence. In this paper, by using D2D (Device-to-Device) technology, we propose a D2D-assisted collaboration computational offloading strategy (D-CCO) based on user cost optimization to obtain the offloading decision and the number of tasks that can be offloaded. Specifically, we first build a task queue system with multiple local devices, peer devices and edge processors, and compare the execution performance of computing tasks on different devices, taking into account user costs such as task delay, power consumption, and wait delay. Then, the stochastic optimization algorithm and the back-pressure algorithm are used to develop the offloading strategy, which ensures the stability of the system and reduces the computing cost to the greatest extent, so as to obtain the optimal offloading decision. In addition, the stability of the proposed algorithm is analyzed theoretically, that is, the upper bounds of all queues in the system are derived. The simulation results show the stability of the proposed algorithm, and demonstrate that the D-CCO algorithm is superior to other alternatives. Compared with other algorithms, this algorithm can effectively reduce the user cost.},
}
RevDate: 2025-04-13
Research on water body information extraction and monitoring in high water table mining areas based on Google Earth Engine.
Scientific reports, 15(1):12133.
The extensive and intensive exploitation of coal resources has led to a particularly prominent issue of water accumulation in high groundwater table mining areas, significantly impacting the surrounding ecological environment and directly threatening the red line of cultivated land and regional food security. To provide a scientific basis for the ecological restoration of water accumulation areas in coal mining subsidence, a study on the extraction of water body information in high groundwater level subsidence areas is conducted. The spectral characteristics of land types within mining subsidence areas were analyzed through the application of the Google Earth Engine (GEE) big data cloud platform and Landsat series imagery. This study addressed technical bottlenecks in applying traditional water indices in mining areas, such as spectral interference from coal slag, under-detection of small water bodies, and misclassification of agricultural fields. An Improved Normalized Difference Water Index (INDWI) was proposed based on the analysis of spectral characteristics of surface objects, in conjunction with the OTSU algorithm. The effectiveness of water body extraction using INDWI was compared with that of Normalized Difference Water Index (NDWI), Enhanced Water Index (EWI), and Modified Normalized Difference Water Index (MNDWI). The results indicated that: (1) The INDWI demonstrated the highest overall accuracy, surpassing 89%, and a Kappa coefficient exceeding 80%. The extraction of water body information in mining areas was significantly superior to that achieved by the other three prevalent water indices. (2) The extraction results of the MNDWI and INDWI water Index generally aligned with the actual conditions. The boundaries of water bodies extracted using MNDWI in mining subsidence areas were somewhat ambiguous, leading to the misidentification of small water accumulation pits and misclassification of certain agricultural fields. In contrast, the extraction results of INDWI exhibited better alignment with the imagery, with no significant identification errors observed. (3) Through the comparison of three typical areas, it was concluded that the clarity of the water body boundary lines extracted by INDWI was higher, with relatively fewer internal noise points, and the soil ridges and bridges within the water bodies were distinctly visible, aligning with the actual situation. The research findings offer a foundation for the formulation of land reclamation and ecological restoration plans in coal mining subsidence areas.
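Since the abstract does not reproduce the INDWI formula, the sketch below shows the general water-index workflow it builds on: compute a normalized-difference index (here the standard MNDWI from green and SWIR1 reflectance) and threshold it with the OTSU algorithm to obtain a water mask. The band choices and the index itself are therefore stand-ins, not the proposed INDWI.

import numpy as np
from skimage.filters import threshold_otsu

def mndwi(green, swir1, eps=1e-6):
    # Modified NDWI: (Green - SWIR1) / (Green + SWIR1).
    green = green.astype(float)
    swir1 = swir1.astype(float)
    return (green - swir1) / (green + swir1 + eps)

def water_mask(index_img):
    # OTSU picks the threshold separating water from non-water pixels.
    t = threshold_otsu(index_img[np.isfinite(index_img)])
    return index_img > t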
Additional Links: PMID-40204841
@article {pmid40204841,
year = {2025},
author = {Zhong, A and Wang, Z and Gen, Y},
title = {Research on water body information extraction and monitoring in high water table mining areas based on Google Earth Engine.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {12133},
pmid = {40204841},
issn = {2045-2322},
support = {2241STC60470//Beijing Business Environment Reform and Support Program in the field of ecology and environment/ ; },
abstract = {The extensive and intensive exploitation of coal resources has led to a particularly prominent issue of water accumulation in high groundwater table mining areas, significantly impacting the surrounding ecological environment and directly threatening the red line of cultivated land and regional food security. To provide a scientific basis for the ecological restoration of water accumulation areas in coal mining subsidence, a study on the extraction of water body information in high groundwater level subsidence areas is conducted. The spectral characteristics of land types within mining subsidence areas were analyzed through the application of the Google Earth Engine (GEE) big data cloud platform and Landsat series imagery. This study addressed technical bottlenecks in applying traditional water indices in mining areas, such as spectral interference from coal slag, under-detection of small water bodies, and misclassification of agricultural fields. An Improved Normalized Difference Water Index (INDWI) was proposed based on the analysis of spectral characteristics of surface objects, in conjunction with the OTSU algorithm. The effectiveness of water body extraction using INDWI was compared with that of Normalized Difference Water Index (NDWI), Enhanced Water Index (EWI), and Modified Normalized Difference Water Index (MNDWI). The results indicated that: (1) The INDWI demonstrated the highest overall accuracy, surpassing 89%, and a Kappa coefficient exceeding 80%. The extraction of water body information in mining areas was significantly superior to that achieved by the other three prevalent water indices. (2) The extraction results of the MNDWI and INDWI water Index generally aligned with the actual conditions. The boundaries of water bodies extracted using MNDWI in mining subsidence areas were somewhat ambiguous, leading to the misidentification of small water accumulation pits and misclassification of certain agricultural fields. In contrast, the extraction results of INDWI exhibited better alignment with the imagery, with no significant identification errors observed. (3) Through the comparison of three typical areas, it was concluded that the clarity of the water body boundary lines extracted by INDWI was higher, with relatively fewer internal noise points, and the soil ridges and bridges within the water bodies were distinctly visible, aligning with the actual situation. The research findings offer a foundation for the formulation of land reclamation and ecological restoration plans in coal mining subsidence areas.},
}
RevDate: 2025-04-09
CmpDate: 2025-04-09
Software Quality Injection (QI): A Quality Driven Holistic Approach for Optimising Big Healthcare Data Processing.
Studies in health technology and informatics, 323:141-145.
The rapid growth of big data is driving innovation in software development, with advanced analytics offering transformative opportunities in applied computing. Big Healthcare Data (BHD), characterised by multi-structured and complex data types, requires resilient and scalable architectures to effectively address critical data quality issues. This paper proposes a holistic framework for adopting advanced cloud-computing strategies to manage and optimise the unique characteristics of BHD processing. It outlines a comprehensive approach for ensuring optimal data handling for critical healthcare workflows by enhancing the system's quality attributes. The proposed framework prioritises and dynamically adjusts software functionalities in real-time, harnessing sophisticated orchestration capabilities to manage complex, multi-dimensional healthcare datasets, streamline operations, and bolster system resilience.
Additional Links: PMID-40200462
@article {pmid40200462,
year = {2025},
author = {Haddad, T and Kumarapeli, P and de Lusignan, S and Khaddaj, S and Barman, S},
title = {Software Quality Injection (QI): A Quality Driven Holistic Approach for Optimising Big Healthcare Data Processing.},
journal = {Studies in health technology and informatics},
volume = {323},
number = {},
pages = {141-145},
doi = {10.3233/SHTI250065},
pmid = {40200462},
issn = {1879-8365},
mesh = {*Big Data ; *Software/standards ; Humans ; *Data Accuracy ; *Cloud Computing/standards ; *Electronic Health Records/organization & administration ; },
abstract = {The rapid growth of big data is driving innovation in software development, with advanced analytics offering transformative opportunities in applied computing. Big Healthcare Data (BHD), characterised by multi-structured and complex data types, requires resilient and scalable architectures to effectively address critical data quality issues. This paper proposes a holistic framework for adopting advanced cloud-computing strategies to manage and optimise the unique characteristics of BHD processing. It outlines a comprehensive approach for ensuring optimal data handling for critical healthcare workflows by enhancing the system's quality attributes. The proposed framework prioritises and dynamically adjusts software functionalities in real-time, harnessing sophisticated orchestration capabilities to manage complex, multi-dimensional healthcare datasets, streamline operations, and bolster system resilience.},
}
MeSH Terms:
*Big Data
*Software/standards
Humans
*Data Accuracy
*Cloud Computing/standards
*Electronic Health Records/organization & administration
RevDate: 2025-04-11
CmpDate: 2025-04-09
The RaDiCo information system for rare disease cohorts.
Orphanet journal of rare diseases, 20(1):166.
BACKGROUND: Rare diseases (RDs) clinical care and research face several challenges. Patients are dispersed over large geographic areas, their number per disease is limited, just like the number of researchers involved. Current databases as well as biological collections, when existing, are generally local, of modest size, incomplete, of uneven quality, heterogeneous in format and content, and rarely accessible or standardised to support interoperability. Most disease phenotypes are complex corresponding to multi-systemic conditions, with insufficient interdisciplinary cooperation. Thus emerged the need to generate, within a coordinated, mutualised, secure and interoperable framework, high-quality data from national or international RD cohorts, based on deep phenotyping, including molecular analysis data, notably genotypic. The RaDiCo program objective was to create, under the umbrella of Inserm, a national operational platform dedicated to the development of RD e-cohorts. Its Information System (IS) is presented here.
MATERIAL AND METHODS: Constructed on the cloud computing principle, the RaDiCo platform was designed to promote mutualization and factorization of processes and services, for both clinical epidemiology support and IS. RaDiCo IS is based on an interoperability framework combining a unique RD identifier, data standardisation, FAIR principles, data exchange flows/processes and data security principles compliant with the European GDPR.
RESULTS: RaDiCo IS favours a secure, open-source web application in order to implement and manage online databases and give patients themselves the opportunity to collect their data. It ensures a continuous monitoring of data quality and consistency over time. RaDiCo IS proved to be efficient, currently hosting 13 e-cohorts, covering 67 distinct RDs. As of April 2024, 8063 patients were recruited from 180 specialised RD sites spread across the national territory.
DISCUSSION: The RaDiCo operational platform is equivalent to a national infrastructure. Its IS enables RD e-cohorts to be developed on a shared platform with no limit on size or number. Compliant with the GDPR, it is compatible with the French National Health Data Hub and can be extended to the RDs European Reference Networks (ERNs).
CONCLUSION: RaDiCo provides a robust IS, compatible with the French Data Hub and RDs ERNs, integrated on a RD platform that enables e-cohorts creation, monitoring and analysis.
Additional Links: PMID-40200372
@article {pmid40200372,
year = {2025},
author = {Landais, P and Gueguen, S and Clement, A and Amselem, S and , },
title = {The RaDiCo information system for rare disease cohorts.},
journal = {Orphanet journal of rare diseases},
volume = {20},
number = {1},
pages = {166},
pmid = {40200372},
issn = {1750-1172},
support = {ANR-10-COHO-0003//Agence Nationale de la Recherche/ ; },
mesh = {*Rare Diseases ; Humans ; *Information Systems ; Databases, Factual ; Cohort Studies ; },
abstract = {BACKGROUND: Rare diseases (RDs) clinical care and research face several challenges. Patients are dispersed over large geographic areas, their number per disease is limited, just like the number of researchers involved. Current databases as well as biological collections, when existing, are generally local, of modest size, incomplete, of uneven quality, heterogeneous in format and content, and rarely accessible or standardised to support interoperability. Most disease phenotypes are complex corresponding to multi-systemic conditions, with insufficient interdisciplinary cooperation. Thus emerged the need to generate, within a coordinated, mutualised, secure and interoperable framework, high-quality data from national or international RD cohorts, based on deep phenotyping, including molecular analysis data, notably genotypic. The RaDiCo program objective was to create, under the umbrella of Inserm, a national operational platform dedicated to the development of RD e-cohorts. Its Information System (IS) is presented here.
MATERIAL AND METHODS: Constructed on the cloud computing principle, the RaDiCo platform was designed to promote mutualization and factorization of processes and services, for both clinical epidemiology support and IS. RaDiCo IS is based on an interoperability framework combining a unique RD identifier, data standardisation, FAIR principles, data exchange flows/processes and data security principles compliant with the European GDPR.
RESULTS: RaDiCo IS favours a secure, open-source web application in order to implement and manage online databases and give patients themselves the opportunity to collect their data. It ensures a continuous monitoring of data quality and consistency over time. RaDiCo IS proved to be efficient, currently hosting 13 e-cohorts, covering 67 distinct RDs. As of April 2024, 8063 patients were recruited from 180 specialised RD sites spread across the national territory.
DISCUSSION: The RaDiCo operational platform is equivalent to a national infrastructure. Its IS enables RD e-cohorts to be developed on a shared platform with no limit on size or number. Compliant with the GDPR, it is compatible with the French National Health Data Hub and can be extended to the RDs European Reference Networks (ERNs).
CONCLUSION: RaDiCo provides a robust IS, compatible with the French Data Hub and RDs ERNs, integrated on a RD platform that enables e-cohorts creation, monitoring and analysis.},
}
MeSH Terms:
*Rare Diseases
Humans
*Information Systems
Databases, Factual
Cohort Studies
RevDate: 2025-04-11
A multi-objective approach to load balancing in cloud environments integrating ACO and WWO techniques.
Scientific reports, 15(1):12036.
Effective load balancing and resource allocation are essential in dynamic cloud computing environments, where the demand for rapidity and continuous service is perpetually increasing. This paper introduces an innovative hybrid optimisation method that combines water wave optimization (WWO) and ant colony optimization (ACO) to tackle these challenges effectively. ACO is acknowledged for its proficiency in conducting local searches effectively, facilitating the swift discovery of high-quality solutions. In contrast, WWO specialises in global exploration, guaranteeing extensive coverage of the solution space. Collectively, these methods harness their distinct advantages to enhance various objectives: decreasing response times, maximising resource efficiency, and lowering operational expenses. We assessed the efficacy of our hybrid methodology by conducting extensive simulations using a cloud-sim simulator and a variety of workload trace files. We assessed our methods in comparison to well-established algorithms, such as WWO, genetic algorithm (GA), spider monkey optimization (SMO), and ACO. Key performance indicators, such as task scheduling duration, execution costs, energy consumption, and resource utilisation, were meticulously assessed. The findings demonstrate that the hybrid WWO-ACO approach enhances task scheduling efficiency by 11%, decreases operational expenses by 8%, and lowers energy usage by 12% relative to conventional methods. In addition, the algorithm consistently achieved an impressive equilibrium in resource allocation, with balance values ranging from 0.87 to 0.95. The results emphasise the hybrid WWO-ACO algorithm's substantial impact on improving system performance and customer satisfaction, thereby demonstrating a significant improvement in cloud computing optimisation techniques.
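As a rough illustration of how such a hybrid scheduler can be organised, the sketch below combines a pheromone-guided (ACO-style) assignment step with a wave-propagation (WWO-style) perturbation step to minimise makespan on a toy task/VM set. It is not the authors' algorithm: the task lengths, VM speeds, parameter values, and update rules are all invented for the example.

```python
# Minimal sketch of a hybrid WWO + ACO task scheduler (illustrative only).
# Task lengths, VM speeds, and parameter values are invented for the example.
import random

random.seed(42)

TASKS = [random.randint(100, 1000) for _ in range(30)]   # task lengths (MI)
VM_SPEEDS = [500, 750, 1000, 1250]                        # VM capacities (MIPS)
N_VMS = len(VM_SPEEDS)

def makespan(assignment):
    """Completion time of the busiest VM for a task->VM assignment."""
    load = [0.0] * N_VMS
    for task, vm in zip(TASKS, assignment):
        load[vm] += task / VM_SPEEDS[vm]
    return max(load)

def aco_construct(pheromone, alpha=1.0, beta=2.0):
    """Build one assignment, biased by pheromone and VM speed (heuristic)."""
    assignment = []
    for _ in TASKS:
        weights = [(pheromone[vm] ** alpha) * (VM_SPEEDS[vm] ** beta) for vm in range(N_VMS)]
        assignment.append(random.choices(range(N_VMS), weights=weights)[0])
    return assignment

def wwo_perturb(assignment, wavelength):
    """WWO-style propagation: mutate a fraction of genes set by the wavelength."""
    child = assignment[:]
    for i in range(len(child)):
        if random.random() < wavelength:
            child[i] = random.randrange(N_VMS)
    return child

def hybrid_schedule(iters=200):
    pheromone = [1.0] * N_VMS
    best = aco_construct(pheromone)
    best_cost = makespan(best)
    wavelength = 0.5
    for _ in range(iters):
        # ACO phase: pheromone-guided construction (local exploitation).
        candidate = aco_construct(pheromone)
        # WWO phase: wave propagation around the candidate (global exploration).
        wave = wwo_perturb(candidate, wavelength)
        for sol in (candidate, wave):
            cost = makespan(sol)
            if cost < best_cost:
                best, best_cost = sol, cost
                wavelength = max(0.05, wavelength * 0.95)  # shrink waves as we improve
        # Evaporate and reinforce pheromone with the best-so-far assignment.
        pheromone = [0.9 * p for p in pheromone]
        for vm in best:
            pheromone[vm] += 1.0 / (1.0 + best_cost)
    return best, best_cost

if __name__ == "__main__":
    assignment, cost = hybrid_schedule()
    print("best makespan:", round(cost, 2))
```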
Additional Links: PMID-40200080
@article {pmid40200080,
year = {2025},
author = {Lilhore, UK and Simaiya, S and Prajapati, YN and Rai, AK and Ghith, ES and Tlija, M and Lamoudan, T and Abdelhamid, AA},
title = {A multi-objective approach to load balancing in cloud environments integrating ACO and WWO techniques.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {12036},
pmid = {40200080},
issn = {2045-2322},
abstract = {Effective load balancing and resource allocation are essential in dynamic cloud computing environments, where the demand for rapidity and continuous service is perpetually increasing. This paper introduces an innovative hybrid optimisation method that combines water wave optimization (WWO) and ant colony optimization (ACO) to tackle these challenges effectively. ACO is acknowledged for its proficiency in conducting local searches effectively, facilitating the swift discovery of high-quality solutions. In contrast, WWO specialises in global exploration, guaranteeing extensive coverage of the solution space. Collectively, these methods harness their distinct advantages to enhance various objectives: decreasing response times, maximising resource efficiency, and lowering operational expenses. We assessed the efficacy of our hybrid methodology by conducting extensive simulations using a cloud-sim simulator and a variety of workload trace files. We assessed our methods in comparison to well-established algorithms, such as WWO, genetic algorithm (GA), spider monkey optimization (SMO), and ACO. Key performance indicators, such as task scheduling duration, execution costs, energy consumption, and resource utilisation, were meticulously assessed. The findings demonstrate that the hybrid WWO-ACO approach enhances task scheduling efficiency by 11%, decreases operational expenses by 8%, and lowers energy usage by 12% relative to conventional methods. In addition, the algorithm consistently achieved an impressive equilibrium in resource allocation, with balance values ranging from 0.87 to 0.95. The results emphasise the hybrid WWO-ACO algorithm's substantial impact on improving system performance and customer satisfaction, thereby demonstrating a significant improvement in cloud computing optimisation techniques.},
}
RevDate: 2025-04-09
Schema: A Quantified Learning Solution to Augment, Assess, and Analyze Learning in Medicine.
Cureus, 17(4):e81803.
Quantified learning is the use of digital technologies, such as mobile applications, cloud-based analytics, machine learning algorithms, and real-time performance tracking systems, to deliver more granular, personalized, and measurable educational experiences and outcomes. These principles, along with horizontal and vertical integrative learning, form the basis of modern learning methods. As we witness a global shift from traditional learning to competency-based education, educators agree that there is a need to promote quantified learning. The increased accessibility of technology in educational institutions has allowed unprecedented innovation in learning. The convergence of mobile computing, cloud computing, and Web 2.0 tools has made such models more practical. Despite this, little has been achieved in medical education, where quantified learning and technology aids are limited to a few institutions and used mainly in simulated classroom environments. This innovation report describes the development, dynamics, and scope of Schema, an app-based e-learning solution designed for undergraduate medical students to promote quantified, integrative, high-yield, and self-directed learning along with feedback-based self-assessment and progress monitoring. Schema is linked to a database of preclinical, paraclinical, and clinical multiple choice questions (MCQs) that it organizes into granular subtopics independent of the core subject. It also monitors the progress and performance of the learner as they solve these MCQs and converts that information into quantifiable visual feedback for the learners, which is used to target, improve, revise, and assess their competency. This is important considering the new generation of medical students open to introducing themselves to technology, novel study techniques, and resources outside the traditional learning environment of a medical school. Schema was made available to medical students as part of an e-learning platform in 2022 to aid their learning. In addition, we also aim to use Schema and the range of possibilities it offers to gain deeper insights into the way we learn medicine.
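The quantified-feedback loop described above amounts to aggregating MCQ attempts into per-subtopic mastery scores. The toy sketch below shows one way such a roll-up could look; the record fields and subtopic names are hypothetical and not taken from Schema.

```python
# Toy sketch: roll MCQ attempt logs up into per-subtopic mastery scores.
# The record fields ("subtopic", "correct") are hypothetical, not Schema's actual schema.
from collections import defaultdict

attempts = [
    {"subtopic": "cardiac physiology", "correct": True},
    {"subtopic": "cardiac physiology", "correct": False},
    {"subtopic": "renal pharmacology", "correct": True},
    {"subtopic": "renal pharmacology", "correct": True},
]

totals, hits = defaultdict(int), defaultdict(int)
for a in attempts:
    totals[a["subtopic"]] += 1
    hits[a["subtopic"]] += int(a["correct"])

for subtopic in sorted(totals):
    mastery = hits[subtopic] / totals[subtopic]
    # Low-mastery subtopics would be flagged for targeted revision in the app.
    print(f"{subtopic}: {mastery:.0%} ({hits[subtopic]}/{totals[subtopic]})")
```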
Additional Links: PMID-40196761
@article {pmid40196761,
year = {2025},
author = {Sebin, D and Doda, V and Balamani, S},
title = {Schema: A Quantified Learning Solution to Augment, Assess, and Analyze Learning in Medicine.},
journal = {Cureus},
volume = {17},
number = {4},
pages = {e81803},
pmid = {40196761},
issn = {2168-8184},
abstract = {Quantified learning is the use of digital technologies, such as mobile applications, cloud-based analytics, machine learning algorithms, and real-time performance tracking systems, to deliver more granular, personalized, and measurable educational experiences and outcomes. These principles, along with horizontal and vertical integrative learning, form the basis of modern learning methods. As we witness a global shift from traditional learning to competency-based education, educators agree that there is a need to promote quantified learning. The increased accessibility of technology in educational institutions has allowed unprecedented innovation in learning. The convergence of mobile computing, cloud computing, and Web 2.0 tools has made such models more practical. Despite this, little has been achieved in medical education, where quantified learning and technology aids are limited to a few institutions and used mainly in simulated classroom environments. This innovation report describes the development, dynamics, and scope of Schema, an app-based e-learning solution designed for undergraduate medical students to promote quantified, integrative, high-yield, and self-directed learning along with feedback-based self-assessment and progress monitoring. Schema is linked to a database of preclinical, paraclinical, and clinical multiple choice questions (MCQs) that it organizes into granular subtopics independent of the core subject. It also monitors the progress and performance of the learner as they solve these MCQs and converts that information into quantifiable visual feedback for the learners, which is used to target, improve, revise, and assess their competency. This is important considering the new generation of medical students open to introducing themselves to technology, novel study techniques, and resources outside the traditional learning environment of a medical school. Schema was made available to medical students as part of an e-learning platform in 2022 to aid their learning. In addition, we also aim to use Schema and the range of possibilities it offers to gain deeper insights into the way we learn medicine.},
}
RevDate: 2025-04-10
A secure and scalable IoT access control framework with dynamic attribute updates and policy hiding.
Scientific reports, 15(1):11913.
With the rapid rise of Internet of Things (IoT) technology, cloud computing and attribute-based encryption (ABE) are often employed to safeguard the privacy and security of IoT data. However, most blockchain based access control methods are one-way, and user access policies are public, which cannot simultaneously meet the needs of dynamic attribute updates, two-way verification of users and data, and secure data transmission. To handle such challenges, we propose an attribute-based encryption scheme that satisfies real-time and secure sharing requirements through attribute updates and policy hiding. First, we designed a new dynamic update and policy hiding bidirectional attribute access control (DUPH-BAAC) scheme. In addition, a strategy hiding technique was adopted. The data owner sends encrypted addresses with hidden access policies to the blockchain network for verification through transactions. Then, the user locally matches attributes, the smart contract verifies user permissions, and generates access transactions for users who meet access policies. Moreover, the cloud server receives user identity keys and matches the user attribute set with the ciphertext attribute set. Besides, blockchain networks replace traditional IoT centralized servers for identity authentication, authorization, key management, and attribute updates, reducing information leakage risk. Finally, we demonstrate that the DUPH-BAAC scheme can resist indistinguishable choice access structures and selective plaintext attacks, achieving IND-sAS-CPA security.
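The scheme itself is not reproduced here, but the core idea of matching a user's attribute set against a hidden policy can be illustrated with a deliberately simplified sketch: the policy is published only as salted hashes of its required attributes, so a holder of the right attributes can test coverage without the policy being readable. This is not attribute-based encryption and offers none of DUPH-BAAC's guarantees; it only shows the matching step.

```python
# Toy illustration of "policy hiding" by hashing attributes (NOT the paper's
# DUPH-BAAC construction and NOT a secure ABE scheme -- just the matching idea).
import hashlib
import os

def h(attr: str, salt: bytes) -> str:
    return hashlib.sha256(salt + attr.encode()).hexdigest()

# Data owner: publish only salted hashes of the required attributes.
salt = os.urandom(16)
policy_attrs = {"cardiologist", "hospital-A"}
hidden_policy = {h(a, salt) for a in policy_attrs}

# User: hashes their own attributes with the shared salt and checks coverage.
user_attrs = {"cardiologist", "hospital-A", "researcher"}
satisfies = hidden_policy.issubset({h(a, salt) for a in user_attrs})
print("access granted" if satisfies else "access denied")
```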
Additional Links: PMID-40195353
@article {pmid40195353,
year = {2025},
author = {Xu, Z and Zhou, W and Han, H and Dong, X and Zhang, S and Hu, Z},
title = {A secure and scalable IoT access control framework with dynamic attribute updates and policy hiding.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {11913},
pmid = {40195353},
issn = {2045-2322},
support = {2022BAA040//The Key-Area Research and Development Program of Hubei Province/ ; 2020B1111420002//The Key-Area Research and Development Program of Guangdong Province/ ; 2022-11-4-3//The Science and Technology Project of Department of Transport of Hubei Province/ ; BSQD2019027//The Innovation Fund of Hubei University of Technology/ ; },
abstract = {With the rapid rise of Internet of Things (IoT) technology, cloud computing and attribute-based encryption (ABE) are often employed to safeguard the privacy and security of IoT data. However, most blockchain based access control methods are one-way, and user access policies are public, which cannot simultaneously meet the needs of dynamic attribute updates, two-way verification of users and data, and secure data transmission. To handle such challenges, we propose an attribute-based encryption scheme that satisfies real-time and secure sharing requirements through attribute updates and policy hiding. First, we designed a new dynamic update and policy hiding bidirectional attribute access control (DUPH-BAAC) scheme. In addition, a strategy hiding technique was adopted. The data owner sends encrypted addresses with hidden access policies to the blockchain network for verification through transactions. Then, the user locally matches attributes, the smart contract verifies user permissions, and generates access transactions for users who meet access policies. Moreover, the cloud server receives user identity keys and matches the user attribute set with the ciphertext attribute set. Besides, blockchain networks replace traditional IoT centralized servers for identity authentication, authorization, key management, and attribute updates, reducing information leakage risk. Finally, we demonstrate that the DUPH-BAAC scheme can resist indistinguishable choice access structures and selective plaintext attacks, achieving IND-sAS-CPA security.},
}
RevDate: 2025-04-09
CmpDate: 2025-04-07
Automated mapping of land cover in Google Earth Engine platform using multispectral Sentinel-2 and MODIS image products.
PloS one, 20(4):e0312585.
Land cover mapping often relies on supervised classification, which can suffer from insufficient sample sizes and sample confusion; this study therefore assessed the accuracy of a fast and reliable method for automatic labeling and collection of training samples. Using scripting on the Google Earth Engine (GEE) cloud-based platform, a large and reliable training dataset for multispectral Sentinel-2 imagery was extracted automatically across the study area from the existing MODIS land cover product. To enhance confidence in the training class labels, homogeneous 20 m Sentinel-2 pixels within each 500 m MODIS pixel were selected, and the minority of heterogeneous 20 m pixels were removed based on spectral centroid and Euclidean distance calculations. Quality control and spatial filtering were then applied to all land cover classes to generate a reliable and representative training dataset, which was subsequently used to train Classification and Regression Tree (CART), Random Forest (RF), and Support Vector Machine (SVM) classifiers. The results show that the main land cover types in the study area, as distinguished by the three classifiers, were Evergreen Broadleaf Forests, Mixed Forests, Woody Savannas, and Croplands. In the training and validation samples, the less computationally intensive CART classifier correctly classified more pixels than the RF and SVM classifiers. Moreover, the user's and producer's accuracies, overall accuracy, and kappa coefficient of the CART classifier were the best, indicating that CART was the most suitable classifier for this automatic land cover mapping workflow. The proposed method can automatically generate a large number of reliable and accurate training samples in a timely manner, which is promising for future large-scale land cover mapping.
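A hedged sketch of the general workflow (automatically labelling Sentinel-2 pixels from the MODIS land cover product and training a CART classifier) is shown below using the Google Earth Engine Python API. The region, dates, sample sizes, and band choices are placeholders, and the paper's spectral-homogeneity screening of 20 m pixels within each 500 m MODIS pixel is omitted.

```python
# Hedged sketch (not the authors' code): derive CART training labels for a
# Sentinel-2 composite from the MODIS MCD12Q1 land cover product in the GEE
# Python API. The region, dates, and sample sizes are placeholders; collection
# IDs and band names follow the public GEE data catalog.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([102.0, 24.0, 103.0, 25.0])  # placeholder AOI

# Median Sentinel-2 surface-reflectance composite for the target year.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
      .filterBounds(region)
      .filterDate('2020-01-01', '2020-12-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median())
bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']

# 500 m MODIS land cover labels (IGBP scheme), used as "free" training labels.
modis_lc = (ee.ImageCollection('MODIS/006/MCD12Q1')
            .filterDate('2020-01-01', '2020-12-31')
            .first()
            .select('LC_Type1'))

# Attach the MODIS label to each Sentinel-2 pixel and draw stratified samples.
stack = s2.select(bands).addBands(modis_lc)
samples = stack.stratifiedSample(
    numPoints=500, classBand='LC_Type1', region=region, scale=20, geometries=True)

# Train a CART classifier and classify the composite.
cart = ee.Classifier.smileCart().train(
    features=samples, classProperty='LC_Type1', inputProperties=bands)
classified = s2.select(bands).classify(cart)
print(classified.getInfo()['bands'][0]['id'])  # sanity check: 'classification'
```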
Additional Links: PMID-40193364
@article {pmid40193364,
year = {2025},
author = {Pan, X and Wang, Z and Feng, G and Wang, S and Samiappan, S},
title = {Automated mapping of land cover in Google Earth Engine platform using multispectral Sentinel-2 and MODIS image products.},
journal = {PloS one},
volume = {20},
number = {4},
pages = {e0312585},
pmid = {40193364},
issn = {1932-6203},
mesh = {Support Vector Machine ; *Satellite Imagery/methods ; Forests ; *Environmental Monitoring/methods ; *Image Processing, Computer-Assisted/methods ; },
abstract = {Land cover mapping often relies on supervised classification, which can suffer from insufficient sample sizes and sample confusion; this study therefore assessed the accuracy of a fast and reliable method for automatic labeling and collection of training samples. Using scripting on the Google Earth Engine (GEE) cloud-based platform, a large and reliable training dataset for multispectral Sentinel-2 imagery was extracted automatically across the study area from the existing MODIS land cover product. To enhance confidence in the training class labels, homogeneous 20 m Sentinel-2 pixels within each 500 m MODIS pixel were selected, and the minority of heterogeneous 20 m pixels were removed based on spectral centroid and Euclidean distance calculations. Quality control and spatial filtering were then applied to all land cover classes to generate a reliable and representative training dataset, which was subsequently used to train Classification and Regression Tree (CART), Random Forest (RF), and Support Vector Machine (SVM) classifiers. The results show that the main land cover types in the study area, as distinguished by the three classifiers, were Evergreen Broadleaf Forests, Mixed Forests, Woody Savannas, and Croplands. In the training and validation samples, the less computationally intensive CART classifier correctly classified more pixels than the RF and SVM classifiers. Moreover, the user's and producer's accuracies, overall accuracy, and kappa coefficient of the CART classifier were the best, indicating that CART was the most suitable classifier for this automatic land cover mapping workflow. The proposed method can automatically generate a large number of reliable and accurate training samples in a timely manner, which is promising for future large-scale land cover mapping.},
}
MeSH Terms:
Support Vector Machine
*Satellite Imagery/methods
Forests
*Environmental Monitoring/methods
*Image Processing, Computer-Assisted/methods
RevDate: 2025-04-27
CmpDate: 2025-04-27
Artificial intelligence strategies based on random forests for detection of AI-generated content in public health.
Public health, 242:382-387.
OBJECTIVES: To train and test a Random Forest machine learning model with the ability to distinguish AI-generated from human-generated textual content in the domain of public health, and public health policy.
STUDY DESIGN: Supervised machine learning study.
METHODS: A dataset comprising 1000 human-generated and 1000 AI-generated paragraphs was created. Textual features were extracted using TF-IDF vectorization which calculates term frequency (TF) and Inverse document frequency (IDF), and combines the two measures to produce a score for individual terms. The Random Forest model was trained and tested using the Scikit-Learn library and Jupyter Notebook service in the Google Colab cloud-based environment, with Google CPU hardware acceleration.
RESULTS: The model achieved a classification accuracy of 81.8 % and an area under the ROC curve of 0.9. For human-generated content, precision, recall, and F1-score were 0.85, 0.78, and 0.81, respectively. For AI-generated content, these metrics were 0.79, 0.86, and 0.82. The MCC value of 0.64 indicated moderate to strong predictive power. The model demonstrated robust sensitivity (recall for AI-generated class) of 0.86 and specificity (recall for human-generated class) of 0.78.
CONCLUSIONS: The model exhibited acceptable performance, as measured by classification accuracy, area under the receiver operating characteristic curve, and other metrics. This approach can be further improved by incorporating additional supervised machine learning techniques and serves as a foundation for the future development of a sophisticated and innovative AI system. Such a system could play a crucial role in combating misinformation and enhancing public trust across various government platforms, media outlets, and social networks.
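For readers unfamiliar with the pipeline, the following scikit-learn sketch reproduces its general shape (TF-IDF features feeding a Random Forest) on a four-sentence placeholder corpus; the texts, labels, and hyperparameters are illustrative and not the study's.

```python
# Minimal sketch of the TF-IDF + Random Forest pipeline described above
# (illustrative; the corpus here is a placeholder, not the study's dataset).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report, matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "Vaccination campaigns reduced measles incidence across the region.",
    "Public health policy should prioritise equitable access to care.",
    "The integration of synergistic frameworks optimises holistic outcomes.",
    "Leveraging scalable paradigms enhances stakeholder-centric wellness delivery.",
]
labels = [0, 0, 1, 1]  # 0 = human-generated, 1 = AI-generated (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(classification_report(y_test, pred, zero_division=0))
print("MCC:", matthews_corrcoef(y_test, pred))
```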
Additional Links: PMID-40188709
@article {pmid40188709,
year = {2025},
author = {Pantic, IV and Mugosa, S},
title = {Artificial intelligence strategies based on random forests for detection of AI-generated content in public health.},
journal = {Public health},
volume = {242},
number = {},
pages = {382-387},
doi = {10.1016/j.puhe.2025.03.029},
pmid = {40188709},
issn = {1476-5616},
mesh = {Humans ; *Artificial Intelligence ; *Public Health ; *Machine Learning ; *Supervised Machine Learning ; Random Forest ; },
abstract = {OBJECTIVES: To train and test a Random Forest machine learning model with the ability to distinguish AI-generated from human-generated textual content in the domain of public health, and public health policy.
STUDY DESIGN: Supervised machine learning study.
METHODS: A dataset comprising 1000 human-generated and 1000 AI-generated paragraphs was created. Textual features were extracted using TF-IDF vectorization which calculates term frequency (TF) and Inverse document frequency (IDF), and combines the two measures to produce a score for individual terms. The Random Forest model was trained and tested using the Scikit-Learn library and Jupyter Notebook service in the Google Colab cloud-based environment, with Google CPU hardware acceleration.
RESULTS: The model achieved a classification accuracy of 81.8 % and an area under the ROC curve of 0.9. For human-generated content, precision, recall, and F1-score were 0.85, 0.78, and 0.81, respectively. For AI-generated content, these metrics were 0.79, 0.86, and 0.82. The MCC value of 0.64 indicated moderate to strong predictive power. The model demonstrated robust sensitivity (recall for AI-generated class) of 0.86 and specificity (recall for human-generated class) of 0.78.
CONCLUSIONS: The model exhibited acceptable performance, as measured by classification accuracy, area under the receiver operating characteristic curve, and other metrics. This approach can be further improved by incorporating additional supervised machine learning techniques and serves as a foundation for the future development of a sophisticated and innovative AI system. Such a system could play a crucial role in combating misinformation and enhancing public trust across various government platforms, media outlets, and social networks.},
}
MeSH Terms:
Humans
*Artificial Intelligence
*Public Health
*Machine Learning
*Supervised Machine Learning
Random Forest
RevDate: 2025-04-05
Diversity, functionality, and stability: shaping ecosystem multifunctionality in the successional sequences of alpine meadows and alpine steppes on the Qinghai-Tibet Plateau.
Frontiers in plant science, 16:1436439.
Recent investigations on the Tibetan Plateau have harnessed advancements in digital ground vegetation surveys, high temporal resolution remote sensing data, and sophisticated cloud computing technologies to delineate successional dynamics between alpine meadows and alpine steppes. However, these efforts have not thoroughly explored how different successional stages affect key ecological parameters, such as species and functional diversity, stability, and ecosystem multifunctionality, which are fundamental to ecosystem resilience and adaptability. Given this gap, we systematically investigate variations in vegetation diversity, functional diversity, and the often-overlooked dimension of community stability across the successional gradient from alpine meadows to alpine steppes. We further identify the primary environmental drivers of these changes and evaluate their collective impact on ecosystem multifunctionality. Our analysis reveals that, as vegetation communities progress from alpine meadows toward alpine steppes, multi-year average precipitation and temperature decline significantly, accompanied by reductions in soil nutrients. These environmental shifts led to decreased species diversity, driven by lower precipitation and reduced soil nitrate-nitrogen levels, as well as community differentiation influenced by declining soil pH and precipitation. Consequently, as species loss and community differentiation intensified, these changes diminished functional diversity and eroded community resilience and resistance, ultimately reducing grassland ecosystem multifunctionality. Using linear mixed-effects model and structural equation modeling, we found that functional diversity is the foremost determinant of ecosystem multifunctionality, followed by species diversity. Surprisingly, community stability also significantly influences ecosystem multifunctionality-a factor rarely highlighted in previous studies. These findings deepen our understanding of the interplay among diversity, functionality, stability, and ecosystem multifunctionality, and support the development of an integrated feedback model linking environmental drivers with ecological attributes in alpine grassland ecosystems.
Additional Links: PMID-40182548
@article {pmid40182548,
year = {2025},
author = {Jin, X and Deng, A and Fan, Y and Ma, K and Zhao, Y and Wang, Y and Zheng, K and Zhou, X and Lu, G},
title = {Diversity, functionality, and stability: shaping ecosystem multifunctionality in the successional sequences of alpine meadows and alpine steppes on the Qinghai-Tibet Plateau.},
journal = {Frontiers in plant science},
volume = {16},
number = {},
pages = {1436439},
pmid = {40182548},
issn = {1664-462X},
abstract = {Recent investigations on the Tibetan Plateau have harnessed advancements in digital ground vegetation surveys, high temporal resolution remote sensing data, and sophisticated cloud computing technologies to delineate successional dynamics between alpine meadows and alpine steppes. However, these efforts have not thoroughly explored how different successional stages affect key ecological parameters, such as species and functional diversity, stability, and ecosystem multifunctionality, which are fundamental to ecosystem resilience and adaptability. Given this gap, we systematically investigate variations in vegetation diversity, functional diversity, and the often-overlooked dimension of community stability across the successional gradient from alpine meadows to alpine steppes. We further identify the primary environmental drivers of these changes and evaluate their collective impact on ecosystem multifunctionality. Our analysis reveals that, as vegetation communities progress from alpine meadows toward alpine steppes, multi-year average precipitation and temperature decline significantly, accompanied by reductions in soil nutrients. These environmental shifts led to decreased species diversity, driven by lower precipitation and reduced soil nitrate-nitrogen levels, as well as community differentiation influenced by declining soil pH and precipitation. Consequently, as species loss and community differentiation intensified, these changes diminished functional diversity and eroded community resilience and resistance, ultimately reducing grassland ecosystem multifunctionality. Using linear mixed-effects model and structural equation modeling, we found that functional diversity is the foremost determinant of ecosystem multifunctionality, followed by species diversity. Surprisingly, community stability also significantly influences ecosystem multifunctionality-a factor rarely highlighted in previous studies. These findings deepen our understanding of the interplay among diversity, functionality, stability, and ecosystem multifunctionality, and support the development of an integrated feedback model linking environmental drivers with ecological attributes in alpine grassland ecosystems.},
}
RevDate: 2025-04-05
Sustainability in construction economics as a barrier to cloud computing adoption in small-scale Building projects.
Scientific reports, 15(1):11329.
The application of intelligent technology to enhance decision-making, optimize processes, and boost project economics and sustainability has the potential to significantly revolutionize the construction industry. However, there are several barriers to its use in small-scale construction projects in China. This study aims to identify these challenges and provide solutions. Using a mixed-methods approach that incorporates quantitative analysis, structural equation modeling, and a comprehensive literature review, the study highlights key problems. These include specialized challenges, difficulty with data integration, financial and cultural constraints, privacy and ethical issues, limited data accessibility, and problems with scalability and connection. The findings demonstrate how important it is to get rid of these barriers to fully utilize intelligent computing in the construction sector. There are recommendations and practical strategies provided to help industry participants get over these challenges. Although the study's geographical emphasis and cross-sectional approach are limitations, they also offer opportunities for further investigation. This study contributes significantly to the growing body of knowledge on intelligent computing in small-scale construction projects and offers practical guidance on how businesses might leverage their transformative potential.
Additional Links: PMID-40175456
@article {pmid40175456,
year = {2025},
author = {Zonghui, W and Veniaminovna, KO and Vladimirovna, VO and Ivan, K and Isleem, HF},
title = {Sustainability in construction economics as a barrier to cloud computing adoption in small-scale Building projects.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {11329},
pmid = {40175456},
issn = {2045-2322},
abstract = {The application of intelligent technology to enhance decision-making, optimize processes, and boost project economics and sustainability has the potential to significantly revolutionize the construction industry. However, there are several barriers to its use in small-scale construction projects in China. This study aims to identify these challenges and provide solutions. Using a mixed-methods approach that incorporates quantitative analysis, structural equation modeling, and a comprehensive literature review, the study highlights key problems. These include specialized challenges, difficulty with data integration, financial and cultural constraints, privacy and ethical issues, limited data accessibility, and problems with scalability and connection. The findings demonstrate how important it is to get rid of these barriers to fully utilize intelligent computing in the construction sector. There are recommendations and practical strategies provided to help industry participants get over these challenges. Although the study's geographical emphasis and cross-sectional approach are limitations, they also offer opportunities for further investigation. This study contributes significantly to the growing body of knowledge on intelligent computing in small-scale construction projects and offers practical guidance on how businesses might leverage their transformative potential.},
}
RevDate: 2025-04-05
Plasticulture detection at the country scale by combining multispectral and SAR satellite data.
Scientific reports, 15(1):11339.
The use of plastic films has been growing in agriculture, benefiting consumers and producers. However, concerns have been raised about the environmental impact of plastic film use, with mulching films posing a greater threat than greenhouse films. This calls for large-scale monitoring of different plastic film uses. We used cloud computing, freely available optical and radar satellite images, and machine learning to map plastic-mulched farmland (PMF) and plastic cover above vegetation (PCV) (e.g., greenhouse, tunnel) across Germany. The algorithm detected 103 × 10³ ha of PMF and 37 × 10³ ha of PCV in 2020, while a combination of agricultural statistics and surveys estimated a smaller plasticulture cover of around 100 × 10³ ha in 2019. Based on ground observations, the overall accuracy of the classification is 85.3%. Optical and radar features had similar importance scores, and a distinct backscatter of PCV was related to metal frames underneath the plastic films. Overall, the algorithm achieved great results in the distinction between PCV and PMF. This study maps different plastic film uses at a country scale for the first time and sheds light on the high potential of freely available satellite data for continental monitoring.
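A minimal Google Earth Engine sketch of the general approach (stacking Sentinel-2 reflectance with Sentinel-1 backscatter and training a Random Forest) is given below. The area of interest, date range, and the 'users/example/plasticulture_training' asset are placeholders, not the authors' data or pipeline.

```python
# Hedged sketch (not the authors' pipeline): stack Sentinel-2 reflectance with
# Sentinel-1 backscatter and train a Random Forest in the GEE Python API.
# The AOI, dates, and the 'training_points' asset are placeholders.
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([10.0, 48.0, 11.0, 49.0])  # placeholder AOI in Germany

optical = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
           .filterBounds(aoi)
           .filterDate('2020-03-01', '2020-10-31')
           .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
           .median()
           .select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12']))

radar = (ee.ImageCollection('COPERNICUS/S1_GRD')
         .filterBounds(aoi)
         .filterDate('2020-03-01', '2020-10-31')
         .filter(ee.Filter.eq('instrumentMode', 'IW'))
         .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
         .select(['VV', 'VH'])
         .median())

features = optical.addBands(radar)
bands = features.bandNames()

# 'training_points' would hold labelled samples (PMF, PCV, other); placeholder asset.
training_points = ee.FeatureCollection('users/example/plasticulture_training')
samples = features.sampleRegions(collection=training_points, properties=['label'], scale=20)

rf = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=samples, classProperty='label', inputProperties=bands)
classified = features.classify(rf)
print(classified.bandNames().getInfo())  # expected: ['classification']
```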
Additional Links: PMID-40175409
@article {pmid40175409,
year = {2025},
author = {Fabrizi, A and Fiener, P and Jagdhuber, T and Van Oost, K and Wilken, F},
title = {Plasticulture detection at the country scale by combining multispectral and SAR satellite data.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {11339},
pmid = {40175409},
issn = {2045-2322},
abstract = {The use of plastic films has been growing in agriculture, benefiting consumers and producers. However, concerns have been raised about the environmental impact of plastic film use, with mulching films posing a greater threat than greenhouse films. This calls for large-scale monitoring of different plastic film uses. We used cloud computing, freely available optical and radar satellite images, and machine learning to map plastic-mulched farmland (PMF) and plastic cover above vegetation (PCV) (e.g., greenhouse, tunnel) across Germany. The algorithm detected 103 × 10³ ha of PMF and 37 × 10³ ha of PCV in 2020, while a combination of agricultural statistics and surveys estimated a smaller plasticulture cover of around 100 × 10³ ha in 2019. Based on ground observations, the overall accuracy of the classification is 85.3%. Optical and radar features had similar importance scores, and a distinct backscatter of PCV was related to metal frames underneath the plastic films. Overall, the algorithm achieved great results in the distinction between PCV and PMF. This study maps different plastic film uses at a country scale for the first time and sheds light on the high potential of freely available satellite data for continental monitoring.},
}
RevDate: 2025-04-03
CmpDate: 2025-04-03
The translational impact of bioinformatics on traditional wet lab techniques.
Advances in pharmacology (San Diego, Calif.), 103:287-311.
Bioinformatics has taken a pivotal place in the life sciences field. Not only does it improve, but it also fine-tunes and complements the wet lab experiments. It has been a driving force in the so-called biological sciences, converting them into hypothesis and data-driven fields. This study highlights the translational impact of bioinformatics on experimental biology and discusses its evolution and the advantages it has brought to advancing biological research. Computational analyses make labor-intensive wet lab work cost-effective by reducing the use of expensive reagents. Genome/proteome-wide studies have become feasible due to the efficiency and speed of bioinformatics tools, which can hardly be compared with wet lab experiments. Computational methods provide the scalability essential for manipulating large and complex data of biological origin. AI-integrated bioinformatics studies can unveil important biological patterns that traditional approaches may otherwise overlook. Bioinformatics contributes to hypothesis formation and experiment design, which is pivotal for modern-day multi-omics and systems biology studies. Integrating bioinformatics in the experimental procedures increases reproducibility and helps reduce human errors. Although today's AI-integrated bioinformatics predictions have significantly improved in accuracy over the years, wet lab validation is still unavoidable for confirming these predictions. Challenges persist in multi-omics data integration and analysis, AI model interpretability, and multiscale modeling. Addressing these shortcomings through the latest developments is essential for advancing our knowledge of disease mechanisms, therapeutic strategies, and precision medicine.
Additional Links: PMID-40175046
@article {pmid40175046,
year = {2025},
author = {Suveena, S and Rekha, AA and Rani, JR and V Oommen, O and Ramakrishnan, R},
title = {The translational impact of bioinformatics on traditional wet lab techniques.},
journal = {Advances in pharmacology (San Diego, Calif.)},
volume = {103},
number = {},
pages = {287-311},
doi = {10.1016/bs.apha.2025.01.012},
pmid = {40175046},
issn = {1557-8925},
mesh = {*Computational Biology/methods ; Humans ; Animals ; *Translational Research, Biomedical/methods ; },
abstract = {Bioinformatics has taken a pivotal place in the life sciences field. Not only does it improve, but it also fine-tunes and complements the wet lab experiments. It has been a driving force in the so-called biological sciences, converting them into hypothesis and data-driven fields. This study highlights the translational impact of bioinformatics on experimental biology and discusses its evolution and the advantages it has brought to advancing biological research. Computational analyses make labor-intensive wet lab work cost-effective by reducing the use of expensive reagents. Genome/proteome-wide studies have become feasible due to the efficiency and speed of bioinformatics tools, which can hardly be compared with wet lab experiments. Computational methods provide the scalability essential for manipulating large and complex data of biological origin. AI-integrated bioinformatics studies can unveil important biological patterns that traditional approaches may otherwise overlook. Bioinformatics contributes to hypothesis formation and experiment design, which is pivotal for modern-day multi-omics and systems biology studies. Integrating bioinformatics in the experimental procedures increases reproducibility and helps reduce human errors. Although today's AI-integrated bioinformatics predictions have significantly improved in accuracy over the years, wet lab validation is still unavoidable for confirming these predictions. Challenges persist in multi-omics data integration and analysis, AI model interpretability, and multiscale modeling. Addressing these shortcomings through the latest developments is essential for advancing our knowledge of disease mechanisms, therapeutic strategies, and precision medicine.},
}
MeSH Terms:
*Computational Biology/methods
Humans
Animals
*Translational Research, Biomedical/methods
RevDate: 2025-04-02
CmpDate: 2025-04-03
Innovative computational approaches in drug discovery and design.
Advances in pharmacology (San Diego, Calif.), 103:1-22.
In the current scenario of pandemics, drug discovery and design have undergone a significant transformation due to the integration of advanced computational methodologies. These methodologies utilize sophisticated algorithms, machine learning, artificial intelligence, and high-performance computing to expedite the drug development process, enhances accuracy, and reduces costs. Machine learning and AI have revolutionized predictive modeling, virtual screening, and de novo drug design, allowing for the identification and optimization of novel compounds with desirable properties. Molecular dynamics simulations provide a detailed insight into protein-ligand interactions and conformational changes, facilitating an understanding of drug efficacy at the atomic level. Quantum mechanics/molecular mechanics methods offer precise predictions of binding energies and reaction mechanisms, while structure-based drug design employs docking studies and fragment-based design to improve drug-receptor binding affinities. Network pharmacology and systems biology approaches analyze polypharmacology and biological networks to identify novel drug targets and understand complex interactions. Cheminformatics explores vast chemical spaces and employs data mining to find patterns in large datasets. Computational toxicology predicts adverse effects early in development, reducing reliance on animal testing. Bioinformatics integrates genomic, proteomic, and metabolomics data to discover biomarkers and understand genetic variations affecting drug response. Lastly, cloud computing and big data technologies facilitate high-throughput screening and comprehensive data analysis. Collectively, these computational innovations are driving a paradigm shift in drug discovery and design, making it more efficient, accurate, and cost-effective.
Additional Links: PMID-40175036
@article {pmid40175036,
year = {2025},
author = {Das, IJ and Bhatta, K and Sarangi, I and Samal, HB},
title = {Innovative computational approaches in drug discovery and design.},
journal = {Advances in pharmacology (San Diego, Calif.)},
volume = {103},
number = {},
pages = {1-22},
doi = {10.1016/bs.apha.2025.01.006},
pmid = {40175036},
issn = {1557-8925},
mesh = {*Drug Discovery/methods ; *Drug Design ; Humans ; Animals ; Machine Learning ; *Computational Biology/methods ; },
abstract = {In the current scenario of pandemics, drug discovery and design have undergone a significant transformation due to the integration of advanced computational methodologies. These methodologies utilize sophisticated algorithms, machine learning, artificial intelligence, and high-performance computing to expedite the drug development process, enhances accuracy, and reduces costs. Machine learning and AI have revolutionized predictive modeling, virtual screening, and de novo drug design, allowing for the identification and optimization of novel compounds with desirable properties. Molecular dynamics simulations provide a detailed insight into protein-ligand interactions and conformational changes, facilitating an understanding of drug efficacy at the atomic level. Quantum mechanics/molecular mechanics methods offer precise predictions of binding energies and reaction mechanisms, while structure-based drug design employs docking studies and fragment-based design to improve drug-receptor binding affinities. Network pharmacology and systems biology approaches analyze polypharmacology and biological networks to identify novel drug targets and understand complex interactions. Cheminformatics explores vast chemical spaces and employs data mining to find patterns in large datasets. Computational toxicology predicts adverse effects early in development, reducing reliance on animal testing. Bioinformatics integrates genomic, proteomic, and metabolomics data to discover biomarkers and understand genetic variations affecting drug response. Lastly, cloud computing and big data technologies facilitate high-throughput screening and comprehensive data analysis. Collectively, these computational innovations are driving a paradigm shift in drug discovery and design, making it more efficient, accurate, and cost-effective.},
}
MeSH Terms:
*Drug Discovery/methods
*Drug Design
Humans
Animals
Machine Learning
*Computational Biology/methods
RevDate: 2025-04-04
A secure end-to-end communication framework for cooperative IoT networks using hybrid blockchain system.
Scientific reports, 15(1):11077.
The Internet of Things (IoT) is a disruptive technology that underpins Industry 5.0 by integrating various service technologies to enable intelligent connectivity among smart objects. These technologies enhance the convergence of Information Technology (IT), Operational Technology (OT), Core Technology (CT), and Data Technology (DT) networks, improving automation and decision-making capabilities. While cloud computing has become a mainstream technology across multiple domains, it struggles to efficiently manage the massive volume of OT data generated by IoT devices due to high latency, data transfer costs, limited resilience, and insufficient context awareness. Fog computing has emerged as a viable solution, extending cloud capabilities to the edge through a distributed peer-to-peer (P2P) network and enabling decentralized data processing and management. However, IoT networks still face critical challenges, including connectivity, heterogeneity, scalability, interoperability, security, and real-time decision-making constraints. Security is a key challenge in IoT implementations, spanning secure data communication, IoT edge and fog device identity, end-to-end authentication, and secure storage. This paper presents an efficient blockchain-based framework that creates a secure, cooperative end-to-end communication flow across the IoT network. The framework utilizes a hybrid blockchain network whose components collaborate to provide end-to-end secure communication from end devices to cloud storage. The fog servers maintain a private blockchain as a next-generation public key infrastructure to identify and authenticate the IoT edge devices. The consortium blockchain is maintained in the cloud and integrated with the permissioned blockchain system. This system ensures secure cloud storage, authorization, efficient key exchange, and remote protection (encryption) of all sensitive information. To improve synchronization and block generation, reduce overhead, and ensure scalable IoT network operation, a threshold signature-based Proof of Stake and Validation (PoSV) consensus is proposed. Additionally, lightweight authentication protects resource-constrained IoT nodes using an aggregate signature, ensuring security and performance in real-time scenarios. The proposed system is implemented, and its performance is evaluated using key metrics such as cryptographic processing overhead, consensus efficiency, block acceptance time, and transaction delay. The findings show that the threshold signature-based PoSV consensus reduces the computational burden of individual signature verification, resulting in an optimized transaction latency of 80-150 ms compared to 100-200 ms without PoSV. Additionally, aggregating multiple signatures from different authentication events reduces signing time by 1.98 ms relative to the individual signature time of 2.72 ms, the overhead of verifying multiple individual transactions is significantly reduced from 2.87 ms to 1.46 ms, and authentication delays range between 95 and 180 ms. Hence, the proposed framework improves over existing approaches in terms of linear computational complexity, stronger cryptographic methods, and a more efficient consensus process.
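The abstract does not spell out the PoSV mechanics, so the sketch below only illustrates the first step such a round plausibly involves: selecting a small validator committee with probability proportional to stake, after which a single threshold/aggregate signature check would replace many individual verifications. Stake values, node names, and committee size are invented; this is not the paper's protocol.

```python
# Illustrative sketch only: stake-weighted selection of a validator committee,
# the kind of step a Proof-of-Stake-and-Validation (PoSV) round would start with.
# Stake values and committee size are invented; this is not the paper's protocol.
import random

random.seed(7)

stakes = {"fog-node-A": 40, "fog-node-B": 25, "fog-node-C": 20, "fog-node-D": 15}

def select_committee(stakes, k=2):
    """Pick k distinct validators with probability proportional to stake."""
    pool = dict(stakes)
    committee = []
    for _ in range(k):
        nodes, weights = zip(*pool.items())
        chosen = random.choices(nodes, weights=weights)[0]
        committee.append(chosen)
        pool.pop(chosen)          # sample without replacement
    return committee

committee = select_committee(stakes, k=2)
print("validators for this block:", committee)
# A (t, n) threshold signature from this committee would then replace n
# individual verifications with a single aggregate check, which is the source
# of the latency reduction reported above.
```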
Additional Links: PMID-40169696
@article {pmid40169696,
year = {2025},
author = {Erukala, SB and Tokmakov, D and Perumalla, A and Kaluri, R and Bekyarova-Tokmakova, A and Mileva, N and Lubomirov, S},
title = {A secure end-to-end communication framework for cooperative IoT networks using hybrid blockchain system.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {11077},
pmid = {40169696},
issn = {2045-2322},
support = {project No.BG-RRP-2.004-0001-C01//The European Union-NextGeneration EU , Republic of Bulgaria,/ ; },
abstract = {The Internet of Things (IoT) is a disruptive technology that underpins Industry 5.0 by integrating various service technologies to enable intelligent connectivity among smart objects. These technologies enhance the convergence of Information Technology (IT), Operational Technology (OT), Core Technology (CT), and Data Technology (DT) networks, improving automation and decision-making capabilities. While cloud computing has become a mainstream technology across multiple domains, it struggles to efficiently manage the massive volume of OT data generated by IoT devices due to high latency, data transfer costs, limited resilience, and insufficient context awareness. Fog computing has emerged as a viable solution, extending cloud capabilities to the edge through a distributed peer-to-peer (P2P) network and enabling decentralized data processing and management. However, IoT networks still face critical challenges, including connectivity, heterogeneity, scalability, interoperability, security, and real-time decision-making constraints. Security is a key challenge in IoT implementations, spanning secure data communication, IoT edge and fog device identity, end-to-end authentication, and secure storage. This paper presents an efficient blockchain-based framework that creates a secure, cooperative end-to-end communication flow across the IoT network. The framework utilizes a hybrid blockchain network whose components collaborate to provide end-to-end secure communication from end devices to cloud storage. The fog servers maintain a private blockchain as a next-generation public key infrastructure to identify and authenticate the IoT edge devices. The consortium blockchain is maintained in the cloud and integrated with the permissioned blockchain system. This system ensures secure cloud storage, authorization, efficient key exchange, and remote protection (encryption) of all sensitive information. To improve synchronization and block generation, reduce overhead, and ensure scalable IoT network operation, a threshold signature-based Proof of Stake and Validation (PoSV) consensus is proposed. Additionally, lightweight authentication protects resource-constrained IoT nodes using an aggregate signature, ensuring security and performance in real-time scenarios. The proposed system is implemented, and its performance is evaluated using key metrics such as cryptographic processing overhead, consensus efficiency, block acceptance time, and transaction delay. The findings show that the threshold signature-based PoSV consensus reduces the computational burden of individual signature verification, resulting in an optimized transaction latency of 80-150 ms compared to 100-200 ms without PoSV. Additionally, aggregating multiple signatures from different authentication events reduces signing time by 1.98 ms relative to the individual signature time of 2.72 ms, the overhead of verifying multiple individual transactions is significantly reduced from 2.87 ms to 1.46 ms, and authentication delays range between 95 and 180 ms. Hence, the proposed framework improves over existing approaches in terms of linear computational complexity, stronger cryptographic methods, and a more efficient consensus process.},
}
RevDate: 2025-04-04
A 30-meter resolution global land productivity dynamics dataset from 2013 to 2022.
Scientific data, 12(1):555.
Land degradation is one of the most severe environmental challenges globally. To address its adverse impacts, the United Nations endorsed Land Degradation Neutrality (SDG 15.3) within the Sustainable Development Goals in 2015. Trends in land productivity are a key sub-indicator for reporting progress toward SDG 15.3. Currently, the highest spatial resolution of global land productivity dynamics (LPD) products is 250 meters, which seriously hampers SDG 15.3 reporting and intervention at fine scales. Generating a higher-resolution product faces significant challenges, including massive data-processing requirements, cloud contamination of imagery, and incompatible spatiotemporal resolutions. This study, leveraging the Google Earth Engine platform and Landsat-8 and MODIS imagery, employed a gap-filling and Savitzky-Golay filtering algorithm together with an advanced spatiotemporal filtering method to obtain a high-quality 30-meter NDVI dataset; the global 30-meter LPD product from 2013 to 2022 was then generated using the FAO-WOCAT methodology and compared against multiple datasets. This is the first global-scale 30-meter LPD dataset, and it provides essential data support for SDG 15.3 monitoring and reporting globally.
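A minimal sketch of the gap-filling and Savitzky-Golay smoothing step, applied to a toy NDVI time series with SciPy, is shown below; the NDVI values, window length, and polynomial order are placeholders rather than the study's settings.

```python
# Minimal sketch of the gap-filling + Savitzky-Golay step described above,
# applied to a toy NDVI time series (values and window length are placeholders).
import numpy as np
from scipy.signal import savgol_filter

# Toy 16-day NDVI series with cloud-contaminated gaps encoded as NaN.
ndvi = np.array([0.31, 0.35, np.nan, 0.52, 0.61, np.nan, np.nan, 0.72,
                 0.70, 0.64, 0.55, np.nan, 0.40, 0.34, 0.30, 0.28])

# 1) Gap-fill by linear interpolation over the time index.
t = np.arange(ndvi.size)
valid = ~np.isnan(ndvi)
filled = np.interp(t, t[valid], ndvi[valid])

# 2) Smooth with a Savitzky-Golay filter (window and polynomial order are choices).
smoothed = savgol_filter(filled, window_length=7, polyorder=2)
print(np.round(smoothed, 3))
```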
Additional Links: PMID-40169667
@article {pmid40169667,
year = {2025},
author = {Li, X and Shen, T and Garcia, CL and Teich, I and Chen, Y and Chen, J and Kabo-Bah, AT and Yang, Z and Jia, X and Lu, Q and Nyamtseren, M},
title = {A 30-meter resolution global land productivity dynamics dataset from 2013 to 2022.},
journal = {Scientific data},
volume = {12},
number = {1},
pages = {555},
pmid = {40169667},
issn = {2052-4463},
abstract = {Land degradation is one of the most severe environmental challenges globally. To address its adverse impacts, the United Nations endorsed Land Degradation Neutrality (SDG 15.3) within the Sustainable Development Goals in 2015. Trends in land productivity are a key sub-indicator for reporting progress toward SDG 15.3. Currently, the highest spatial resolution of global land productivity dynamics (LPD) products is 250 meters, which seriously hampers SDG 15.3 reporting and intervention at fine scales. Generating a higher-resolution product faces significant challenges, including massive data-processing requirements, cloud contamination of imagery, and incompatible spatiotemporal resolutions. This study, leveraging the Google Earth Engine platform and Landsat-8 and MODIS imagery, employed a gap-filling and Savitzky-Golay filtering algorithm together with an advanced spatiotemporal filtering method to obtain a high-quality 30-meter NDVI dataset; the global 30-meter LPD product from 2013 to 2022 was then generated using the FAO-WOCAT methodology and compared against multiple datasets. This is the first global-scale 30-meter LPD dataset, and it provides essential data support for SDG 15.3 monitoring and reporting globally.},
}
RevDate: 2025-04-04
Deep Learning for Ocean Forecasting: A Comprehensive Review of Methods, Applications, and Datasets.
IEEE transactions on cybernetics, PP: [Epub ahead of print].
As a longstanding scientific challenge, accurate and timely ocean forecasting has always been a sought-after goal for ocean scientists. However, traditional theory-driven numerical ocean prediction (NOP) suffers from various challenges, such as the indistinct representation of physical processes, inadequate application of observation assimilation, and inaccurate parameterization of models, which lead to difficulties in obtaining effective knowledge from massive observations, and enormous computational challenges. With the successful evolution of data-driven deep learning in various domains, it has been demonstrated to mine patterns and deep insights from the ever-increasing stream of oceanographic spatiotemporal data, which provides novel possibilities for revolution in ocean forecasting. Deep-learning-based ocean forecasting (DLOF) is anticipated to be a powerful complement to NOP. Nowadays, researchers attempt to introduce deep learning into ocean forecasting and have achieved significant progress that provides novel motivations for ocean science. This article provides a comprehensive review of the state-of-the-art DLOF research regarding model architectures, spatiotemporal multiscales, and interpretability while specifically demonstrating the feasibility of developing hybrid architectures that incorporate theory-driven and data-driven models. Moreover, we comprehensively evaluate DLOF from datasets, benchmarks, and cloud computing. Finally, the limitations of current research and future trends of DLOF are also discussed and prospected.
Additional Links: PMID-40168238
@article {pmid40168238,
year = {2025},
author = {Hao, R and Zhao, Y and Zhang, S and Deng, X},
title = {Deep Learning for Ocean Forecasting: A Comprehensive Review of Methods, Applications, and Datasets.},
journal = {IEEE transactions on cybernetics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TCYB.2025.3539990},
pmid = {40168238},
issn = {2168-2275},
abstract = {As a longstanding scientific challenge, accurate and timely ocean forecasting has always been a sought-after goal for ocean scientists. However, traditional theory-driven numerical ocean prediction (NOP) suffers from various challenges, such as the indistinct representation of physical processes, inadequate application of observation assimilation, and inaccurate parameterization of models, which lead to difficulties in obtaining effective knowledge from massive observations, and enormous computational challenges. With the successful evolution of data-driven deep learning in various domains, it has been demonstrated to mine patterns and deep insights from the ever-increasing stream of oceanographic spatiotemporal data, which provides novel possibilities for revolution in ocean forecasting. Deep-learning-based ocean forecasting (DLOF) is anticipated to be a powerful complement to NOP. Nowadays, researchers attempt to introduce deep learning into ocean forecasting and have achieved significant progress that provides novel motivations for ocean science. This article provides a comprehensive review of the state-of-the-art DLOF research regarding model architectures, spatiotemporal multiscales, and interpretability while specifically demonstrating the feasibility of developing hybrid architectures that incorporate theory-driven and data-driven models. Moreover, we comprehensively evaluate DLOF from datasets, benchmarks, and cloud computing. Finally, the limitations of current research and future trends of DLOF are also discussed and prospected.},
}
RevDate: 2025-04-01
Optical identification of marine floating debris from Sentinel-2 MSI imagery using radiation signal difference.
Optics letters, 50(7):2330-2333.
A spaceborne optical technique is developed to detect, discriminate, and quantify marine floating debris, especially debris with weak optical signals. The technique uses only the top-of-atmosphere (TOA) signal and is based on difference radiative transfer (DRT), which separates diverse optical signals by referencing those in the surrounding neighborhood. Applying DRT to either simulated signals or Sentinel-2 Multispectral Instrument (MSI) data, target types can be cross-confirmed between the two and located on a normalized type line, which generally indicates values of <0.2 for water, 0.2-0.6 for debris, and >0.8 for algae. The classification limit for MSI is a sub-pixel fraction of 3%; above this limit, the boundary between debris and algae is distinct, with the two classes separated by more than three standard deviations. This automated methodology makes TOA imagery on cloud platforms such as Google Earth Engine (GEE) directly usable and supports monitoring after coastal events such as debris dumping and algal blooms.
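The abstract quotes concrete thresholds on the normalized type line (<0.2 water, 0.2-0.6 debris, >0.8 algae). The sketch below applies those thresholds in NumPy; because the abstract does not give the normalization formula, the min-max scaling against water and algae reference signals is an assumption made here for illustration, and values in the unassigned 0.6-0.8 band are labeled "mixed".

```python
# Minimal sketch of the "normalized type line" thresholds quoted in the abstract:
# normalized values < 0.2 -> water, 0.2-0.6 -> floating debris, > 0.8 -> algae.
# The min-max normalization against water/algae reference signals is an assumption.
import numpy as np

def classify_type_line(toa_diff, water_ref, algae_ref):
    """Map per-pixel TOA difference signals onto [0, 1] and bin them."""
    norm = np.clip((toa_diff - water_ref) / (algae_ref - water_ref), 0.0, 1.0)
    labels = np.full(norm.shape, "mixed", dtype=object)
    labels[norm < 0.2] = "water"
    labels[(norm >= 0.2) & (norm <= 0.6)] = "debris"
    labels[norm > 0.8] = "algae"
    return norm, labels

# Example: three pixels against per-scene reference levels (arbitrary units).
norm, labels = classify_type_line(np.array([0.01, 0.12, 0.33]), water_ref=0.0, algae_ref=0.35)
print(norm, labels)   # low value -> water, middle value -> debris, high value -> algae
```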
Additional Links: PMID-40167712
@article {pmid40167712,
year = {2025},
author = {Zhu, X and Lu, Y and Chen, Y and Wang, F and Dou, C and Ju, W},
title = {Optical identification of marine floating debris from Sentinel-2 MSI imagery using radiation signal difference.},
journal = {Optics letters},
volume = {50},
number = {7},
pages = {2330-2333},
doi = {10.1364/OL.554994},
pmid = {40167712},
issn = {1539-4794},
abstract = {A spaceborne optical technique for marine floating debris is developed to detect, discriminate, and quantify such debris, especially that with weak optical signals. The technique uses only the top-of-atmosphere (TOA) signal based on the difference radiative transfer (DRT). DRT unveils diverse optical signals by referencing those within the neighborhood. Using DRT of either simulated signals or Sentinel-2 Multispectral Instrument (MSI) data, target types can be confirmed between the two and pinpointed on a normalized type line. The line, mostly, indicates normalized values of <0.2 for waters, 0.2-0.6 for debris, and >0.8 for algae. The classification limit for MSI is a sub-pixel fraction of 3%; above which, the boundary between debris and algae is distinct, being separated by >three standard deviations. This automated methodology unleashed TOA imagery on data cloud platforms such as Google Earth Engine (GEE) and promoted monitoring after coastal disasters, such as debris dumping and algae blooms.},
}
RevDate: 2025-04-03
Partial discharge defect recognition method of switchgear based on cloud-edge collaborative deep learning.
Scientific reports, 15(1):10956.
Traditional partial discharge (PD) detection methods for switchgear fail to meet practical requirements for real-time monitoring, rapid assessment, sample fusion, and joint analysis. To address this, a joint PD recognition method for switchgear based on edge computing and deep learning is proposed. A cloud-edge collaborative defect-identification architecture is constructed, comprising the terminal device side, the terminal collection side, the edge-computing side, and the cloud-computing side. On the terminal collection side, switchgear PD signals are acquired with a UHF sensor and a broadband pulse-current sensor. On the edge-computing side, multidimensional features are extracted from these signals and reduced in dimensionality to form a high-dimensional feature space. On the cloud side, a deep belief network (DBN)-based PD defect identification method is proposed; PD samples collected at the edge are transmitted to the cloud in real time for training, and the trained model is returned to the edge for inference, enabling real-time joint analysis of PD defects across multiple switchgear units. The method is verified with PD samples simulated in the laboratory. The results show that the proposed DBN recognizes switchgear PDs with an accuracy of 88.03% and that, under the edge-computing architecture, the training time of the PD defect classifier is reduced by 44.28%, overcoming the long training times, low identification efficiency, and weak collaborative analysis capabilities of traditional diagnostic models.
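The workflow splits cleanly into an edge stage (feature extraction and dimensionality reduction) and a cloud stage (classifier training), with the trained model returned to the edge for inference. The sketch below mirrors that split with scikit-learn on synthetic waveforms; the MLP classifier stands in for the paper's deep belief network, and the features and data are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch of the cloud-edge split: edge extracts and reduces features from raw
# PD waveforms, cloud trains a classifier on the pooled samples, and the fitted model
# is shipped back to the edge for inference. MLP stands in for the paper's DBN; the
# synthetic "waveforms" are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def extract_features(waveforms):
    """Edge side: simple time- and frequency-domain features from raw PD waveforms."""
    spectrum = np.abs(np.fft.rfft(waveforms, axis=1))[:, 1:9]  # low-frequency magnitudes
    return np.column_stack([waveforms.max(axis=1), waveforms.min(axis=1),
                            waveforms.mean(axis=1), waveforms.std(axis=1), spectrum])

# "Terminal collection side": synthetic waveforms for three defect types.
waveforms = rng.normal(size=(300, 256)) + np.repeat(np.arange(3), 100)[:, None] * 0.5
labels = np.repeat(np.arange(3), 100)

# Edge side: feature extraction + dimensionality reduction before upload.
pca = PCA(n_components=5).fit(extract_features(waveforms))
edge_samples = pca.transform(extract_features(waveforms))

# Cloud side: train the classifier on the pooled edge samples.
cloud_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
cloud_model.fit(edge_samples, labels)

# Model returned to the edge: real-time inference on a newly captured waveform.
new_wave = rng.normal(size=(1, 256)) + 1.0
print(cloud_model.predict(pca.transform(extract_features(new_wave))))
```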
Additional Links: PMID-40164608
@article {pmid40164608,
year = {2025},
author = {Jia, Z and Fan, S and Wang, Z and Shao, S and He, D},
title = {Partial discharge defect recognition method of switchgear based on cloud-edge collaborative deep learning.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {10956},
pmid = {40164608},
issn = {2045-2322},
support = {52199719000X//the Research Project of State Grid Sichuan Electric Power Company/ ; },
abstract = {To address the limitations of traditional partial discharge (PD) detection methods for switchgear, which fail to meet the requirements for real-time monitoring, rapid assessment, sample fusion, and joint analysis in practical applications, a joint PD recognition method of switchgear based on edge computing and deep learning is proposed. An edge collaborative defect identification architecture for switchgear is constructed, which includes the terminal device side, terminal collection side, edge-computing side, and cloud-computing side. The PD signal of switchgear is extracted based on UHF sensor and broadband pulse current sensor on the terminal collection side. Multidimensional features are obtained from these signals and a high-dimensional feature space is constructed based on feature extraction and dimensionality reduction on the edge-computing side. On the cloud side, the deep belief network (DBN)-based switchgear PD defect identification method is proposed and the PD samples acquired on the edge side are transmitted in real time to the cloud for training. Upon completion of the training, the resulting model is transmitted back to the edge side for inference, thereby facilitating real-time joint analysis of PD defects across multiple switchgear units. Verification of the proposed method is conducted using PD samples simulated in the laboratory. The results indicate that the DBN proposed in this paper can recognize PDs in switchgear with an accuracy of 88.03%, and under the edge computing architecture, the training time of the switchgear PD defect type classifier can be reduced by 44.28%, overcoming the challenges associated with traditional diagnostic models, which are characterized by long training durations, low identification efficiency, and weak collaborative analysis capabilities.},
}
RevDate: 2025-04-02
Regulating neural data processing in the age of BCIs: Ethical concerns and legal approaches.
Digital health, 11:20552076251326123.
Brain-computer interfaces (BCIs) have grown rapidly with the help of AI, algorithms, and cloud computing. While they offer great benefits for both medical and educational purposes, BCIs process neural data that are uniquely sensitive because of their intimate nature, posing distinct risks and ethical concerns, particularly around privacy and safe control of neural data. To protect human rights such as mental privacy, data laws can provide more detailed and enforceable rules for processing neural data, balancing privacy protection against the public interest in wellness promotion and scientific progress through data sharing. This article notes that most current data laws, including the GDPR, do not clearly cover neural data and therefore cannot fully address its special character. Recent legislative reforms in the U.S. states of Colorado and California have made pioneering advances by incorporating neural data into data privacy laws, yet regulatory gaps remain because these reforms do not provide additional rules specific to neural data processing. Problems such as static consent, vague research exceptions, and loopholes in the regulation of non-personal neural data still need to be addressed. We recommend improvements through amendments to existing data laws or the enactment of dedicated data legislation.
Additional Links: PMID-40162168
@article {pmid40162168,
year = {2025},
author = {Yang, H and Jiang, L},
title = {Regulating neural data processing in the age of BCIs: Ethical concerns and legal approaches.},
journal = {Digital health},
volume = {11},
number = {},
pages = {20552076251326123},
pmid = {40162168},
issn = {2055-2076},
abstract = {Brain-computer interfaces (BCIs) have seen increasingly fast growth under the help from AI, algorithms, and cloud computing. While providing great benefits for both medical and educational purposes, BCIs involve processing of neural data which are uniquely sensitive due to their most intimate nature, posing unique risks and ethical concerns especially related to privacy and safe control of our neural data. In furtherance of human right protection such as mental privacy, data laws provide more detailed and enforceable rules for processing neural data which may balance the tension between privacy protection and need of the public for wellness promotion and scientific progress through data sharing. This article notes that most of the current data laws like GDPR have not covered neural data clearly, incapable of providing full protection in response to its specialty. The new legislative reforms in the U.S. states of Colorado and California made pioneering advances to incorporate neural data into data privacy laws. Yet regulatory gaps remain as such reforms have not provided special additional rules for neural data processing. Potential problems such as static consent, vague research exceptions, and loopholes in regulating non-personal neural data need to be further addressed. We recommend relevant improved measures taken through amending data laws or making special data acts.},
}
RevDate: 2025-03-31
Authenticable quantum secret sharing based on special entangled state.
Scientific reports, 15(1):10819.
In this paper, a pair of quantum states is constructed from an orthogonal array and further generalized to multi-body quantum systems. A novel physical process is then designed to effectively mask quantum states within multipartite quantum systems. Using this masker, a new authenticable quantum secret sharing scheme is proposed that realizes a class of special access structures. In the distribution phase, an unknown quantum state is shared securely among multiple participants by embedding the secret state into a multi-particle entangled state via the masking approach. In the reconstruction phase, the participants in an authorized set perform a series of precisely designed measurements and corresponding unitary operations to restore the original quantum state. To ensure security, the scheme is analyzed against five major types of quantum attacks. Finally, compared with other quantum secret sharing schemes based on entangled states, the proposed scheme is found to be not only more flexible but also easier to implement on existing quantum computing cloud platforms.
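A core requirement of the masking step described above is that no individual share reveals anything about the secret: every single-party reduced density matrix must be independent of the encoded state. The NumPy sketch below implements that check via partial traces and exercises it on a deliberately naive GHZ-style repetition encoding, which the check correctly flags as leaking; it does not reproduce the paper's orthogonal-array construction.

```python
# Sanity check for a candidate masking encoder: do single-qubit marginals change with
# the secret? The GHZ-style repetition encoding below is NOT the paper's masker; it is
# a simple example that the check correctly identifies as leaking information.
import numpy as np

def single_qubit_marginal(psi, k, n):
    """Reduced density matrix of qubit k from an n-qubit pure state vector psi."""
    t = np.moveaxis(psi.reshape((2,) * n), k, 0).reshape(2, -1)
    return t @ t.conj().T

def leaks_secret(encode, n, trials=50, tol=1e-9):
    """True if any single qubit's marginal depends on the encoded secret state."""
    rng = np.random.default_rng(1)
    ref = [single_qubit_marginal(encode(np.array([1.0, 0.0])), k, n) for k in range(n)]
    for _ in range(trials):
        a = rng.normal(size=2) + 1j * rng.normal(size=2)
        a /= np.linalg.norm(a)                      # random secret qubit a|0> + b|1>
        rhos = [single_qubit_marginal(encode(a), k, n) for k in range(n)]
        if any(np.linalg.norm(r - r0) > tol for r, r0 in zip(rhos, ref)):
            return True
    return False

def ghz_repetition(secret):
    """Naive 3-qubit encoding a|000> + b|111> (for illustration only)."""
    psi = np.zeros(8, dtype=complex)
    psi[0], psi[7] = secret[0], secret[1]
    return psi

print(leaks_secret(ghz_repetition, n=3))   # True: each share's populations reveal |a|^2
```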
Additional Links: PMID-40155754
@article {pmid40155754,
year = {2025},
author = {Bai, CM and Shu, YX and Zhang, S},
title = {Authenticable quantum secret sharing based on special entangled state.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {10819},
pmid = {40155754},
issn = {2045-2322},
support = {12301590//National Natural Science Foundation of China/ ; BJ2025061//Science Research Project of Hebei Education Department/ ; },
abstract = {In this paper, a pair of quantum states are constructed based on an orthogonal array and further generalized to multi-body quantum systems. Subsequently, a novel physical process is designed, which is aimed at effectively masking quantum states within multipartite quantum systems. According to this masker, a new authenticable quantum secret sharing scheme is proposed, which can realize a class of special access structures. In the distribution phase, an unknown quantum state is shared safely among multiple participants, and this secret quantum state is embedded into a multi-particle entangled state using the masking approach. In the reconstruction phase, a series of precisely designed measurements and corresponding unitary operations are performed by the participants in the authorized set to restore the original information quantum state. To ensure the security of the scheme, the security analysis of five major types of quantum attacks is conducted. Finally, when compared with other quantum secret sharing schemes based on entangled states, the proposed scheme is found to be not only more flexible but also easier to implement based on existing quantum computing cloud platforms.},
}