QUERY RUN:
27 Jun 2025 at 01:42
HITS:
4078

Bibliography on: Cloud Computing



ESP: PubMed Auto Bibliography. Created: 27 Jun 2025 at 01:42

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during short periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud pricing models. The possibility of unexpected operating expenses is especially problematic for a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2025-06-26

Badshah A, Banjar A, Habibullah S, et al (2025)

Social big data management through collaborative mobile, regional, and cloud computing.

PeerJ. Computer science, 11:e2689.

Smart devices surround us at all times. These devices popularize social media platforms (SMP), connecting billions of users. The enhanced functionalities of smart devices generate big data that overutilizes the mainstream network, degrading performance, increasing overall cost, and compromising time-sensitive services. Research indicates that about 75% of connections come from local areas, and their workload does not need to be migrated to remote servers in real time. Collaboration among mobile edge computing (MEC), regional computing (RC), and cloud computing (CC) can effectively fill these gaps. Therefore, we propose a collaborative structure of mobile, regional, and cloud computing to address the issues arising from social big data (SBD). In this model, data may be accessed from the nearest device or server rather than downloaded from the cloud server. Furthermore, instead of transferring each file to the cloud servers during peak hours, files are initially stored at the regional level and subsequently uploaded to the cloud servers during off-peak hours. The outcomes affirm that this approach significantly reduces the impact of substantial SBD on the performance of mainstream and social network platforms, specifically in terms of delay, response time, and cost.
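The deferred-upload idea described above (buffer files at a regional tier during peak hours, push them to the cloud off-peak) can be illustrated with a minimal sketch. The class name, the peak-hour window, and the in-memory "stores" below are illustrative assumptions, not the authors' implementation.

```python
from datetime import datetime
from typing import Optional

PEAK_HOURS = range(8, 22)  # assumed peak window: 08:00-22:00 local time

class RegionalBuffer:
    """Hold uploads at a regional tier during peak hours; flush to the cloud off-peak."""

    def __init__(self):
        self.pending = []          # files waiting for an off-peak upload
        self.cloud_store = []      # stand-in for the remote cloud object store

    def ingest(self, filename: str, now: Optional[datetime] = None) -> str:
        now = now or datetime.now()
        if now.hour in PEAK_HOURS:
            self.pending.append(filename)      # keep it regional for now
            return f"{filename}: buffered regionally"
        self.cloud_store.append(filename)      # off-peak: go straight to the cloud
        return f"{filename}: uploaded to cloud"

    def flush_off_peak(self) -> int:
        """Called by a scheduler during off-peak hours; uploads everything pending."""
        uploaded = len(self.pending)
        self.cloud_store.extend(self.pending)
        self.pending.clear()
        return uploaded

if __name__ == "__main__":
    buf = RegionalBuffer()
    print(buf.ingest("selfie.jpg", datetime(2025, 6, 26, 14, 0)))   # peak -> buffered
    print(buf.ingest("backup.tar", datetime(2025, 6, 27, 3, 0)))    # off-peak -> cloud
    print("flushed:", buf.flush_off_peak())
```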

RevDate: 2025-06-26

Zeng M, Mohamad Hashim MS, Ayob MN, et al (2025)

Intersection collision prediction and prevention based on vehicle-to-vehicle (V2V) and cloud computing communication.

PeerJ. Computer science, 11:e2846.

In modern transportation systems, the management of traffic safety has become increasingly critical as both the number and complexity of vehicles continue to rise, and these systems frequently face multiple challenges. Consequently, the effective assessment and management of collision risks in various scenarios within transportation systems are paramount to ensuring traffic safety and enhancing road utilization efficiency. In this paper, we tackle the issue of intelligent traffic collision prediction and propose a vehicle collision risk prediction model based on vehicle-to-vehicle (V2V) communication and the graph attention network (GAT). Initially, the framework gathers vehicle trajectory, speed, acceleration, and relative position information via V2V communication technology to construct a graph representation of the traffic environment. Subsequently, the GAT model extracts interaction features between vehicles and optimizes the vehicle driving strategy through deep reinforcement learning (DRL), thereby augmenting the model's decision-making capabilities. Experimental results demonstrate that the framework achieves over 80% collision recognition accuracy in terms of true warning rate on both public and real-world datasets. The false detection metrics are thoroughly analyzed, revealing the efficacy and robustness of the proposed framework. This method introduces a novel technological approach to collision prediction in intelligent transportation systems and holds significant implications for enhancing traffic safety and decision-making efficiency.
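As a toy illustration of turning V2V kinematic data into a graph (not the authors' GAT/DRL pipeline), the sketch below links vehicles whose straight-line time-to-collision falls below a threshold. The vehicle names, the 1-D motion model, and the threshold are assumptions made for illustration.

```python
import networkx as nx

# (position_m, speed_m_s) along a single lane approaching an intersection (toy data)
vehicles = {"car_a": (0.0, 15.0), "car_b": (40.0, 8.0), "car_c": (120.0, 10.0)}
TTC_THRESHOLD_S = 6.0  # assumed risk threshold

def time_to_collision(follower, leader):
    """1-D TTC: gap / closing speed; None if the follower is not closing in."""
    gap = leader[0] - follower[0]
    closing = follower[1] - leader[1]
    return gap / closing if gap > 0 and closing > 0 else None

g = nx.DiGraph()
g.add_nodes_from(vehicles)
for a, state_a in vehicles.items():
    for b, state_b in vehicles.items():
        if a == b:
            continue
        ttc = time_to_collision(state_a, state_b)
        if ttc is not None and ttc < TTC_THRESHOLD_S:
            g.add_edge(a, b, ttc=round(ttc, 2))   # risky interaction edge

print(g.edges(data=True))   # e.g. [('car_a', 'car_b', {'ttc': 5.71})]
```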

RevDate: 2025-06-26

S S, JP P M (2025)

A novel dilated weighted recurrent neural network (RNN)-based smart contract for secure sharing of big data in Ethereum blockchain using hybrid encryption schemes.

PeerJ. Computer science, 11:e2930.

BACKGROUND: With the ever-increasing amount of data being created, managing big data has become a significant challenge for organizations and the people who manage their data. The emergence of inexpensive new computing systems and the cloud computing sector has enabled industries to gather and retrieve data precisely; however, securely delivering data across the network with low overhead remains demanding. In a decentralized framework, big data sharing places a burden on the intermediate nodes between sender and receiver and also creates network congestion. The intermediate nodes that forward information may have insufficient buffer capacity to hold the data temporarily and pass it on to the next nodes, which can cause occasional transmission faults and frequent failures. Hence, selecting the next node to deliver the data is tedious work, which increases the total time required to deliver the information.

METHODS: Blockchain is a foundational distributed technology with its own approach to trust. It constructs a reliable framework for decentralized control via multi-node data replication and brings transparency to the transmission process. A simultaneous multi-threading framework ensures rapid data delivery to multiple network receivers in a very short time. Therefore, an advanced method to securely store and transfer big data in a timely manner is developed in this work. A deep learning-based smart contract is initially designed. The dilated weighted recurrent neural network (DW-RNN) is used to design the smart contract for the Ethereum blockchain. With the aid of the DW-RNN model, the authentication of the user is verified before accessing the data in the Ethereum blockchain. If the user's authentication is verified, then the smart contracts are assigned to the authorized user. The model uses elliptic curve ElGamal cryptography (EC-EC), a combination of elliptic curve cryptography (ECC) and ElGamal encryption, for better security, ensuring that big data transfers on the Ethereum blockchain are safe. The modified Al-Biruni earth radius search optimization (MBERSO) algorithm is used to generate optimal keys for this EC-EC encryption scheme; it manages keys efficiently and securely, which improves data security during blockchain operations.
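The EC-EC scheme above combines elliptic curve cryptography with ElGamal encryption. The following sketch shows the textbook EC-ElGamal construction on a tiny, insecure toy curve; the curve parameters, message encoding, and random key choices are illustrative assumptions, not the authors' MBERSO-optimised keys.

```python
import random

# Toy curve y^2 = x^3 + 2x + 2 over F_17; generator G has order 19 (illustration only).
P, A, B = 17, 2, 2
G = (5, 1)

def inv(x): return pow(x, P - 2, P)

def add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # point at infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P     # tangent slope (doubling)
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P            # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    acc = None
    while k:
        if k & 1: acc = add(acc, pt)
        pt, k = add(pt, pt), k >> 1
    return acc

def neg(pt): return None if pt is None else (pt[0], (-pt[1]) % P)

# Key generation: private d, public Q = dG
d = random.randrange(1, 19)
Q = mul(d, G)

# Encrypt a message already encoded as a curve point M: C1 = kG, C2 = M + kQ
M = mul(7, G)
k = random.randrange(1, 19)
C1, C2 = mul(k, G), add(M, mul(k, Q))

# Decrypt: M = C2 - d*C1
assert add(C2, neg(mul(d, C1))) == M
print("recovered plaintext point:", M)
```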

RESULTS: The processes of encryption facilitate the secure transmission of big data over the Ethereum blockchain. Experimental analysis is carried out to prove the efficacy and security offered by the suggested model in transferring big data over blockchain via smart contracts.

RevDate: 2025-06-26

Salih S, Abdelmaboud A, Husain O, et al (2025)

IoT in urban development: insight into smart city applications, case studies, challenges, and future prospects.

PeerJ. Computer science, 11:e2816.

With the integration of Internet of Things (IoT) technology, smart cities possess the capability to advance their public transportation modalities, address prevalent traffic congestion challenges, refine infrastructure, and optimize communication frameworks, thereby augmenting their progression towards heightened urbanization. Through the integration of sensors, cell phones, artificial intelligence (AI), data analytics, and cloud computing, smart cities worldwide are evolving to be more efficient, productive, and responsive to their residents' needs. While the promise of smart cities has been marked over the past decade, notable challenges, especially in the realm of security, threaten their optimal realization. This research provides a comprehensive survey of IoT in smart cities. It focuses on the components of IoT-based smart cities and explains how different technologies, such as AI, sensing technologies, and networking technologies, can be integrated with IoT for smart cities. Additionally, this study presents several case studies of smart cities, investigates the challenges of adopting IoT in smart cities along with prevention methods for each challenge, and outlines future directions for upcoming researchers. It serves as a foundational guide for stakeholders and emphasizes the pressing need for a balanced integration of innovation and safety in the smart city landscape.

RevDate: 2025-06-26

S N, S D (2025)

Temporal fusion transformer-based strategy for efficient multi-cloud content replication.

PeerJ. Computer science, 11:e2713.

In cloud computing, ensuring the high availability and reliability of data is paramount for efficient content delivery, and content replication across multiple clouds has emerged as a solution to achieve this. However, managing optimal replication while considering dynamic changes in data popularity and cloud resource availability remains a formidable challenge. To address these challenges, this article employs a TFT-based Dynamic Data Replication Strategy (TD2RS), leveraging the Temporal Fusion Transformer (TFT), a deep learning temporal forecasting model. The proposed system collects historical data on content popularity and resource availability from multiple cloud sources, which are then used as input to the TFT, which captures temporal patterns and forecasts future data demands. Intelligent replication is then performed to optimize content replication across multiple cloud environments based on these forecasts. The framework's performance was validated through extensive experiments using synthetic time-series data simulating varied cloud resource characteristics. The findings show that the proposed TFT approach improves data availability by 20% compared to traditional replication techniques and reduces latency by 15%. These outcomes indicate that the TFT-based replication strategy improves content delivery efficiency in dynamic cloud computing environments, providing an effective solution to the availability, reliability, and performance challenges.
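The decision step in such a strategy, "forecast demand, then replicate where predicted demand is high", can be sketched as follows. The forecast numbers, region names, and threshold are placeholders; a real system would obtain them from a trained Temporal Fusion Transformer, which is not shown here.

```python
# Minimal sketch of a forecast-driven replication decision (not the TD2RS/TFT model itself).
FORECASTS = {                 # hypothetical next-hour request forecasts per cloud region
    "cloud-us": {"video.mp4": 950, "doc.pdf": 40},
    "cloud-eu": {"video.mp4": 120, "doc.pdf": 800},
    "cloud-ap": {"video.mp4": 30,  "doc.pdf": 25},
}
REPLICATION_THRESHOLD = 100   # assumed: replicate where predicted demand exceeds this

def replication_plan(forecasts, threshold):
    """Return, for each content item, the regions that should hold a replica."""
    plan = {}
    for region, per_item in forecasts.items():
        for item, demand in per_item.items():
            if demand >= threshold:
                plan.setdefault(item, []).append(region)
    return plan

print(replication_plan(FORECASTS, REPLICATION_THRESHOLD))
# {'video.mp4': ['cloud-us', 'cloud-eu'], 'doc.pdf': ['cloud-eu']}
```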

RevDate: 2025-06-26

Ravula V, M Ramaiah (2025)

Enhancing phishing detection with dynamic optimization and character-level deep learning in cloud environments.

PeerJ. Computer science, 11:e2640.

As cloud computing becomes increasingly prevalent, the detection and prevention of phishing URL attacks are essential, particularly in the Internet of Vehicles (IoV) environment, to maintain service reliability. In such a scenario, an attacker could send misleading phishing links, potentially compromising the system's functionality or, at worst, leading to a complete shutdown. To address these emerging threats, this study introduces a novel Dynamic Arithmetic Optimization Algorithm with Deep Learning-Driven Phishing URL Classification (DAOA-DLPC) model for cloud-enabled IoV infrastructure. The research utilizes character-level embeddings instead of word embeddings, as the former can capture intricate URL patterns more effectively. These embeddings are integrated with a deep learning model combining Multi-Head Attention and Bidirectional Gated Recurrent Units (MHA-BiGRU). To improve precision, hyperparameters are tuned using DAOA. The proposed method offers a feasible solution for identifying phishing URLs and achieves computational efficiency through the attention mechanism and dynamic hyperparameter optimization. The need for this work comes from the observation that traditional machine learning approaches are not effective in dynamic environments such as the phishing threat landscape. The presented DLPC approach can learn new forms of phishing attacks in real time and reduce false positives. The experimental results show that the proposed DAOA-DLPC model outperforms the other models with an accuracy of 98.85%, recall of 98.49%, and F1-score of 98.38%, and can effectively detect safe and phishing URLs in dynamic environments. These results imply that the proposed model is more effective than conventional models at distinguishing between safe and unsafe URLs.
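A generic character-level MHA-BiGRU classifier of this kind can be sketched in Keras as below. The URL length, vocabulary handling, layer sizes, and number of attention heads are assumptions for illustration, not the DAOA-tuned values of the DLPC model.

```python
import tensorflow as tf

MAX_LEN, VOCAB = 200, 128  # URLs truncated/padded to 200 chars; ASCII vocabulary

def encode_url(url):
    """Map each character to its ASCII code (0 is reserved for padding)."""
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    return ids + [0] * (MAX_LEN - len(ids))

# Character-level embedding -> BiGRU -> multi-head self-attention -> sigmoid output.
inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB, 32)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True))(x)
x = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

print(encode_url("http://example.com/login")[:10])  # first ten character ids
```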

RevDate: 2025-06-26

R A, M G (2025)

Improved salp swarm algorithm based optimization of mobile task offloading.

PeerJ. Computer science, 11:e2818.

BACKGROUND: The realization of computation-intensive applications such as real-time video processing, virtual/augmented reality, and face recognition becomes possible for mobile devices with the latest advances in communication technologies. These applications require complex computation for a better user experience and real-time decision-making. However, Internet of Things (IoT) and mobile devices have limited computational power and energy. Executing these computation-intensive tasks on edge devices may result in high energy consumption or high computation latency. In recent times, mobile edge computing (MEC) has been used and modernized for offloading these complex tasks. In MEC, IoT devices transmit their tasks to edge servers, which in turn carry out faster computation.

METHODS: However, both IoT devices and edge servers place an upper limit on the number of concurrent tasks they can execute. Furthermore, offloading a small task (1 KB) to an edge server leads to improved energy consumption. Thus, there is a need for an optimum range for task offloading so that energy consumption and response time are minimal. Evolutionary algorithms are well suited to such multiobjective problems; here the objectives are to reduce energy, memory usage, and delay while identifying the tasks to offload. Therefore, this study presents an improved salp swarm algorithm-based Mobile Application Offloading Algorithm (ISSA-MAOA) technique for MEC.
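For reference, the standard salp swarm algorithm update rules (leaders follow the food source; followers average with their predecessor) are sketched below on a toy objective. The energy/latency objective and constraints of ISSA-MAOA, as well as the "improved" modifications, are not modeled here; bounds and population sizes are arbitrary.

```python
import numpy as np

def sphere(x):                       # toy objective standing in for the offloading cost
    return float(np.sum(x ** 2))

def salp_swarm(obj, dim=2, n=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n, dim))
    food = min(pop, key=obj).copy()                   # best solution found so far
    for t in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * t / iters) ** 2)        # exploration/exploitation balance
        for i in range(n):
            if i < n // 2:                            # leader salps move around the food
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                pop[i] = np.where(c3 >= 0.5, food + step, food - step)
            else:                                     # followers average with predecessor
                pop[i] = (pop[i] + pop[i - 1]) / 2.0
            pop[i] = np.clip(pop[i], lb, ub)
            if obj(pop[i]) < obj(food):
                food = pop[i].copy()
    return food, obj(food)

best, cost = salp_swarm(sphere)
print("best position:", best, "cost:", cost)
```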

RESULTS: This technique harnesses the optimization capabilities of the improved salp swarm algorithm (ISSA) to intelligently allocate computing tasks between mobile devices and the cloud, aiming to concurrently minimize energy consumption, memory usage, and task completion delays. Through the proposed ISSA-MAOA, the study contributes to the enhancement of mobile cloud computing (MCC) frameworks, providing a more efficient and sustainable solution for offloading tasks in mobile applications. The results of this research contribute to better resource management, improved user interactions, and enhanced efficiency in MCC environments.

RevDate: 2025-06-26

Ibrahim K, Sajid A, Ullah I, et al (2025)

Fuzzy inference rule based task offloading model (FI-RBTOM) for edge computing.

PeerJ. Computer science, 11:e2657.

The key objective of edge computing is to reduce delays and provide consumers with high-quality services. However, there are certain challenges, such as high user mobility and the dynamic environments created by IoT devices. Additionally, the limitations of constrained device resources impede effective task completion. Task offloading is one of the key challenges for edge computing and is addressed in this research. An efficient fuzzy inference rule-based task-offloading model (FI-RBTOM) is proposed in this context. The key decision of the proposed model is whether a task should be offloaded to an edge server or a cloud server, or processed on a local node. The four important input parameters are bandwidth, CPU utilization, task length, and task size. The proposed FI-RBTOM is simulated using the MATLAB fuzzy logic tool with 75% training and 25% testing, and an overall error rate of 0.39875 is achieved.
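To make the fuzzy-inference idea concrete, the sketch below fuzzifies the four inputs named above and applies a toy Mamdani-style rule base to pick local, edge, or cloud execution. The membership shapes, rules, and units are invented for illustration and are not the FI-RBTOM rule base (which the paper implements in MATLAB).

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_decision(bandwidth_mbps, cpu_util_pct, task_len_mi, task_size_kb):
    # Fuzzify the four inputs named in the abstract (membership shapes are assumptions).
    bw_low = tri(bandwidth_mbps, -1, 0, 20)
    bw_high = tri(bandwidth_mbps, 10, 50, 101)
    cpu_busy = tri(cpu_util_pct, 50, 100, 151)
    cpu_idle = tri(cpu_util_pct, -1, 0, 60)
    big_task = max(tri(task_len_mi, 500, 2000, 4001), tri(task_size_kb, 100, 1000, 2001))
    huge_task = tri(task_len_mi, 2000, 4000, 8001)

    # Toy Mamdani-style rule base: min = AND, max = OR / aggregation.
    local = max(min(cpu_idle, 1 - big_task), bw_low)   # small job or weak link -> run locally
    edge = min(cpu_busy, bw_high, big_task)            # busy device, good link -> edge server
    cloud = min(cpu_busy, bw_high, huge_task)          # very large jobs -> cloud server

    scores = {"local": local, "edge": edge, "cloud": cloud}
    return max(scores, key=scores.get), scores

decision, strengths = offload_decision(bandwidth_mbps=40, cpu_util_pct=85,
                                        task_len_mi=2500, task_size_kb=800)
print(decision, strengths)   # 'edge' wins for this input
```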

RevDate: 2025-06-26

Sang Y, Guo Y, Wang B, et al (2025)

Diversified caching algorithm with cooperation between edge servers.

PeerJ. Computer science, 11:e2824.

Edge computing makes up for the high latency of the central cloud network by deploying server resources in close proximity to users. The storage and other resources configured on edge servers are limited, and a reasonable cache replacement strategy is conducive to improving the cache hit ratio of edge services, thereby reducing service latency and enhancing service quality. The spatiotemporal correlation of user service request distribution brings opportunities and challenges to edge service caching. Collaboration between edge servers is often ignored in existing research on caching decisions, which can easily lead to a low edge cache hit rate, thereby reducing the efficiency of edge resource use and service quality. Therefore, this article proposes a diversified caching method that ensures the diversity of edge cache services and utilizes inter-server collaboration to enhance the cache hit rate. After a service request reaches the server, if it misses, the proposed algorithm judges whether a neighbor node can provide the service based on the neighbor node's cache information, and then the server and the neighbor node jointly decide how to cache the service. The performance of the proposed diversified caching method is evaluated through a large number of simulation experiments, and the experimental results show that the proposed method can improve the cache hit rate by 27.01-37.43%, reduce the average service delay by 25.57-30.68%, and maintain good performance as the scale of the edge computing platform changes.
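A minimal sketch of the cooperative lookup step (on a local miss, ask neighbors before falling back to the cloud) is shown below, assuming a plain LRU policy and a two-server topology. This is not the proposed diversified caching algorithm, only the collaboration pattern it builds on.

```python
from collections import OrderedDict

class EdgeServer:
    """LRU cache that asks neighbouring edge servers before going to the cloud."""

    def __init__(self, name, capacity=3):
        self.name, self.capacity = name, capacity
        self.cache = OrderedDict()
        self.neighbors = []

    def has(self, item):
        return item in self.cache

    def get(self, item):
        if item in self.cache:                      # local hit
            self.cache.move_to_end(item)
            return f"{self.name}: local hit"
        for n in self.neighbors:                    # cooperative lookup
            if n.has(item):
                return f"{self.name}: served by neighbor {n.name}"
        self._insert(item)                          # miss everywhere -> fetch from cloud
        return f"{self.name}: fetched from cloud and cached"

    def _insert(self, item):
        self.cache[item] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict least-recently-used entry

edge_a, edge_b = EdgeServer("edge-a"), EdgeServer("edge-b")
edge_a.neighbors, edge_b.neighbors = [edge_b], [edge_a]
print(edge_b.get("video-7"))     # fetched from cloud and cached on edge-b
print(edge_a.get("video-7"))     # served by neighbor edge-b (no duplicate copy cached)
```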

RevDate: 2018-12-02
CmpDate: 2017-12-25

Long J, MJ Yuan (2017)

A novel clinical decision support algorithm for constructing complete medication histories.

Computer methods and programs in biomedicine, 145:127-133.

A patient's complete medication history is a crucial element for physicians to develop a full understanding of the patient's medical conditions and treatment options. However, due to the fragmented nature of medical data, this process can be very time-consuming, and it is often impossible for physicians to construct a complete medication history for complex patients. In this paper, we describe an accurate, computationally efficient and scalable algorithm to construct a medication history timeline. The algorithm is developed and validated based on 1 million random prescription records from a large national prescription data aggregator. Our evaluation shows that the algorithm can be scaled horizontally on-demand, making it suitable for future delivery in a cloud-computing environment. We also propose that this cloud-based medication history computation algorithm could be integrated into Electronic Medical Records, enabling informed clinical decision-making at the point of care.

RevDate: 2025-06-25

Tran-Van NY, KH Le (2025)

A multimodal skin lesion classification through cross-attention fusion and collaborative edge computing.

Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society, 124:102588 pii:S0895-6111(25)00097-7 [Epub ahead of print].

Skin cancer is a significant global health concern requiring early and accurate diagnosis to improve patient outcomes. While deep learning-based computer-aided diagnosis (CAD) systems have emerged as effective diagnostic support tools, they often face three key limitations: low diagnostic accuracy due to reliance on single-modality data (e.g., dermoscopic images), high network latency in cloud deployments, and privacy risks from transmitting sensitive medical data to centralized servers. To overcome these limitations, we propose a unified solution that integrates a multimodal deep learning model with a collaborative inference scheme for skin lesion classification. Our model enhances diagnostic accuracy by fusing dermoscopic images with patient metadata via a novel cross-attention-based feature fusion mechanism. Meanwhile, the collaborative scheme distributes computational tasks across IoT and edge devices, reducing latency and enhancing data privacy by processing sensitive information locally. Our experiments on multiple benchmark datasets demonstrate the effectiveness of this approach and its generalizability, such as achieving a classification accuracy of 95.73% on the HAM10000 dataset, outperforming competitors. Furthermore, the collaborative inference scheme significantly improves efficiency, achieving latency speedups of up to 20% and 47% over device-only and edge-only schemes.
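The cross-attention fusion of image features with patient metadata can be sketched generically in PyTorch as below, using the metadata embedding as the query over patch-level image tokens. Dimensions, the metadata encoder, and the backbone that would produce the image tokens are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse patch-level image features with a patient-metadata embedding via cross-attention."""

    def __init__(self, dim=128, num_heads=4, meta_in=8, num_classes=7):
        super().__init__()
        self.meta_proj = nn.Linear(meta_in, dim)             # encode age/sex/site vector
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, image_tokens, metadata):
        # image_tokens: (B, N, dim) patch features from a CNN/ViT backbone (not shown)
        # metadata:     (B, meta_in) tabular patient attributes
        query = self.meta_proj(metadata).unsqueeze(1)        # (B, 1, dim)
        fused, _ = self.cross_attn(query, image_tokens, image_tokens)
        return self.classifier(fused.squeeze(1))             # (B, num_classes)

model = CrossAttentionFusion()
logits = model(torch.randn(2, 49, 128), torch.randn(2, 8))
print(logits.shape)   # torch.Size([2, 7])
```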

RevDate: 2025-06-24
CmpDate: 2025-06-24

Nalina V, Prabhu D, Sahayarayan JJ, et al (2025)

Advancements in AI for Computational Biology and Bioinformatics: A Comprehensive Review.

Methods in molecular biology (Clifton, N.J.), 2952:87-105.

The field of computational biology and bioinformatics has seen remarkable progress in recent years, driven largely by advancements in artificial intelligence (AI) technologies. This review synthesizes the latest developments in AI methodologies and their applications in addressing key challenges within the field of computational biology and bioinformatics. This review begins by outlining fundamental concepts in AI relevant to computational biology, including machine learning algorithms such as neural networks, support vector machines, and decision trees. It then explores how these algorithms have been adapted and optimized for specific tasks in bioinformatics, such as sequence analysis, protein structure prediction, and drug discovery. AI techniques can be integrated with big data analytics, cloud computing, and high-performance computing to handle the vast amounts of biological data generated by modern experimental techniques. The chapter discusses the role of AI in processing and interpreting various types of biological data, including genomic sequences, protein-protein interactions, and gene expression profiles. This chapter highlights recent breakthroughs in AI-driven precision medicine, personalized genomics, and systems biology, showcasing how AI algorithms are revolutionizing our understanding of complex biological systems and driving innovations in healthcare and biotechnology. Additionally, it addresses emerging challenges and future directions in the field, such as the ethical implications of AI in healthcare, the need for robust validation and reproducibility of AI models, and the importance of interdisciplinary collaboration between computer scientists, biologists, and clinicians. In conclusion, this comprehensive review provides insights into the transformative potential of AI in computational biology and bioinformatics, offering a roadmap for future research and development in this rapidly evolving field.

RevDate: 2025-06-24
CmpDate: 2025-06-24

Wira SS, Tan CK, Wong WP, et al (2025)

Cloud-native simulation framework for gossip protocol: Modeling and analyzing network dynamics.

PloS one, 20(6):e0325817 pii:PONE-D-24-43101.

This research paper explores the implementation of gossip protocols in a cloud-native framework through network modeling and simulation analysis. Gossip protocols are known for their decentralized and fault-tolerant nature. Simulating gossip protocols with conventional tools may face limitations in flexibility and scalability, complicating analysis, especially for larger or more diverse networks. In this paper, gossip protocols are tested within the context of cloud-native computing, leveraging its scalability, flexibility, and observability. The study aims to assess the performance and feasibility of gossip protocols within cloud-native settings through a simulated environment. The paper delves into the theoretical foundation of gossip protocols, highlights the core components of cloud-native computing, and explains the methodology employed in the simulation. A detailed guide is provided on utilizing cloud-native frameworks to simulate gossip protocols across varied network environments. The simulation analysis provides insights into gossip protocols' behavior in distributed cloud-native systems, evaluating aspects of scalability, reliability, and observability. This investigation contributes to understanding the practical implications and potential applications of gossip protocols within modern cloud-native architectures, which can also apply to conventional network infrastructure.
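The core behavior being simulated, rumour dissemination by random push-gossip rounds, can be reproduced with a few lines of plain Python, independent of any cloud-native tooling. The network size and fan-out below are arbitrary and this is not the authors' simulation framework.

```python
import random

def simulate_push_gossip(num_nodes=100, fanout=3, seed=42):
    """Count rounds until a rumour started at node 0 reaches every node."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < num_nodes:
        rounds += 1
        newly = set()
        for node in informed:
            # each informed node pushes the rumour to `fanout` random peers
            for peer in rng.sample(range(num_nodes), fanout):
                if peer not in informed:
                    newly.add(peer)
        informed |= newly
    return rounds

print("rounds to full dissemination:", simulate_push_gossip())
```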

RevDate: 2025-06-20

Maiyza AI, Hassan HA, Sheta WM, et al (2025)

VTGAN based proactive VM consolidation in cloud data centers using value and trend approaches.

Scientific reports, 15(1):20133 pii:10.1038/s41598-025-04757-z.

Reducing energy consumption and optimizing resource usage are essential goals for researchers and cloud providers managing large cloud data centers. Recent advancements have demonstrated the effectiveness of virtual machine consolidation and live migrations as viable solutions. However, many existing strategies are based on immediate workload fluctuations to detect host overload or underload and trigger migration processes. This approach can lead to frequent and unnecessary VM migrations, resulting in energy inefficiency, performance degradation, and service-level agreement (SLA) breaches. Moreover, traditional time series and machine learning models often struggle to accurately predict the dynamic nature of cloud workloads. This paper presents a consolidation strategy based on predicting resource utilization to identify overloaded hosts using novel hybrid value trend generative adversarial network (VTGAN) models. These models not only predict future workloads but also forecast workload trends (i.e., the upward or downward direction of the workload). Trend classification can simplify the decision-making process in resource management approaches. We perform simulations using real PlanetLab workloads on Cloudsim to assess the effectiveness of the proposed VTGAN approaches, based on value and trend, compared to the baseline algorithms. The experimental findings demonstrate that the VTGAN (Up current and predicted trends) approach significantly reduces SLA violations and the number of VM migrations by 79% and 56%, respectively, compared to THR-MMT-PBFD. Additionally, incorporating VTGAN into the VM placement algorithm to disregard hosts predicted to become overloaded further improves performance. After excluding these predicted overloaded servers from the placement process, SLA violations and the number of VM migrations are reduced by 84% and 76%, respectively, compared to THR-MMT-PBFD.

RevDate: 2025-06-19

Kumar J, Saxena D, Gupta K, et al (2025)

A Comprehensively Adaptive Architectural Optimization-Ingrained Quantum Neural Network Model for Cloud Workloads Prediction.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

Accurate workload prediction and advanced resource reservation are indispensably crucial for managing dynamic cloud services. Traditional neural networks and deep learning models frequently encounter challenges with diverse, high-dimensional workloads, especially during sudden resource demand changes, leading to inefficiencies. This issue arises from their limited optimization during training, relying only on parametric (interconnection weights) adjustments using conventional algorithms. To address this issue, this work proposes a novel comprehensively adaptive architectural optimization-based variable quantum neural network (CA-QNN), which combines the efficiency of quantum computing with complete structural and qubit vector parametric learning. The model converts workload data into qubits, processed through qubit neurons with controlled not-gated activation functions for intuitive pattern recognition. In addition, a comprehensive architecture optimization algorithm for networks is introduced to facilitate the learning and propagation of the structure and parametric values in variable-sized quantum neural networks (VQNNs). This algorithm incorporates quantum adaptive modulation (QAM) and size-adaptive recombination during the training process. The performance of the CA-QNN model is thoroughly investigated against seven state-of-the-art methods across four benchmark datasets of heterogeneous cloud workloads. The proposed model demonstrates superior prediction accuracy, reducing prediction errors by up to 93.40% and 91.27% compared to existing deep learning and QNN-based approaches.

RevDate: 2025-06-19

Wu Z, Zhu M, Huang Z, et al (2025)

Graphon-Based Visual Abstraction for Large Multi-Layer Networks.

IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].

Graph visualization techniques provide a foundational framework for offering comprehensive overviews and insights into cloud computing systems, facilitating efficient management and ensuring their availability and reliability. Despite the enhanced computational and storage capabilities of larger-scale cloud computing architectures, they introduce significant challenges to traditional graph-based visualization due to issues of hierarchical heterogeneity, scalability, and data incompleteness. This paper proposes a novel abstraction approach to visualize large multi-layer networks. Our method leverages graphons, a probabilistic representation of network layers, to encompass three core steps: an inner-layer summary to identify stable and volatile substructures, an inter-layer mixup for aligning heterogeneous network layers, and a context-aware multi-layer joint sampling technique aimed at reducing network scale while retaining essential topological characteristics. By abstracting complex network data into manageable weighted graphs, with each graph depicting a distinct network layer, our approach renders these intricate systems accessible on standard computing hardware. We validate our methodology through case studies, quantitative experiments and expert evaluations, demonstrating its effectiveness in managing large multi-layer networks, as well as its applicability to broader network types such as transportation and social networks.

RevDate: 2025-06-16

Sina EM, Pena J, Zafar S, et al (2025)

Automated Machine Learning Classification of Optical Coherence Tomography Images of Retinal Conditions Using Google Cloud Vertex AI.

Retina (Philadelphia, Pa.) pii:00006982-990000000-01081 [Epub ahead of print].

PURPOSE: Automated machine learning (AutoML) is an artificial intelligence (AI) tool that streamlines image recognition model development. This study evaluates the diagnostic performance of Google VertexAI AutoML in differentiating age-related macular degeneration (AMD), diabetic macular edema (DME), epiretinal membrane (ERM), retinal vein occlusion (RVO), and healthy controls using optical coherence tomography (OCT) images.

METHODS: A publicly available, validated OCT dataset of 1965 de-identified images from 759 patients was used. Images were labeled and uploaded to VertexAI. A single-label classification model was trained, validated, and tested using an 80%-10%-10% split. Diagnostic metrics included area under the precision-recall curve (AUPRC), sensitivity, specificity, and positive and negative predictive value (PPV, NPV). A sub-analysis evaluated neovascular versus non-neovascular AMD.
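For readers unfamiliar with the metrics listed above, sensitivity, specificity, PPV, and NPV all follow directly from a 2x2 confusion table, as in the small sketch below. The counts are made-up numbers for one hypothetical class, not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, PPV (precision), and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Made-up counts for one class (e.g. AMD vs. everything else) in a held-out test split.
print(diagnostic_metrics(tp=120, fp=2, tn=70, fn=1))
# {'sensitivity': 0.9917..., 'specificity': 0.9722..., 'ppv': 0.9836..., 'npv': 0.9859...}
```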

RESULTS: The AutoML model achieved high accuracy (AUPRC = 0.991), with sensitivity, specificity, and PPV of 95.9%, 96.9%, and 95.9%, respectively. AMD classification performed best (AUPRC = 0.999, precision = 98.4%, recall = 99.2%). ERM (AUPRC = 0.978, precision = 92.9%, recall = 86.7%) and DME (AUPRC = 0.895, precision = 81.3%, recall = 86.7%) followed. RVO recall was 80% despite 100% precision. Neovascular AMD outperformed non-neovascular AMD (AUPRC = 0.963 vs. 0.915).

CONCLUSION: Our AutoML model accurately classifies OCT images of retinal conditions, demonstrating performance comparable or superior to traditional ML methods. Its user-friendly design supports scalable AI-driven clinical integration.

RevDate: 2025-06-18
CmpDate: 2025-06-16

Oliullah K, Whaiduzzaman M, Mahi MJN, et al (2025)

A machine learning based authentication and intrusion detection scheme for IoT users anonymity preservation in fog environment.

PloS one, 20(6):e0323954.

Authentication is a critical challenge in fog computing security, especially as fog servers provide services to many IoT users. The conventional authentication process often requires disclosing sensitive personal information, such as usernames, emails, mobile numbers, and passwords, that end users are reluctant to share with intermediary services (i.e., fog servers). With the rapid growth of IoT networks, existing authentication methods often fail to balance low computational overhead with strong security, leaving systems vulnerable to various attacks, including unauthorized access and data interception. Additionally, traditional intrusion detection methods are not well-suited to the distinct characteristics of IoT devices, resulting in low accuracy when existing anomaly detection methods are applied. In this paper, we incorporate a two-step authentication process, starting with anonymous authentication using a secret ID with Elliptic Curve Cryptography (ECC), followed by an intrusion detection algorithm for users flagged for suspicious activity. The scheme allows users to register with a Cloud Service Provider (CSP) using encrypted credentials. The CSP responds with a secret number reserved in the fog node for the IoT user. To access the services provided by the Fog Service Provider (FSP), IoT users must submit a secret ID. Furthermore, we introduce a stacked ensemble learning approach for intrusion detection that achieves 99.86% accuracy, 99.89% precision, 99.96% recall, and a 99.91% F1-score in detecting anomalous instances, with a support count of 50,376. This approach is applied when users fail to provide a correct secret ID. Our proposed scheme utilizes several hash functions together with symmetric encryption and decryption techniques to ensure secure end-to-end communication.
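A generic stacked ensemble for anomaly detection can be put together with scikit-learn as below. The base learners, meta-learner, and synthetic features are placeholders for illustration; they are not the paper's feature set or model configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for network-traffic features labelled normal (0) / anomalous (1).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-learner over base predictions
)
stack.fit(X_tr, y_tr)
print(classification_report(y_te, stack.predict(X_te), digits=4))
```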

RevDate: 2025-06-21

Koning E, Subedi A, R Krishnakumar (2025)

Poplar: a phylogenomics pipeline.

Bioinformatics advances, 5(1):vbaf104.

MOTIVATION: Generating phylogenomic trees from the genomic data is essential in understanding biological systems. Each step of this complex process has received extensive attention and has been significantly streamlined over the years. Given the public availability of data, obtaining genomes for a wide selection of species is straightforward. However, analyzing that data to generate a phylogenomic tree is a multistep process with legitimate scientific and technical challenges, often requiring a significant input from a domain-area scientist.

RESULTS: We present Poplar, a new, streamlined computational pipeline that addresses the computational and logistical issues that arise when constructing phylogenomic trees. It provides a framework that runs state-of-the-art software for the essential steps in the phylogenomic pipeline, beginning from a genome with or without an annotation and resulting in a species tree. Running Poplar requires no external databases, and it enables parallel execution on clusters and in cloud computing environments. The trees generated by Poplar match closely with state-of-the-art published trees. Using Poplar is far simpler and quicker than manually running a phylogenomic pipeline.

AVAILABILITY AND IMPLEMENTATION: Freely available on GitHub at https://github.com/sandialabs/poplar. Implemented using Python and supported on Linux.

RevDate: 2025-06-14

Langarizadeh M, M Hajebrahimi (2025)

Medical Big Data Storage in Precision Medicine: A Systematic Review.

Journal of biomedical physics & engineering, 15(3):205-220.

BACKGROUND: The characteristics of medical data in Precision Medicine (PM), the challenges related to their storage and retrieval, and the effective facilities to address these challenges are importantly considered in implementing PM. For this purpose, a secured and scalable infrastructure for various data integration and storage is needed.

OBJECTIVE: This study aimed to determine the characteristics of PM data and recognize the challenges and solutions related to appropriate infrastructure for data storage and its related issues.

MATERIAL AND METHODS: In this systematic study, coherent research was conducted on Web of Science, Scopus, PubMed, Embase, and Google Scholar from 2015 to 2023. A total of 16 articles were selected and evaluated based on the inclusion and exclusion criteria and the central search theme of the study.

RESULTS: A total of 1,961 studies were identified from the designated databases; 16 articles met the eligibility criteria and were classified into five main sections: PM data and its major characteristics based on the volume, variety, and velocity (3Vs) of medical big data; data quality issues; appropriate infrastructure for PM data storage; cloud computing and PM infrastructure; and security and privacy. The variety of PM data is categorized into four major categories.

CONCLUSION: A suitable infrastructure for precision medicine should be capable of integrating and storing heterogeneous data from diverse departments and sources. By leveraging big data management experiences from other industries and aligning their characteristics with those in precision medicine, it is possible to facilitate the implementation of precision medicine while avoiding duplication.

RevDate: 2025-06-13

Jourdain S, O'Leary P, Schroeder W, et al (2025)

Trame: Platform Ubiquitous, Scalable Integration Framework for Visual Analytics.

IEEE computer graphics and applications, 45(2):126-134.

Trame is an open-source, Python-based, scalable integration framework for visual analytics. It is the culmination of decades of work by a large and active community, beginning with the creation of VTK, the growth of ParaView as a premier high-performance, client-server computing system, and more recently the creation of web tools, such as VTK.js and VTK.wasm. As an integration environment, trame relies on open-source standards and tools that can be easily combined into effective computing solutions. We have long recognized that impactful analytics tools must be ubiquitous, meaning they run on all major computing platforms, and integrate/interoperate easily with external packages, such as data systems and processing tools, application UI frameworks, and 2-D/3-D graphical libraries. In this article, we present the architecture and use of trame for applications ranging from simple dashboards to complex workflow-based applications. We also describe examples that readily incorporate external tools and run without coding changes on desktop, mobile, cloud, client-server, and interactive computing notebooks, such as Jupyter.
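For orientation, a minimal trame application looks roughly like the sketch below, which follows the pattern in the trame documentation for trame 2.x with Vuetify 2 widgets; module paths and widget arguments may differ in newer trame releases, so treat this as an assumption-laden sketch rather than a definitive example.

```python
from trame.app import get_server
from trame.ui.vuetify import SinglePageLayout
from trame.widgets import vuetify

server = get_server()                       # trame server (WebSocket + web assets)
server.state.resolution = 6                 # shared, reactive application state

with SinglePageLayout(server) as layout:    # standard single-page Vuetify layout
    layout.title.set_text("Hello trame")
    with layout.content:
        # Slider bound to the shared "resolution" state variable
        vuetify.VSlider(v_model=("resolution", 6), min=3, max=60, step=1)

if __name__ == "__main__":
    server.start()                          # serves the app locally in a browser
```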

RevDate: 2025-06-13

Li H, Wang J, H Liu (2025)

Empowering Precision Medicine for Rare Diseases through Cloud Infrastructure Refactoring.

AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science, 2025:300-311.

Rare diseases affect approximately 1 in 11 Americans, yet their diagnosis remains challenging due to limited clinical evidence, low awareness, and lack of definitive treatments. Our project aims to accelerate rare disease diagnosis by developing a comprehensive informatics framework leveraging data mining, semantic web technologies, deep learning, and graph-based embedding techniques. However, our on-premises computational infrastructure faces significant challenges in scalability, maintenance, and collaboration. This study focuses on developing and evaluating a cloud-based computing infrastructure to address these challenges. By migrating to a scalable, secure, and collaborative cloud environment, we aim to enhance data integration, support advanced predictive modeling for differential diagnoses, and facilitate widespread dissemination of research findings to stakeholders, the research community, and the public. We also propose that the migration be facilitated through a reliable, standardized workflow designed to ensure minimal disruption and maintain data integrity for existing research projects.

RevDate: 2025-06-11
CmpDate: 2025-06-09

Das B, LS Heath (2025)

Variant evolution graph: Can we infer how SARS-CoV-2 variants are evolving?.

PloS one, 20(6):e0323970.

The SARS-CoV-2 virus has undergone extensive mutations over time, resulting in considerable genetic diversity among circulating strains. This diversity directly affects important viral characteristics, such as transmissibility and disease severity. During a viral outbreak, the rapid mutation rate produces a large cloud of variants, referred to as a viral quasispecies. However, many variants are lost due to the bottleneck of transmission and survival. Advances in next-generation sequencing have enabled continuous and cost-effective monitoring of viral genomes, but constructing reliable phylogenetic trees from the vast collection of sequences in GISAID (the Global Initiative on Sharing All Influenza Data) presents significant challenges. We introduce a novel graph-based framework inspired by quasispecies theory, the Variant Evolution Graph (VEG), to model viral evolution. Unlike traditional phylogenetic trees, VEG accommodates multiple ancestors for each variant and maps all possible evolutionary pathways. The strongly connected subgraphs in the VEG reveal critical evolutionary patterns, including recombination events, mutation hotspots, and intra-host viral evolution, providing deeper insights into viral adaptation and spread. We also derive the Disease Transmission Network (DTN) from the VEG, which supports the inference of transmission pathways and super-spreaders among hosts. We have applied our method to genomic data sets from five arbitrarily selected countries: Somalia, Bhutan, Hungary, Iran, and Nepal. Our study compares three methods for computing mutational distances to build the VEG (sourmash, pyani, and edit distance) with the phylogenetic approach using Maximum Likelihood (ML). Among these, ML is the most computationally intensive, requiring multiple sequence alignment and probabilistic inference, making it the slowest. In contrast, sourmash is the fastest, followed by the edit distance approach, while pyani takes more time due to its BLAST-based computations. This comparison highlights the computational efficiency of VEG, making it a scalable alternative for analyzing large viral data sets.
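As a toy illustration of the idea of linking variants by small mutational (edit) distances into a directed graph that allows multiple ancestors, consider the sketch below. The sequences, the ancestor-candidate rule, and the distance cutoff are invented; this is not the VEG construction algorithm itself.

```python
import networkx as nx

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Invented toy "variants"; collection order stands in for sampling time.
variants = {"v0": "ACGTACGT", "v1": "ACGTACCT", "v2": "ACCTACCT", "v3": "ACGTTCGT"}

veg = nx.DiGraph()
names = list(variants)
for i, child in enumerate(names[1:], 1):
    for parent in names[:i]:                      # earlier variants are candidate ancestors
        d = edit_distance(variants[parent], variants[child])
        if d <= 2:                                # assumed mutational-distance cutoff
            veg.add_edge(parent, child, distance=d)

# Note that v2 ends up with two ancestors (v0 and v1), unlike in a strict tree.
print(sorted(veg.edges(data=True)))
```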

RevDate: 2025-06-11

Aldosari B (2025)

Cybersecurity in Healthcare: New Threat to Patient Safety.

Cureus, 17(5):e83614.

The rapid integration of technology into healthcare systems has brought significant improvements in patient care and operational efficiency, but it has also introduced new cybersecurity challenges. This manuscript explores the evolving landscape of cybersecurity risks in healthcare, with a focus on their potential impact on patient safety and the strategies to mitigate these threats. The rise of interconnected systems, electronic health records (EHRs), and Internet of things (IoT) devices has made safeguarding patient data and healthcare processes increasingly complex. Notable cyber incidents, such as the Anthem Blue Cross breach and the WannaCry ransomware attack, highlight the real-world consequences of these vulnerabilities. The review also examines emerging technologies like AI, cloud computing, telehealth, and wearables, considering their potential benefits and security risks. Best practices for improving healthcare cybersecurity are discussed, including regulatory compliance, risk assessment, data encryption, employee training, and incident response planning. Ultimately, the manuscript emphasizes the ethical responsibility of healthcare organizations to prioritize cybersecurity, ensuring a balance between innovation and security to protect patient data, uphold regulatory standards, and maintain the integrity of healthcare services.

RevDate: 2025-06-12

Palmeira LS, Quintanilha-Peixoto G, da Costa AM, et al (2025)

FUNIN - a fungal glycoside hydrolases 32 enzyme database for developing optimized inulinases.

International journal of biological macromolecules, 318(Pt 2):145050 pii:S0141-8130(25)05603-X [Epub ahead of print].

The enzymatic hydrolysis of inulin, a fructose-rich polysaccharide from plants like Agave spp., is crucial for bioethanol production. Fungal glycoside hydrolase family 32 (GH32) enzymes, especially inulinases, are central to this process, yet no dedicated database existed. To fill this gap, we developed FUNIN, a cloud-based, non-relational database cataloging and analyzing fungal GH32 enzymes relevant to inulin hydrolysis. Built with MongoDB and hosted on AWS, FUNIN integrates enzyme sequences, taxonomic data, physicochemical properties, and annotations from UniProt and InterPro via an automated ELT pipeline. Tools like CLEAN and ProtParam were used for EC number prediction and sequence characterization. The database currently includes 3420 GH32 enzymes, with strong representation from Ascomycota (91.2 %) and key genera such as Fusarium, Aspergillus, and Penicillium. Exo-inulinases (43.9 %), endo-inulinases (33.4 %), and invertases (21.6 %) dominate the dataset. These enzymes share conserved domains (PF00251-PF08244), acidic pI values, and moderate hydrophobicity. A network similarity analysis revealed structural conservation among exo-inulinases. FUNIN includes an automated monthly update via InterPro API, ensuring current data. Publicly accessible at http://funindb.lbqc.org, FUNIN enables rapid data retrieval and supports the development of optimized enzyme cocktails for Agave-based bioethanol production.
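Because the catalogue is backed by MongoDB, a client-side query against such a collection could look like the pymongo sketch below. The connection string, database/collection names, and field names are guesses for illustration only and are not FUNIN's actual schema.

```python
from pymongo import MongoClient

# Hypothetical connection and schema; adjust to the real deployment.
client = MongoClient("mongodb://localhost:27017")
enzymes = client["funin"]["gh32_enzymes"]

# Example: acidic exo-inulinases from Aspergillus, returning a few identifying fields.
query = {
    "activity": "exo-inulinase",
    "genus": "Aspergillus",
    "isoelectric_point": {"$lt": 6.0},
}
projection = {"_id": 0, "uniprot_id": 1, "genus": 1, "isoelectric_point": 1}

for doc in enzymes.find(query, projection).limit(5):
    print(doc)
```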

RevDate: 2025-06-08
CmpDate: 2025-06-06

Khan AA, Laghari AA, Alroobaea R, et al (2025)

A lightweight scalable hybrid authentication framework for Internet of Medical Things (IoMT) using blockchain hyperledger consortium network with edge computing.

Scientific reports, 15(1):19856.

The Internet of Things (IoMT) has revolutionized the global landscape by enabling the hierarchy of interconnectivity between medical devices, sensors, and healthcare applications. Significant limitations in terms of scalability, privacy, and security are associated with this connection. This study presents a scalable, lightweight hybrid authentication system that integrates blockchain and edge computing within a Hyperledger Consortium network to address such real-time problems, particularly the use of Hyperledger Indy. For secure authentication, Hyperledger ensures a permissioned, decentralized, and impenetrable environment, while edge computing lowers latency by processing data closer to IoMT devices. The proposed framework balances security and computing performance by utilizing a hybrid cryptographic technique, like NuCypher Threshold Proxy Re-Encryption. Applicational activities are now appropriate for IoMT devices with limited resources thanks to this integration. By facilitating cooperation between numerous stakeholders with restricted access, the consortium network improves scalability and data governance. Comparing the proposed framework to the state-of-the-art techniques, experimental evaluation shows that it reduces latency by 2.93% and increases authentication efficiency by 98.33%. Therefore, in contrast to current solutions, guarantee data integrity and transparency between patients, consultants, and hospitals. The development of dependable, scalable, and secure IoMT applications is facilitated by this work, enabling next-generation medical applications.

RevDate: 2025-06-04

Zafar I, Unar A, Khan NU, et al (2025)

Molecular biology in the exabyte era: Taming the data deluge for biological revelation and clinical transformation.

Computational biology and chemistry, 119:108535 pii:S1476-9271(25)00195-1 [Epub ahead of print].

The explosive growth in next-generation high-throughput technologies has driven modern molecular biology into the exabyte era, producing an unparalleled volume of biological data across genomics, proteomics, metabolomics, and biomedical imaging. Although this massive expansion of data can power future biological discoveries and precision medicine, it presents considerable challenges, including computational bottlenecks, fragmented data landscapes, and ethical issues related to privacy and accessibility. We highlight novel contributions, such as the application of blockchain technologies to ensure data integrity and traceability, a relatively underexplored solution in this context. We describe how artificial intelligence (AI), machine learning (ML), and cloud computing fundamentally reshape and provide scalable solutions for these challenges by enabling near real-time pattern recognition, predictive modelling, and integrated data analysis. In particular, the use of federated learning models allows privacy-preserving collaboration across institutions. We emphasise the importance of open science, FAIR principles (Findable, Accessible, Interoperable, and Reusable), and blockchain-based audit trails to enhance global collaboration, reproducibility, and data security. By processing multi-omics datasets in integrated formats, we can enhance our understanding of disease mechanisms, facilitate biomarker discovery, and develop AI-assisted, personalised therapeutics. Addressing these technical and ethical demands requires robust governance frameworks that protect sensitive data without hindering innovation. This paper underscores a shift toward more secure, transparent, and collaborative biomedical research, marking a decisive step toward clinical transformation.

RevDate: 2025-06-09
CmpDate: 2025-06-04

Alzahrani N (2025)

Security importance of edge-IoT ecosystem: An ECC-based authentication scheme.

PloS one, 20(6):e0322131.

Despite the many outstanding benefits of cloud computing, such as flexibility, accessibility, efficiency, and cost savings, it still suffers from potential data loss, security concerns, limited control, and availability issues. Experts introduced the edge computing paradigm to handle these issues and challenges better than cloud computing: it connects directly to Internet-of-Things (IoT) devices, sensors, and wearables in a decentralized manner, distributing processing power closer to the data source rather than relying on a central cloud server to handle all computations, which allows faster data processing and reduced latency by processing data locally at the 'edge' of the network where it is generated. However, due to the resource-constrained nature of IoT, sensor, and wearable devices, the edge computing paradigm has endured numerous data breaches owing to the proximity of sensitive data, physical tampering vulnerabilities, privacy concerns related to user-near data collection, and the challenges of managing security across a large number of edge devices. Existing authentication schemes do not fulfill the security needs of the edge computing paradigm; they either have design flaws, are susceptible to various known threats, such as impersonation, insider attacks, denial of service (DoS), and replay attacks, or suffer inadequate performance due to reliance on resource-intensive cryptographic algorithms, like modular exponentiations. Given the pressing need for robust security mechanisms in such a dynamic and vulnerable edge-IoT ecosystem, this article proposes an ECC-based robust authentication scheme for resource-constrained IoT that addresses all known vulnerabilities and counters each identified threat. The correctness of the proposed protocol has been scrutinized through the well-known and widely used Real-Or-Random (RoR) model, ProVerif validation, and a discussion of attacks, demonstrating the thoroughness of the proposed protocol. The performance metrics have been measured by considering computational time complexity, communication cost, and storage overheads, further reinforcing confidence in the proposed solution. The comparative analysis results demonstrate that the proposed ECC-based authentication protocol is 90.05% better in computation cost and 62.41% better in communication cost, and consumes 67.42% less energy compared to state-of-the-art schemes. Therefore, the proposed protocol can be recommended for practical implementation in real-world edge-IoT ecosystems.
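The ECC primitive underpinning such schemes, an elliptic-curve Diffie-Hellman key agreement with a derived session key, can be shown with the `cryptography` package as below. This is not the paper's authentication protocol (which adds registration, nonces, and verification steps); the curve choice and HKDF info label are assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ECC key pair on the P-256 curve.
device_key = ec.generate_private_key(ec.SECP256R1())
edge_key = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides derive the same shared secret from their private key + peer public key.
device_secret = device_key.exchange(ec.ECDH(), edge_key.public_key())
edge_secret = edge_key.exchange(ec.ECDH(), device_key.public_key())
assert device_secret == edge_secret

# Derive a symmetric session key from the shared secret (info label is an assumption).
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"edge-iot-session").derive(device_secret)
print("session key:", session_key.hex())
```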

RevDate: 2025-06-06

Alharbe NR (2025)

Fuzzy clustering based scheduling algorithm for minimizing the tasks completion time in cloud computing environment.

Scientific reports, 15(1):19505 pii:10.1038/s41598-025-02654-z.

This paper explores the complexity of project planning in a cloud computing environment and recognizes the challenges associated with distributed resources, heterogeneity, and dynamic changes in workloads. This research introduces a fresh approach to planning cloud resources more effectively by utilizing fuzzy waterfall techniques. The goal is to make better use of resources while cutting down on scheduling costs. By categorizing resources based on their characteristics, this method aims to lower search costs during project planning and speed up the resource selection process. The paper presents the Budget and Time Constrained Heterogeneous Early Completion (BDHEFT) technique, which is an enhanced version of HEFT tailored to meet specific user requirements, such as budget constraints and execution timelines. With its focus on fuzzy resource allocation that considers task composition and priority, BDHEFT streamlines the project schedule, ultimately reducing both execution time and costs. The algorithm design and mathematical modeling discussed in this study lay a strong foundation for boosting task scheduling efficiency in cloud computing environments, which provides a broad perspective to improve the overall system performance and meet user quality requirements.
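Since BDHEFT extends HEFT, it may help to recall the standard HEFT upward-rank computation, rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)), which orders tasks for scheduling. The toy DAG and costs below are invented, and BDHEFT's fuzzy clustering, budget, and deadline handling are not shown.

```python
from functools import lru_cache

# Toy workflow DAG: task -> list of (successor, average communication cost)
successors = {
    "t1": [("t2", 4), ("t3", 2)],
    "t2": [("t4", 3)],
    "t3": [("t4", 5)],
    "t4": [],
}
avg_compute = {"t1": 10, "t2": 6, "t3": 8, "t4": 4}   # average execution cost per task

@lru_cache(maxsize=None)
def upward_rank(task):
    """rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s))."""
    succ = successors[task]
    if not succ:
        return avg_compute[task]
    return avg_compute[task] + max(comm + upward_rank(s) for s, comm in succ)

# HEFT schedules tasks in decreasing order of upward rank.
order = sorted(successors, key=upward_rank, reverse=True)
print({t: upward_rank(t) for t in order})   # {'t1': 29, 't3': 17, 't2': 13, 't4': 4}
```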

RevDate: 2025-06-03

Shi X, S Geng (2025)

Double-edged sword? Heterogeneous effects of digital technology on environmental regulation-driven green transformation.

Journal of environmental management, 389:125960 pii:S0301-4797(25)01936-X [Epub ahead of print].

In the context of China's dual carbon goal, enterprises' green transformation is a key path to advancing the nation's high-quality economic development. A majority of existing studies have regarded digital technology as a homogeneous variable, and the heterogeneous impacts of various technologies have not been sufficiently explored. Therefore, based on Chinese enterprises' data from 2012 to 2022, this study systematically examines the influence of environmental regulations (ETS) on enterprises' green transformation (GT) from the perspective of digital empowerment by employing difference-in-differences and threshold regression models. The findings reveal that digital transformation (DT) enhances the influence of environmental regulation by strengthening cost and innovation compensation effects. Further analysis indicates that different digital technologies have significant double-edged sword characteristics, wherein artificial intelligence negatively regulates both mechanisms, reflecting a lack of technological adaptability; cloud computing significantly enhances the positive impact of environmental regulation, reflecting its technological maturity; and big data technologies only positively regulate the innovation compensation effect, reflecting the enterprises' application preference. In addition, the combination of digital technologies does not create synergies, indicating firms' challenges in terms of absorptive capacity and organizational change. This study expands the theoretical research on environmental regulation and green transformation and provides a valuable reference for the government to develop targeted policies and enterprises to optimize the path of green transformation.

RevDate: 2025-06-05

Jiang W, Liu C, Liu W, et al (2025)

Advancements in Intelligent Sensing Technologies for Food Safety Detection.

Research (Washington, D.C.), 8:0713.

As a critical global public health concern, food safety has prompted substantial strategic advancements in detection technologies to safeguard human health. Integrated intelligent sensing systems, incorporating advanced information perception and computational intelligence, have emerged as rapid, user-friendly, and cost-effective solutions through the synergy of multisource sensors and smart computing. This review systematically examines the fundamental principles of intelligent sensing technologies, including optical, electrochemical, machine olfaction, and machine gustatory systems, along with their practical applications in detecting microbial, chemical, and physical hazards in food products. The review analyzes the current state and future development trends of intelligent perception from 3 core aspects: sensing technology, signal processing, and modeling algorithms. Driven by technologies such as machine learning and blockchain, intelligent sensing technology can ensure food safety throughout all stages of food processing, storage, and transportation, and provide support for the traceability and authenticity identification of food. It also presents current challenges and development trends associated with intelligent sensing technologies in food safety, including novel sensing materials, edge-cloud computing frameworks, and the co-design of energy-efficient algorithms with hardware architectures. Overall, by addressing current limitations and harnessing emerging innovations, intelligent sensing technologies are poised to establish a more resilient, transparent, and proactive framework for safeguarding food safety across global supply chains.

RevDate: 2025-06-05

Hu T, Shen P, Zhang Y, et al (2025)

OpenPheno: an open-access, user-friendly, and smartphone-based software platform for instant plant phenotyping.

Plant methods, 21(1):76.

BACKGROUND: Plant phenotyping has become increasingly important for advancing plant science, agriculture, and biotechnology. Classic manual methods are labor-intensive and time-consuming, while existing computational tools often require advanced coding skills, high-performance hardware, or PC-based environments, making them inaccessible to non-experts, to resource-constrained users, and to field technicians.

RESULTS: To respond to these challenges, we introduce OpenPheno, an open-access, user-friendly, and smartphone-based platform encapsulated within a WeChat Mini-Program for instant plant phenotyping. The platform is designed for ease of use, enabling users to phenotype plant traits quickly and efficiently with only a smartphone at hand. The platform currently includes tools such as SeedPheno, WheatHeadPheno, LeafAnglePheno, SpikeletPheno, CanopyPheno, TomatoPheno, and CornPheno, each offering specific functionalities such as seed size and count analysis, wheat head detection, leaf angle measurement, spikelet counting, canopy structure analysis, and tomato fruit measurement. In particular, OpenPheno allows developers to contribute new algorithmic tools, further expanding its capabilities to continuously serve the plant phenotyping community.

CONCLUSIONS: By leveraging cloud computing and a widely accessible interface, OpenPheno democratizes plant phenotyping, making advanced tools available to a broader audience, including plant scientists, breeders, and even amateurs. It can play a role in AI-driven breeding by providing the necessary data for genotype-phenotype analysis, thereby accelerating breeding programs. Its integration with smartphones also positions OpenPheno as a powerful tool in the growing field of mobile-based agricultural technologies, paving the way for more efficient, scalable, and accessible agricultural research and breeding.

RevDate: 2025-06-05

Singh AR, Sujatha MS, Kadu AD, et al (2025)

A deep learning and IoT-driven framework for real-time adaptive resource allocation and grid optimization in smart energy systems.

Scientific reports, 15(1):19309.

The rapid evolution of smart grids, driven by rising global energy demand and renewable energy integration, calls for intelligent, adaptive, and energy-efficient resource allocation strategies. Traditional energy management methods, based on static models or heuristic algorithms, often fail to handle real-time grid dynamics, leading to suboptimal energy distribution, high operational costs, and significant energy wastage. To overcome these challenges, this paper presents ORA-DL (Optimized Resource Allocation using Deep Learning) an advanced framework that integrates deep learning, Internet of Things (IoT)-based sensing, and real-time adaptive control to optimize smart grid energy management. ORA-DL employs deep neural networks, reinforcement learning, and multi-agent decision-making to accurately predict energy demand, allocate resources efficiently, and enhance grid stability. The framework leverages both historical and real-time data for proactive power flow management, while IoT-enabled sensors ensure continuous monitoring and low-latency response through edge and cloud computing infrastructure. Experimental results validate the effectiveness of ORA-DL, achieving 93.38% energy demand prediction accuracy, improving grid stability to 96.25%, and reducing energy wastage to 12.96%. Furthermore, ORA-DL enhances resource distribution efficiency by 15.22% and reduces operational costs by 22.96%, significantly outperforming conventional techniques. These performance gains are driven by real-time analytics, predictive modelling, and adaptive resource modulation. By combining AI-driven decision-making, IoT sensing, and adaptive learning, ORA-DL establishes a scalable, resilient, and sustainable energy management solution. The framework also provides a foundation for future advancements, including integration with edge computing, cybersecurity measures, and reinforcement learning enhancements, marking a significant step forward in smart grid optimization.

RevDate: 2025-06-11
CmpDate: 2025-06-01

Czech E, Tyler W, White T, et al (2025)

Analysis-ready VCF at Biobank scale using Zarr.

GigaScience, 14:.

BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasizes efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. The Biobank-scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable and a more scalable approach is needed.

RESULTS: Zarr is a format for storing multidimensional data that is widely used across the sciences, and is ideally suited to massively parallel processing. We present the VCF Zarr specification, an encoding of the VCF data model using Zarr, along with fundamental software infrastructure for efficient and reliable conversion at scale. We show how this format is far more efficient than standard VCF-based approaches, and competitive with specialized methods for storing genotype data in terms of compression ratios and single-threaded calculation performance. We present case studies on subsets of 3 large human datasets (Genomics England: n = 78,195; Our Future Health: n = 651,050; All of Us: n = 245,394) along with whole genome datasets for Norway Spruce (n = 1,063) and SARS-CoV-2 (n = 4,484,157). We demonstrate the potential for VCF Zarr to enable a new generation of high-performance and cost-effective applications via illustrative examples using cloud computing and GPUs.

CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs, and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores, while maintaining compatibility with existing file-oriented workflows.
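
As an illustration of the column-oriented access pattern the format enables, the sketch below (Python, zarr package) opens a hypothetical store named cohort.vcf.zarr laid out with a call_genotype array as in the VCF Zarr specification and reads only the chunks needed for a slice of variants; the store name, slice, and downstream calculation are assumptions for illustration, not taken from the paper.

```python
import zarr

# open a hypothetical VCF Zarr store; call_genotype follows the VCF Zarr field naming
store = zarr.open("cohort.vcf.zarr", mode="r")
genotypes = store["call_genotype"]            # dimensions: (variants, samples, ploidy)

# read only the chunks covering the first 10,000 variants, not the whole matrix
block = genotypes[:10_000]
alt_allele_counts = (block > 0).sum(axis=(1, 2))   # per-variant ALT allele count
print(alt_allele_counts[:5])
```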

RevDate: 2025-06-03
CmpDate: 2025-06-01

Prasad VK, Dansana D, Patro SGK, et al (2025)

IoT-based bed and ventilator management system during the COVID-19 pandemic.

Scientific reports, 15(1):19163.

The COVID-19 outbreak put significant pressure on limited healthcare resources. The number of people who may be affected in the near future is difficult to determine, and the pandemic's healthcare requirements surpassed available capacity. The Internet of Things (IoT) has emerged as a crucial concept for the advancement of information and communication technology, and IoT devices are used in various medical fields such as real-time tracking, patient data management, and healthcare management. Patients can be tracked using a variety of low-powered, lightweight wireless sensor nodes based on body sensor network (BSN) technology, one of the key IoT technologies in healthcare, giving clinicians and patients more options in contemporary healthcare management. This study focuses on the conditions for vacating beds available for COVID-19 patients. A patient's health condition is recognized and categorised as positive or negative for Coronavirus disease (COVID-19) using IoT sensors. The proposed model uses an ARIMA model and a Transformer model to train on a dataset with the aim of providing enhanced prediction. The physical implementation of these models is expected to accelerate patient admission and the provision of emergency services, as predicted patient-influx data are made available to the healthcare system in advance. This predictive capability contributes to the efficient management of healthcare resources. The research findings indicate that the proposed models demonstrate high accuracy, as evidenced by low mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE).
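
As a minimal sketch of the time-series side of such a forecaster, the snippet below (Python, statsmodels) fits an ARIMA model to a short synthetic series of daily admissions and forecasts the next seven days; the series, the (2, 1, 1) order, and the horizon are assumptions for illustration, not values from the paper.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic daily COVID-19 admission counts (illustrative only)
admissions = pd.Series(
    [12, 15, 14, 18, 21, 25, 24, 30, 33, 31, 38, 42, 45, 44, 50, 55, 53, 60, 64, 62],
    index=pd.date_range("2021-04-01", periods=20, freq="D"),
)

model = ARIMA(admissions, order=(2, 1, 1)).fit()
forecast = model.forecast(steps=7)   # expected admissions for the next week
print(forecast.round(1))
```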

RevDate: 2025-05-30

Biba B, BA O'Shea (2025)

Exploring Public Sentiments of Psychedelics Versus Other Substances: A Reddit-Based Natural Language Processing Study.

Journal of psychoactive drugs [Epub ahead of print].

New methods that capture the public's perception of controversial topics may be valuable. This study investigates public sentiments toward psychedelics and other substances through analyses of Reddit discussions, using Google's cloud-based Natural Language Processing (NLP) infrastructure. Our findings indicate that illicit substances such as heroin and methamphetamine are associated with highly negative general sentiments, whereas psychedelics like Psilocybin, LSD, and Ayahuasca generally evoke neutral to slightly positive sentiments. This study underscores the effectiveness and cost efficiency of NLP and machine learning models in understanding the public's perception of sensitive topics. The findings indicate that online public sentiment toward psychedelics may reflect growing acceptance of their therapeutic potential. However, limitations include potential selection bias from the Reddit sample and challenges in accurately interpreting nuanced language using NLP. Future research should aim to diversify data sources and enhance NLP models to capture the full spectrum of public sentiment toward psychedelics. Our findings support the importance of ongoing research and public education to inform policy decisions and therapeutic applications of psychedelics.
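
The study scored sentiment with Google's cloud NLP service; a minimal sketch of that kind of call with the google-cloud-language client is shown below, assuming application-default credentials are configured and using a made-up example sentence. The actual Reddit collection pipeline and substance lexicon are not reproduced here.

```python
from google.cloud import language_v1

# assumes GOOGLE_APPLICATION_CREDENTIALS points at a valid service-account key
client = language_v1.LanguageServiceClient()

text = "Psilocybin therapy helped more than anything else I tried."  # made-up example
document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)

response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```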

RevDate: 2025-05-30

Michaelson D, Schreiber D, Heule MJH, et al (2025)

Producing Proofs of Unsatisfiability with Distributed Clause-Sharing SAT Solvers.

Journal of automated reasoning, 69(2):12.

Distributed clause-sharing SAT solvers can solve challenging problems hundreds of times faster than sequential SAT solvers by sharing derived information among multiple sequential solvers. Unlike sequential solvers, however, distributed solvers have not been able to produce proofs of unsatisfiability in a scalable manner, which limits their use in critical applications. In this work, we present a method to produce unsatisfiability proofs for distributed SAT solvers by combining the partial proofs produced by each sequential solver into a single, linear proof. We first describe a simple sequential algorithm and then present a fully distributed algorithm for proof composition, which is substantially more scalable and general than prior works. Our empirical evaluation with over 1500 solver threads shows that our distributed approach allows proof composition and checking within around 3 × its own (highly competitive) solving time.

RevDate: 2025-06-01

Wang N, Li Y, Li Y, et al (2025)

Fault-tolerant and mobility-aware loading via Markov chain in mobile cloud computing.

Scientific reports, 15(1):18844.

With the development of better communication networks and related technologies, the IoT has become an integral part of modern IT. However, mobile devices' limited memory, computing power, and battery life pose significant challenges to their widespread use. As an alternative, mobile cloud computing (MCC) makes use of cloud resources to boost mobile devices' storage and processing capabilities by moving some program logic to the cloud, which improves performance and saves power. Techniques for mobility-aware offloading are necessary because device movement affects connection quality and network access. Reliance on less-than-ideal mobility models, insufficient fault tolerance, inaccurate offloading decisions, and poor task scheduling are just a few of the limitations that current mobility-aware offloading methods often face. Using fault-tolerant approaches and user mobility patterns modeled by a Markov chain, this research introduces a novel decision-making framework for mobility-aware offloading. The evaluation findings show that, compared to current approaches, the suggested method achieves execution speeds up to 77.35% faster and energy use up to 67.14% lower.
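
A minimal sketch of the mobility model underlying such a framework is shown below (Python with NumPy): a hypothetical three-location Markov chain is stepped forward two transitions to predict where the user is likely to be, which an offloading policy could then use to pick the target server. The states and transition probabilities are invented for illustration.

```python
import numpy as np

# hypothetical locations and a transition matrix learned from a user's movement history
states = ["home", "office", "campus"]
P = np.array([
    [0.7, 0.2, 0.1],   # from home
    [0.3, 0.5, 0.2],   # from office
    [0.2, 0.3, 0.5],   # from campus
])

current = np.array([1.0, 0.0, 0.0])                 # user is currently at home
two_steps_ahead = current @ np.linalg.matrix_power(P, 2)
likely_location = states[int(np.argmax(two_steps_ahead))]
print(dict(zip(states, two_steps_ahead.round(3))), "->", likely_location)
```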

RevDate: 2025-05-31
CmpDate: 2025-05-29

De Oliveira El-Warrak L, C Miceli de Farias (2025)

TWINVAX: conceptual model of a digital twin for immunisation services in primary health care.

Frontiers in public health, 13:1568123.

INTRODUCTION: This paper presents a proposal for the modelling and reference architecture of a digital twin for immunisation services in primary health care centres. The system leverages Industry 4.0 concepts and technologies, such as the Internet of Things (IoT), machine learning, and cloud computing, to improve vaccination management and monitoring.

METHODS: The modelling was conducted using the Unified Modelling Language (UML) to define workflows and processes such as temperature monitoring of storage equipment and tracking of vaccination status. The proposed reference architecture follows the ISO 23247 standard and is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain.

RESULTS: The system enables the storage, monitoring, and visualisation of data related to the immunisation room, specifically concerning the temperature control of ice-lined refrigerators (ILRs) and thermal boxes. An analytic module has been developed to monitor vaccination coverage, correlating individual vaccination statuses with the official vaccination calendar.

DISCUSSION: The proposed digital twin improves vaccine temperature management, reduces vaccine dose wastage, monitors the population's vaccination status, and supports the planning of more effective immunisation actions. The article also discusses the feasibility, potential benefits, and future impacts of deploying this technology within immunisation services.

RevDate: 2025-05-31

Aleisa MA (2025)

Enhancing Security in CPS Industry 5.0 using Lightweight MobileNetV3 with Adaptive Optimization Technique.

Scientific reports, 15(1):18677.

Advanced Cyber-Physical Systems (CPS) that facilitate seamless communication between humans, machines, and objects are revolutionizing industrial automation as part of Industry 5.0, driven by technologies such as IIoT, cloud computing, and artificial intelligence. In addition to enabling flexible, individualized production processes, this growth brings fresh cybersecurity risks, including Distributed Denial of Service (DDoS) attacks. This research proposes a deep learning-based approach designed to enhance security in CPS and address these issues; the system's primary goal is to identify and stop advanced cyberattacks while guaranteeing strong protection for industrial processes in a networked, intelligent environment. The study offers a paradigm for improving CPS security in Industry 5.0 by combining effective data preprocessing, lightweight edge computing, and strong encryption methods. The method starts with preprocessing the IoT23 dataset, using Gaussian filters to reduce noise, mean imputation to handle missing values, and Min-Max normalization for data scaling. The model combines flow-based, time-based, statistical, and deep features, the latter extracted with ResNet-101. Computational efficiency is maximized through MobileNetV3, a lightweight convolutional neural network optimized for mobile and edge devices. The accuracy of the model is further improved by applying a Chaotic Tent-based Puma Optimization (CTPOA) technique. Finally, to ensure secure data transfer and protect private data in CPS settings, AES encryption is combined with discretionary access control. This comprehensive framework achieves high performance, with 99.91% accuracy, and provides strong security for Industry 5.0 applications.
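
A minimal sketch of the lightweight-classifier portion is shown below (Python, torchvision): Min-Max scaling of an input batch followed by a MobileNetV3-Small forward pass configured for a binary benign/attack decision. The two-class head, batch shape, and random inputs are assumptions for illustration; the paper's full pipeline (ResNet-101 features, CTPOA tuning, and AES-protected transfer) is not reproduced.

```python
import torch
from torchvision.models import mobilenet_v3_small

def min_max_scale(x: torch.Tensor) -> torch.Tensor:
    # rescale the batch to [0, 1], as in Min-Max normalization
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

model = mobilenet_v3_small(num_classes=2)           # hypothetical head: benign vs. DDoS
model.eval()

batch = min_max_scale(torch.rand(4, 3, 224, 224))   # stand-in for preprocessed traffic features
with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1))                         # predicted class per sample
```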

RevDate: 2025-05-31

Alkhalifa AK, Aljebreen M, Alanazi R, et al (2025)

Mitigating malicious denial of wallet attack using attribute reduction with deep learning approach for serverless computing on next generation applications.

Scientific reports, 15(1):18720.

Denial of Wallet (DoW) attacks are a kind of cyberattack that aims to exhaust a victim's financial resources by driving up costs in their serverless computing or cloud environments. These threats chiefly affect serverless architectures owing to features such as auto-scaling, pay-as-you-go billing, cost amplification, and limited control. Serverless computing, or Function-as-a-Service (FaaS), is a cloud computing (CC) model that permits developers to build and run applications without a conventional server infrastructure. Deep learning (DL), a part of machine learning (ML), has emerged as an effective tool in cybersecurity, permitting more effective recognition of anomalous behaviour and classification of patterns indicative of threats. This study proposes a Mitigating Malicious Denial of Wallet Attack using Attribute Reduction with Deep Learning (MMDoWA-ARDL) approach for serverless computing on next-generation applications. The primary purpose of the MMDoWA-ARDL approach is to propose a novel framework that effectively detects and mitigates malicious attacks in serverless environments using an advanced deep-learning model. Initially, the presented MMDoWA-ARDL model applies data pre-processing using Z-score normalization to transform input data into a valid format. Furthermore, a cuckoo search optimization (CSO)-based feature selection process efficiently identifies the attributes most indicative of potential malicious activity. For DoW attack mitigation, the bi-directional long short-term memory multi-head self-attention network (BMNet) method is employed. Finally, hyperparameter tuning is accomplished with the secretary bird optimizer algorithm (SBOA) to enhance the classification outcomes of the BMNet model. A wide-ranging experimental investigation on a benchmark dataset exhibits the superior performance of the proposed MMDoWA-ARDL technique, which attained a superior accuracy of 99.39% over existing techniques.

RevDate: 2025-05-31

Heydari S, QH Mahmoud (2025)

Tiny Machine Learning and On-Device Inference: A Survey of Applications, Challenges, and Future Directions.

Sensors (Basel, Switzerland), 25(10):.

The growth in artificial intelligence and its applications has led to increased data processing and inference requirements. Traditional cloud-based inference solutions are often used but may prove inadequate for applications requiring near-instantaneous response times. This review examines Tiny Machine Learning, also known as TinyML, as an alternative to cloud-based inference. The review focuses on applications where transmission delays make traditional Internet of Things (IoT) approaches impractical, thus necessitating a solution that uses TinyML and on-device inference. This study, which follows the PRISMA guidelines, covers TinyML's use cases for real-world applications by analyzing experimental studies and synthesizing current research on the characteristics of TinyML experiments, such as machine learning techniques and the hardware used for experiments. This review identifies existing gaps in research as well as the means to address these gaps. The review findings suggest that TinyML has a strong record of real-world usability and offers advantages over cloud-based inference, particularly in environments with bandwidth constraints and use cases that require rapid response times. This review discusses the implications of TinyML's experimental performance for future research on TinyML applications.

RevDate: 2025-05-31

Jin W, A Rezaeipanah (2025)

Dynamic task allocation in fog computing using enhanced fuzzy logic approaches.

Scientific reports, 15(1):18513.

Fog computing extends cloud services to the edge of the network, enabling low-latency processing and improved resource utilization, which are crucial for real-time Internet of Things (IoT) applications. However, efficient task allocation remains a significant challenge due to the dynamic and heterogeneous nature of fog environments. Traditional task scheduling methods often fail to manage uncertainty in task requirements and resource availability, leading to suboptimal performance. In this paper, we propose a novel approach, DTA-FLE (Dynamic Task Allocation in Fog computing using a Fuzzy Logic Enhanced approach), which leverages fuzzy logic to handle the inherent uncertainty in task scheduling. Our method dynamically adapts to changing network conditions, optimizing task allocation to improve efficiency, reduce latency, and enhance overall system performance. Unlike conventional approaches, DTA-FLE introduces a novel hierarchical scheduling mechanism that dynamically adapts to real-time network conditions using fuzzy logic, ensuring optimal task allocation and improved system responsiveness. Through simulations using the iFogSim framework, we demonstrate that DTA-FLE outperforms conventional techniques in terms of execution time, resource utilization, and responsiveness, making it particularly suitable for real-time IoT applications within hierarchical fog-cloud architectures.

RevDate: 2025-05-31

Ruambo FA, Masanga EE, Lufyagila B, et al (2025)

Brute-force attack mitigation on remote access services via software-defined perimeter.

Scientific reports, 15(1):18599.

Remote Access Services (RAS), including protocols such as Remote Desktop Protocol (RDP), Secure Shell (SSH), Virtual Network Computing (VNC), Telnet, File Transfer Protocol (FTP), and Secure File Transfer Protocol (SFTP), are essential to modern network infrastructures, particularly with the rise of remote work and cloud adoption. However, their exposure significantly increases the risk of brute-force attacks (BFA), in which adversaries systematically guess credentials to gain unauthorized access. Traditional defenses like IP blocklisting and multifactor authentication (MFA) often struggle with scalability and adaptability to distributed attacks. This study introduces a zero-trust-aligned Software-Defined Perimeter (SDP) architecture that integrates Single Packet Authorization (SPA) for service cloaking and Connection Tracking (ConnTrack) for real-time session analysis. A Docker-based prototype was developed and tested, demonstrating that no successful BFA attempts were observed, that latency was reduced by more than 10% across all evaluated RAS protocols, and that system CPU utilization was reduced by 48.7% under attack conditions without impacting normal throughput. The architecture also proved effective against connection-oriented attacks, including port scanning and distributed denial of service (DDoS) attacks. By embedding proactive defense at the authentication layer, the proposed architecture offers a scalable and efficient security framework, advancing zero-trust implementations and delivering practical, low-overhead protection for RAS against evolving cyber threats.
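
The paper's defense is SPA cloaking plus ConnTrack session analysis rather than simple rate limiting, but the underlying detection idea (count authentication failures per source within a time window and act once a threshold is crossed) can be sketched in a few lines. The window length, threshold, and function names below are assumptions for illustration.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60     # illustrative sliding window
MAX_FAILURES = 5        # illustrative threshold before blocking
failures = defaultdict(deque)

def register_failure(source_ip: str, now: float | None = None) -> bool:
    """Record a failed login; return True when the source should be blocked."""
    now = time.time() if now is None else now
    q = failures[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:   # drop attempts outside the window
        q.popleft()
    return len(q) > MAX_FAILURES

# example: the sixth failure within a minute triggers a block decision
for i in range(6):
    blocked = register_failure("203.0.113.7", now=1_000.0 + i)
print("block:", blocked)
```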

RevDate: 2025-06-04
CmpDate: 2025-05-26

Marini S, Barquero A, Wadhwani AA, et al (2024)

OCTOPUS: Disk-based, Multiplatform, Mobile-friendly Metagenomics Classifier.

AMIA ... Annual Symposium proceedings. AMIA Symposium, 2024:798-807.

Portable genomic sequencers such as Oxford Nanopore's MinION enable real-time applications in clinical and environmental health. However, there is a bottleneck in the downstream analytics when bioinformatics pipelines are unavailable, e.g., when cloud processing is unreachable due to absence of an Internet connection, or when only low-end computing devices can be carried on site. Here we present platform-friendly software for portable metagenomic analysis of Nanopore data: the Oligomer-based Classifier of Taxonomic Operational and Pan-genome Units via Singletons (OCTOPUS). OCTOPUS is written in Java, reimplements several features of the popular Kraken2 and KrakenUniq software, and adds original components for improving metagenomics classification on incomplete/sampled reference databases, making it ideal for running on smartphones or tablets. OCTOPUS obtains sensitivity and precision comparable to Kraken2, while dramatically decreasing (4- to 16-fold) the false positive rate, and yields high correlation on real-world data. OCTOPUS is available along with customized databases at https://github.com/DataIntellSystLab/OCTOPUS and https://github.com/Ruiz-HCI-Lab/OctopusMobile.

RevDate: 2025-06-13

Adams MCB, Hudson C, Chen W, et al (2025)

Automated multi-instance REDCap data synchronization for NIH clinical trial networks.

JAMIA open, 8(3):ooaf036.

OBJECTIVES: The main goal is to develop an automated process for connecting Research Electronic Data Capture (REDCap) instances in a clinical trial network to allow for deidentified transfer of research surveys to cloud computing data commons for discovery.

MATERIALS AND METHODS: To automate the process of consolidating data from remote clinical trial sites into 1 dataset at the coordinating/storage site, we developed a Hypertext Preprocessor script that operates in tandem with a server-side scheduling system (eg, Cron) to set up practical data extraction schedules for each remote site.

RESULTS: The REDCap Application Programming Interface (API) Connection provides a novel implementation for automated synchronization between multiple REDCap instances across a distributed clinical trial network, enabling secure and efficient data transfer between study sites and coordination centers. Additionally, the protocol checker allows for automated reporting on conforming to planned data library protocols.

DISCUSSION: Working from a shared and accepted core library of REDCap surveys was critical to the success of this implementation. This model also facilitates Institutional Review Board (IRB) approvals because the coordinating center can designate which surveys and data elements are to be transferred. Hence, protected health information can be transformed or withheld depending on the permission given by the IRB at the coordinating center level. For the NIH HEAL clinical trial networks, this unified data collection works toward the goal of creating a deidentified dataset for transfer to a Gen3 data commons.

CONCLUSION: We established several simple and research-relevant tools, REDCap API Connection and REDCap Protocol Check, to support the emerging needs of clinical trial networks with increased data harmonization complexity.
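
The authors implemented the synchronization as a PHP (Hypertext Preprocessor) script run on a schedule; a minimal sketch of the same record-export call expressed in Python with the requests library is shown below, with a placeholder API URL and token, since the real endpoints, tokens, and survey lists are site-specific.

```python
import requests

REDCAP_URL = "https://redcap.example.org/api/"   # placeholder remote-site URL
API_TOKEN = "SITE_SPECIFIC_TOKEN"                # placeholder project API token

payload = {
    "token": API_TOKEN,
    "content": "record",    # export records
    "format": "json",
    "type": "flat",
    "rawOrLabel": "raw",
}

# pull records from a remote instance; a scheduler (e.g., cron) would run this per site
records = requests.post(REDCAP_URL, data=payload, timeout=60).json()
print(f"exported {len(records)} records")
```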

RevDate: 2025-05-27

Umezawa A, Nakamura K, Kasahara M, et al (2025)

Innovative Artificial Intelligence System in the Children's Hospital in Japan.

JMA journal, 8(2):354-360.

The evolution of innovative artificial intelligence (AI) systems in pediatric hospitals in Japan promises benefits for patients and healthcare providers. We actively contribute to advancements in groundbreaking medical treatments by leveraging deep learning technology and using vast medical datasets. Our team of data scientists closely collaborates with departments within the hospital. Our research themes based on deep learning are wide-ranging, including acceleration of pathological diagnosis using image data, distinguishing of bacterial species, early detection of eye diseases, and prediction of genetic disorders from physical features. Furthermore, we implement Information and Communication Technology to diagnose pediatric cancer. Moreover, we predict immune responses based on genomic data and diagnose autism by quantifying behavior and communication. Our expertise extends beyond research to provide comprehensive AI development services, including data collection, annotation, high-speed computing, utilization of machine learning frameworks, design of web services, and containerization. In addition, as active members of medical AI platform collaboration partnerships, we provide unique data and analytical technologies to facilitate the development of AI development platforms. Furthermore, we address the challenges of securing medical data in the cloud to ensure compliance with stringent confidentiality standards. We will discuss AI's advancements in pediatric hospitals and their challenges.

RevDate: 2025-05-30

Luo Q, Lan C, Yu T, et al (2025)

Federated learning-based non-intrusive load monitoring adaptive to real-world heterogeneities.

Scientific reports, 15(1):18223.

Non-intrusive load monitoring (NILM) is a key way to cost-effectively acquire appliance-level information in advanced metering infrastructure (AMI). Recently, federated learning has enabled NILM to learn from decentralized meter data while preserving privacy. However, as real-world heterogeneities in electricity consumption data, local models, and AMI facilities cannot be eliminated in advance, federated learning-based NILM (FL-NILM) may underperform or even fail. Therefore, we propose a FL-NILM method adaptive to these heterogeneities. To fully leverage diverse electricity consumption data, dynamic clustering is integrated into cloud aggregation to hierarchically mitigate the global-local bias in knowledge required for NILM. Meanwhile, adaptive model initialization is applied in local training to balance biased global knowledge with local accumulated knowledge, enhancing the learning of heterogeneous data. To further handle heterogeneous local NILM models, homogeneous proxy models are used for global-local iteration through knowledge distillation. In addition, a weighted aggregation mechanism with a cache pool is designed for adapting to asynchronous iteration caused by heterogeneous AMI facilities. Experiments on public datasets show that the proposed method outperforms existing methods in both synchronous and asynchronous settings. The proposed method's advantages in computing and communication complexity are also discussed.
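
The cloud aggregation step in FL-NILM builds on weighted averaging of client updates. The minimal FedAvg-style sketch below (Python with NumPy) weights each client's parameters by its number of local samples; this is only the baseline that the paper's dynamic clustering, proxy-model distillation, and asynchronous cache pool extend, and all names and values here are illustrative.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (weights = local sample counts)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)            # shape: (clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# three hypothetical households with different amounts of metering data
params = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.1, 1.3])]
sizes = [1200, 300, 500]
global_params = fedavg(params, sizes)
print(global_params)
```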

RevDate: 2025-05-25
CmpDate: 2025-05-23

Ayid YM, Fouad Y, Kaddes M, et al (2025)

An intelligent framework for crop health surveillance and disease management.

PloS one, 20(5):e0324347.

The agricultural sector faces critical challenges, including significant crop losses due to undetected plant diseases, inefficient monitoring systems, and delays in disease management, all of which threaten food security worldwide. Traditional approaches to disease detection are often labor-intensive, time-consuming, and prone to errors, making early intervention difficult. This paper proposes an intelligent framework for automated crop health monitoring and early disease detection to overcome these limitations. The system leverages deep learning, cloud computing, embedded devices, and the Internet of Things (IoT) to provide real-time insights into plant health over large agricultural areas. The primary goal is to enhance early detection accuracy and recommend effective disease management strategies, including crop rotation and targeted treatment. Additionally, environmental parameters such as temperature, humidity, and water levels are continuously monitored to aid in informed decision-making. The proposed framework incorporates Convolutional Neural Network (CNN), MobileNet-1, MobileNet-2, Residual Network (ResNet-50), and ResNet-50 with InceptionV3 to ensure precise disease identification and improved agricultural productivity.

RevDate: 2025-05-23

Zhang X, Lyu Z, Wang Y, et al (2025)

A Joint Geometric Topological Analysis Network (JGTA-Net) for Detecting and Segmenting Intracranial Aneurysms.

IEEE transactions on bio-medical engineering, PP: [Epub ahead of print].

OBJECTIVE: The rupture of intracranial aneurysms leads to subarachnoid hemorrhage. Detecting intracranial aneurysms before rupture and stratifying their risk is critical in guiding preventive measures. Point-based aneurysm segmentation provides a plausible pathway for automatic aneurysm detection. However, challenges in existing segmentation methods motivate the proposed work.

METHODS: We propose a dual-branch network model (JGTA-Net) for accurately detecting aneurysms. JGTA-Net employs a hierarchical geometric feature learning framework to extract local contextual geometric information from the point cloud representing intracranial vessels. Building on this, we integrated a topological analysis module that leverages persistent homology to capture complex structural details of 3D objects, filtering out short-lived noise to enhance the overall topological invariance of the aneurysms. Moreover, we refined the segmentation output by quantitatively computing multi-scale topological features and introducing a topological loss function to better preserve the correct topological relationships. Finally, we designed a feature fusion module that integrates information extracted from different modalities and receptive fields, enabling effective multi-source information fusion.

RESULTS: Experiments conducted on the IntrA dataset demonstrated the superiority of the proposed network model, yielding state-of-the-art segmentation results (e.g., Dice and IOU are approximately 0.95 and 0.90, respectively). Our IntrA results were confirmed by testing on two independent datasets: One with comparable lengths to the IntrA dataset and the other with longer and more complex vessels.

CONCLUSIONS: The proposed JGTA-Net model outperformed other recently published methods (> 10% in DSC and IOU), showing our model's strong generalization capabilities.

SIGNIFICANCE: The proposed work can be integrated into a large deep-learning-based system for assessing brain aneurysms in the clinical workflow.

RevDate: 2025-06-03

Zhao T, Low B, Shen Q, et al (2025)

Exposome-Scale Investigation of Cl-/Br-Containing Chemicals Using High-Resolution Mass Spectrometry, Multistage Machine Learning, and Cloud Computing.

Analytical chemistry, 97(21):11099-11109.

Over 70% of organic halogens, representing chlorine- and bromine-containing disinfection byproducts (Cl-/Br-DBPs), remain unidentified after 50 years of research. This work introduces a streamlined and cloud-based exposomics workflow that integrates high-resolution mass spectrometry (HRMS) analysis, multistage machine learning, and cloud computing for efficient analysis and characterization of Cl-/Br-DBPs. In particular, the multistage machine learning structure employs progressively different heavy isotopic peaks at each layer and captures the distinct isotopic characteristics of nonhalogenated compounds and Cl-/Br-compounds at different halogenation levels. This innovative approach enables the recognition of 22 types of Cl-/Br-compounds with up to 6 Br and 8 Cl atoms. To address the data imbalance among different classes, particularly the limited number of heavily chlorinated and brominated compounds, data perturbation is performed to generate hypothetical/synthetic molecular formulas containing multiple Cl and Br atoms, facilitating data augmentation. To further benefit the environmental chemistry community with limited computational experience and hardware access, the above innovations are incorporated into HalogenFinder (http://www.halogenfinder.com/), a user-friendly, web-based platform for Cl-/Br-compound characterization, with statistical analysis support via MetaboAnalyst. In the benchmarking, HalogenFinder outperformed two established tools, achieving a higher recognition rate for 277 authentic Cl-/Br-compounds and uniquely identifying the number of Cl/Br atoms. In laboratory tests of DBP mixtures, it identified 72 Cl-/Br-DBPs with proposed structures, of which eight were confirmed with chemical standards. A retrospective analysis of 2022 finished water HRMS data revealed insightful temporal trends in Cl-DBP features. These results demonstrate HalogenFinder's effectiveness in advancing Cl-/Br-compound identification for environmental science and exposomics.

RevDate: 2025-05-19

Ngo AT, Heng CS, Chattopadhyay N, et al (2025)

Persistence of Backdoor-Based Watermarks for Neural Networks: A Comprehensive Evaluation.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

Deep neural networks (DNNs) have gained considerable traction in recent years due to the unparalleled results they achieve. However, training such sophisticated models is resource-intensive, leading many to consider DNNs the intellectual property (IP) of their owners. In this era of cloud computing, high-performance DNNs are often deployed across the Internet so that people can access them publicly. As such, DNN watermarking schemes, especially backdoor-based watermarks, have been actively developed in recent years to preserve proprietary rights. Nonetheless, there is much uncertainty about the robustness of existing backdoor watermark schemes toward both adversarial attacks and unintended means such as fine-tuning neural network models, in part because no complete guarantee of robustness can be given for backdoor-based watermarks. In this article, we extensively evaluate the persistence of recent backdoor-based watermarks within neural networks under fine-tuning, and we propose a novel data-driven idea to restore the watermark after fine-tuning without exposing the trigger set. Our empirical results show that by solely introducing training data after fine-tuning, the watermark can be restored if model parameters do not shift dramatically during fine-tuning. Depending on the types of trigger samples used, trigger accuracy can be reinstated to up to 100%. This study further explores how the restoration process works using loss landscape visualization, as well as the idea of introducing training data in the fine-tuning stage to alleviate watermark vanishing.

RevDate: 2025-05-22

Ren Z, Zhang Z, Zhuge Y, et al (2025)

Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides.

Nano-micro letters, 17(1):261.

The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (< 10 ns), and minimal energy consumption (< 0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.

RevDate: 2025-05-21
CmpDate: 2025-05-19

McHugh CP, Clement MHS, M Phatak (2025)

AD Workbench: Transforming Alzheimer's research with secure, global, and collaborative data sharing and analysis.

Alzheimer's & dementia : the journal of the Alzheimer's Association, 21(5):e70278.

INTRODUCTION: The Alzheimer's Disease Data Initiative (AD Data Initiative) is a global coalition of partners accelerating scientific discoveries in Alzheimer's disease (AD) and related dementias (ADRD) by breaking down data silos, eliminating barriers to research, and fostering collaboration among scientists studying these issues.

METHODS: The flagship product of the AD Data Initiative technical suite is AD Workbench, a secure, cloud-based environment that enables global access, analysis, and sharing of datasets, as well as interoperability with other key data platforms.

RESULTS: As of April 7, 2025, AD Workbench has 6178 registered users from 115 countries, including 886 users from 60 low- and middle-income countries. On average, more than 500 users, including over 100 new users, log in each month to discover data and conduct integrative analyses.

DISCUSSION: By prioritizing interoperability and robust security within a collaborative framework, AD Workbench is well positioned to drive advancements in AD treatments and diagnostic tools.

HIGHLIGHTS: Data sharing; interoperability; cloud-based analytics; collaborative workspace.

RevDate: 2025-05-21
CmpDate: 2025-05-18

Sen S, Vairagare I, Gosai J, et al (2025)

RABiTPy: an open-source Python software for rapid, AI-powered bacterial tracking and analysis.

BMC bioinformatics, 26(1):127.

Bacterial tracking is crucial for understanding the mechanisms governing motility, chemotaxis, cell division, biofilm formation, and pathogenesis. Although modern microscopy and computing have enabled the collection of large datasets, many existing tools struggle with big data processing or with accurately detecting, segmenting, and tracking bacteria of various shapes. To address these issues, we developed RABiTPy, an open-source Python software pipeline that integrates traditional and artificial intelligence-based segmentation with tracking tools within a user-friendly framework. RABiTPy runs interactively in Jupyter notebooks and supports numerous image and video formats. Users can select from adaptive, automated thresholding, or AI-based segmentation methods, fine-tuning parameters to fit their needs. The software offers customizable parameters to enhance tracking efficiency, and its streamlined handling of large datasets offers an alternative to existing tracking software by emphasizing usability and modular integration. RABiTPy supports GPU and CPU processing as well as cloud computing. It offers comprehensive spatiotemporal analyses that include trajectories, motile speeds, mean squared displacement, and turning angles, while providing a variety of visualization options. With its scalable and accessible platform, RABiTPy empowers researchers, even those with limited coding experience, to analyze bacterial physiology and behavior more effectively. By reducing technical barriers, this tool has the potential to accelerate discoveries in microbiology.

RevDate: 2025-05-17
CmpDate: 2025-05-17

Haddad T, Kumarapeli P, de Lusignan S, et al (2025)

A Sustainable Future in Digital Health: Leveraging Environmentally Friendly Architectural Tactics for Sustainable Data Processing.

Studies in health technology and informatics, 327:713-717.

The rapid growth of big data in healthcare necessitates optimising data processing to reduce its environmental impact. This paper proposes a pilot architectural framework to evaluate the sustainability of a Big Healthcare Data (BHD) system using Microservices Architecture (MSA). The goal is to enhance MSA's architectural tactics by incorporating environmentally friendly metrics into healthcare systems. This is achieved by adopting energy and carbon efficiency models, alongside exploring innovative architectural strategies. The framework, based on recent research, manipulates cloud-native system architecture by using a controller to adjust microservice deployment through real-time monitoring and modelling. This approach demonstrates how sustainability-driven metrics can be applied at different abstraction levels to estimate environmental impact from multiple perspectives.

RevDate: 2025-05-22

Radovanovic D, Zanforlin A, Smargiassi A, et al (2025)

CHEst PHysical Examination integrated with UltraSound - Phase (CHEPHEUS1). A survey of Accademia di Ecografia Toracica (AdET).

Multidisciplinary respiratory medicine, 20(1):.

BACKGROUND: The chest physical exam (CPE) is based on the four pillars of classical semiotics. However, CPE's sensitivity and specificity are low and are affected by operators' skills. The aim of this work was to explore the contribution of chest ultrasound (US) to the traditional CPE.

METHODS: For this purpose, a survey was submitted to US users. They were asked to rate the usefulness of classical semiotics and chest US in evaluating each item of CPE pillars. The study was conducted and described according to the STROBE checklist. The study used the freely available online survey cloud-web application (Google Forms, Google Ireland Ltd, Mountain View, CA, USA).

RESULTS: The results showed a tendency to prefer chest US to palpation and percussion, suggesting a possible future approach based on inspection, auscultation and palpatory ultrasound evaluation.

CONCLUSION: The results of our survey introduce, for the first time, the role of ultrasound as a pillar of the physical examination. Our CHEPHEUS project aims to study and propose a new way of performing the physical exam in the future.

RevDate: 2025-05-14

Gulkesen KH, ET Sonuvar (2025)

Data Privacy in Medical Informatics and Electronic Health Records: A Bibliometric Analysis.

Health care analysis : HCA : journal of health philosophy and policy [Epub ahead of print].

This study aims to evaluate scientific publications on "Medical Informatics" and "Data Privacy" using a bibliometric approach to identify research trends, the most studied topics, and the countries and institutions with the highest publication output. The search was carried out utilizing the WoS Clarivate Analytics tool across SCIE journals. Subsequently, text mining, keyword clustering, and data visualization were applied through the use of VOSviewer and Tableau Desktop software. Between 1975 and 2023, a total of 7,165 articles were published on the topic of data privacy. The number of articles has been increasing each year. The text mining and clustering analysis identified eight main clusters in the literature: (1) Mobile Health/Telemedicine/IOT, (2) Security/Encryption/Authentication, (3) Big Data/AI/Data Science, (4) Anonymization/Digital Phenotyping, (5) Genomics/Biobank, (6) Ethics, (7) Legal Issues, (8) Cloud Computing. On a country basis, the United States was identified as the most active country in this field, producing the most publications and receiving the highest number of citations. China, the United Kingdom, Canada, and Australia also emerged as significant countries. Among these clusters, "Mobile Health/Telemedicine/IOT," "Security/Encryption/Authentication," and "Cloud Computing" technologies stood out as the most prominent and extensively studied topics in the intersection of medical informatics and data privacy.

RevDate: 2025-05-16

Hirsch M, Mateos C, TA Majchrzak (2025)

Exploring Smartphone-Based Edge AI Inferences Using Real Testbeds.

Sensors (Basel, Switzerland), 25(9):.

The increasing availability of lightweight pre-trained models and AI execution frameworks is causing edge AI to become ubiquitous. Particularly, deep learning (DL) models are being used in computer vision (CV) for performing object recognition and image classification tasks in various application domains requiring prompt inferences. Regarding edge AI task execution platforms, some approaches show a strong dependency on cloud resources to complement the computing power offered by local nodes. Other approaches distribute workload horizontally, i.e., by harnessing the power of nearby edge nodes. Many of these efforts experiment with real settings comprising SBC (Single-Board Computer)-like edge nodes only, but few of these consider nomadic hardware such as smartphones. Given the huge popularity of smartphones worldwide and the unlimited scenarios where smartphone clusters could be exploited for providing computing power, this paper sheds some light on the following question: Is smartphone-based edge AI a competitive approach for real-time CV inferences? To empirically answer this, we use three pre-trained DL models and eight heterogeneous edge nodes, including five low/mid-end smartphones and three SBCs, and compare the performance achieved using workloads from three image stream processing scenarios. Experiments were run with the help of a toolset designed for reproducing battery-driven edge computing tests. We compared the latency and energy efficiency achieved by using either several smartphone cluster testbeds or SBCs only. Additionally, for battery-driven settings, we include metrics to measure how workload execution impacts smartphone battery levels. Based on the computing capability shown in our experiments, we conclude that smartphone clusters can provide valuable resources for expanding edge AI into application scenarios requiring real-time performance.

RevDate: 2025-05-16

Mo Y, Chen P, Zhou W, et al (2025)

Enhanced Cloud Detection Using a Unified Multimodal Data Fusion Approach in Remote Images.

Sensors (Basel, Switzerland), 25(9):.

To address the complexity of network architecture design and the low computational efficiency caused by variations in the number of modalities in multimodal cloud detection tasks, this paper proposes an efficient and unified multimodal cloud detection model, M2Cloud, which can process any number of modalities. The core innovation of M2Cloud lies in its novel multimodal data fusion method. This method avoids architectural changes for new modalities, thereby significantly reducing incremental computing costs and enhancing overall efficiency. Furthermore, the designed multimodal data fusion module possesses strong generalization capabilities and can be seamlessly integrated into other network architectures in a plug-and-play manner, greatly enhancing the module's practicality and flexibility. To address the challenge of unified multimodal feature extraction, we adopt two key strategies: (1) constructing feature extraction modules with shared but independent weights for each modality to preserve the inherent features of each modality; (2) utilizing cosine similarity to adaptively learn complementary features between different modalities, thereby reducing redundant information. Experimental results demonstrate that M2Cloud achieves or even surpasses state-of-the-art (SOTA) performance on the public multimodal datasets WHUS2-CD and WHUS2-CD+, verifying its effectiveness in the unified multimodal cloud detection task. The research presented in this paper offers new insights and technical support for the field of multimodal data fusion and cloud detection, and holds significant theoretical and practical value.

RevDate: 2025-05-16

Jilcha LA, Kim DH, J Kwak (2025)

Temporal Decay Loss for Adaptive Log Anomaly Detection in Cloud Environments.

Sensors (Basel, Switzerland), 25(9):.

Log anomaly detection in cloud computing environments is essential for maintaining system reliability and security. While sequence modeling architectures such as LSTMs and Transformers have been widely employed to capture temporal dependencies in log messages, their effectiveness deteriorates in zero-shot transfer scenarios due to distributional shifts in log structures, terminology, and event frequencies, as well as minimal token overlap across datasets. To address these challenges, we propose an effective detection approach integrating a domain-specific pre-trained language model (PLM) fine-tuned on cybersecurity-adjacent data with a novel loss function, Loss with Decaying Factor (LDF). LDF introduces an exponential time decay mechanism into the training objective, ensuring a dynamic balance between historical context and real-time relevance. Unlike traditional sequence models that often overemphasize outdated information and impose high computational overhead, LDF constrains the training process by dynamically weighing log messages based on their temporal proximity, thereby aligning with the rapidly evolving nature of cloud computing environments. Additionally, the domain-specific PLM mitigates semantic discrepancies by improving the representation of log data across heterogeneous datasets. Extensive empirical evaluations on two supercomputing log datasets demonstrate that this approach substantially enhances cross-dataset anomaly detection performance. The main contributions of this study include: (1) the introduction of a Loss with Decaying Factor (LDF) to dynamically balance historical context with real-time relevance; and (2) the integration of a domain-specific PLM for enhancing generalization in zero-shot log anomaly detection across heterogeneous cloud environments.
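
The core of LDF, as described, is an exponential time-decay weight applied to each log message's contribution to the training objective. A minimal sketch of one plausible form in PyTorch is shown below; the decay rate, the binary-cross-entropy base loss, and the normalization by the summed weights are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def time_decayed_loss(logits, labels, ages_seconds, decay_rate=1e-4):
    """Binary anomaly loss where older log messages receive exponentially smaller weight."""
    weights = torch.exp(-decay_rate * ages_seconds)           # newer entries -> weight near 1
    per_sample = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    return (weights * per_sample).sum() / weights.sum()

# illustrative batch: 4 log messages of increasing age
logits = torch.tensor([0.3, -1.2, 2.0, 0.1])
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
ages = torch.tensor([5.0, 60.0, 3_600.0, 86_400.0])           # seconds since each message
print(time_decayed_loss(logits, labels, ages))
```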

RevDate: 2025-05-17
CmpDate: 2025-05-14

R D, PK T S (2025)

Dual level dengue diagnosis using lightweight multilayer perceptron with XAI in fog computing environment and rule based inference.

Scientific reports, 15(1):16548.

Over the last fifty years, arboviral infections have made an unparalleled contribution to worldwide disability and morbidity, with globalization, population growth, and unplanned urbanization as the main causes. Dengue is regarded as the most significant arboviral illness among them because of its continued growth in prevalence. The dengue virus (DenV) is mostly transmitted to humans by Aedes mosquitoes, and infection can have serious adverse effects. To keep the disease under control, some of the preventive measures implemented by different countries need to be updated. Diagnosis is typically manual, and its accuracy depends on the experience of the healthcare professionals; during an outbreak, the sheer number of patients also leads to errors. Remote monitoring and massive data storage are therefore required. Cloud computing is one solution, but despite its potential for remote monitoring and storage it has significant latency, and diagnosis should be made as quickly as possible. Fog computing resolves this issue by significantly lowering latency and facilitating remote diagnosis. This study focuses on incorporating machine learning and deep learning techniques into the fog computing environment to improve the overall diagnostic efficiency of dengue by promoting remote diagnosis and speedy treatment. A dual-level dengue diagnosis framework is proposed. Level-1 diagnosis is based on the symptoms of the patients, which are sent from the edge layer to the fog, and is performed in the fog to manage storage and computation issues. An optimized and normalized lightweight MLP is proposed, along with preprocessing and feature reduction techniques, for the Level-1 diagnosis in the fog computing environment. The Pearson correlation coefficient is calculated between independent and target features to aid feature reduction. Techniques such as K-fold cross-validation, batch normalization, and grid search optimization are used to increase efficiency, and a variety of metrics are computed to assess the effectiveness of the model. Since the suggested model is a "black box", explainable artificial intelligence (XAI) tools such as SHAP and LIME have been used to help explain its predictions. An exceptional accuracy of 92% is attained on the small dataset using the proposed model, along with a precision of 100% and an F1 score of 90%. The fog layer then sends the list of probable cases to the edge layer, where Level-2 diagnosis is carried out using a rule-based inference method applied to the serological test reports of the patients flagged in Level-1. This study incorporates a dual-level diagnosis, which is not seen in recent studies; the majority of investigations end at Level 1. By using dual-level diagnosis and assisting in confirming the disease, this approach minimizes incorrect treatment and fatality rates.
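
The feature-reduction step described (Pearson correlation between each symptom feature and the target) can be sketched as below in Python with pandas; the file name, column names, and the 0.2 threshold are assumptions for illustration, not values from the study.

```python
import pandas as pd

# hypothetical symptom table with a binary target column "dengue_positive"
df = pd.read_csv("dengue_symptoms.csv")

# Pearson correlation of every numeric feature with the target
corr = df.corr(numeric_only=True)["dengue_positive"].drop("dengue_positive")

# keep only features whose absolute correlation clears an illustrative threshold
selected = corr[corr.abs() >= 0.2].sort_values(key=abs, ascending=False)
print(selected)
```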

RevDate: 2025-05-16
CmpDate: 2025-05-13

Kotan M, Faruk Seymen Ö, Çallı L, et al (2025)

A novel methodological approach to SaaS churn prediction using whale optimization algorithm.

PloS one, 20(5):e0319998.

Customer churn is a critical concern in the Software as a Service (SaaS) sector, potentially impacting long-term growth within the cloud computing industry. The scarcity of research on customer churn models in SaaS, particularly regarding diverse feature selection methods and predictive algorithms, highlights a significant gap. Addressing this would enhance academic discourse and provide essential insights for managerial decision-making. This study introduces a novel approach to SaaS churn prediction using the Whale Optimization Algorithm (WOA) for feature selection. Results show that WOA-reduced datasets improve processing efficiency and outperform full-variable datasets in predictive performance. The study evaluates a range of prediction techniques on three distinct datasets derived from over 1,000 users of a multinational SaaS company: the WOA-reduced dataset, the full-variable dataset, and the chi-squared-derived dataset. These three datasets were examined with the techniques most commonly used in the literature, namely k-nearest neighbors, Decision Trees, Naïve Bayes, Random Forests, and Neural Networks, and performance metrics such as Area Under the Curve, Accuracy, Precision, Recall, and F1 Score were used to measure classification success. The results demonstrate that the WOA-reduced dataset outperformed the full-variable and chi-squared-derived datasets across these performance metrics.
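
For readers unfamiliar with wrapper-style feature selection, the sketch below shows a deliberately simplified Whale Optimization Algorithm driving a binary feature mask, with cross-validated classifier accuracy as the fitness. It illustrates the general mechanism only; the paper's exact WOA variant, classifiers, and churn dataset are not reproduced.

```python
# Illustrative, simplified WOA wrapper for binary feature selection.
# Population size, iteration count, and the sigmoid binarization are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def woa_feature_selection(X, y, n_whales=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(-1, 1, (n_whales, dim))            # continuous positions
    binarize = lambda p: (1 / (1 + np.exp(-p)) > 0.5).astype(int)
    fit = np.array([fitness(binarize(p), X, y) for p in pos])
    best = pos[fit.argmax()].copy()
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                           # linearly decreases 2 -> 0
        for i in range(n_whales):
            r, l, p = rng.random(dim), rng.uniform(-1, 1), rng.random()
            A, C = 2 * a * r - a, 2 * r
            if p < 0.5:
                ref = best if np.all(np.abs(A) < 1) else pos[rng.integers(n_whales)]
                pos[i] = ref - A * np.abs(C * ref - pos[i])          # encircling / search
            else:
                D = np.abs(best - pos[i])
                pos[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best  # spiral update
        fit = np.array([fitness(binarize(q), X, y) for q in pos])
        if fit.max() > fitness(binarize(best), X, y):
            best = pos[fit.argmax()].copy()
    return binarize(best)                                # 1 = feature kept
```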

RevDate: 2025-05-13

Al-Rubaie A (2025)

From Cadavers to Codes: The Evolution of Anatomy Education Through Digital Technologies.

Medical science educator, 35(2):1101-1109.

This review examines the shift from traditional anatomy education to the integration of advanced digital technologies. With rapid advancements in digital tools, such as 3D models, virtual dissections, augmented reality (AR) and virtual reality (VR), anatomy education is increasingly adopting digital environments to enhance learning. These tools offer immersive, interactive experiences, supporting active learning and knowledge retention. Mobile technology and cloud computing have further increased accessibility, allowing flexible, self-paced learning. Despite challenges like educator resistance and institutional barriers, the continued innovation and integration of digital tools have the potential to transform anatomy education and improve medical outcomes.

RevDate: 2025-05-12

Lounissi E, Das SK, Peter R, et al (2025)

FunDa: scalable serverless data analytics and in situ query processing.

Journal of big data, 12(1):116.

The pay-what-you-use model of serverless Cloud computing (or serverless, for short) offers significant benefits to users. This computing paradigm is ideal for short-running, ephemeral tasks; however, it is not suitable for stateful, long-running tasks, such as complex data analytics and query processing. We propose FunDa, an on-premises serverless data analytics framework, which extends our previously proposed system for unified data analytics and in situ SQL query processing called DaskDB. Unlike existing serverless solutions, which struggle with stateful and long-running data analytics tasks, FunDa overcomes their limitations. Our ongoing research focuses on developing a robust architecture for FunDa, enabling true serverless execution in on-premises environments while remaining able to operate on a public Cloud, such as AWS Cloud. We have evaluated our system on several benchmarks with different scale factors. Our experimental results in both on-premises and AWS Cloud settings demonstrate FunDa's ability to support automatic scaling, low-latency execution of data analytics workloads, and greater flexibility for serverless users.

RevDate: 2025-06-13
CmpDate: 2025-06-13

Wertheim JO, Vasylyeva TI, Wood RJ, et al (2025)

Phylogeographic and genetic network assessment of COVID-19 mitigation protocols on SARS-CoV-2 transmission in university campus residences.

EBioMedicine, 116:105729.

BACKGROUND: Congregate living provides an ideal setting for SARS-CoV-2 transmission in which many outbreaks and superspreading events occurred. To avoid large outbreaks, universities turned to remote operations during the initial COVID-19 pandemic waves in 2020 and 2021. In late-2021, the University of California San Diego (UC San Diego) facilitated the return of students to campus with comprehensive testing, vaccination, masking, wastewater surveillance, and isolation policies.

METHODS: We performed molecular epidemiological and phylogeographic analysis of 4418 SARS-CoV-2 genomes sampled from UC San Diego students during the Omicron waves between December 2021 and September 2022, representing 58% of students with confirmed SARS-CoV-2 infection. We overlaid these analyses across on-campus residential information to assess the spread and persistence of SARS-CoV-2 within university residences.

FINDINGS: Within campus residences, SARS-CoV-2 transmission was frequent among students residing in the same room or suite. However, a quarter of pairs of suitemates with concurrent infections had distantly related viruses, suggesting separate sources of infection during periods of high incidence in the surrounding community. Students with concurrent infections residing in the same building were not at substantial increased probability of being members of the same transmission cluster. Genetic network and phylogeographic inference indicated that only between 3.1 and 12.4% of infections among students could be associated with transmission within buildings outside of individual suites. The only super-spreading event we detected was related to a large event outside campus residences.

INTERPRETATION: We found little evidence for sustained SARS-CoV-2 transmission within individual buildings, aside from students who resided in the same suite. Even in the face of heightened community transmission during the 2021-2022 Omicron waves, congregate living did not result in a heightened risk for SARS-CoV-2 transmission in the context of the multi-pronged mitigation strategy.

FUNDING: SEARCH Alliance: Centers for Disease Control and Prevention (CDC) BAA (75D301-22-R-72097) and the Google Cloud Platform Research Credits Program. J.O.W.: NIH-NIAID (R01 AI135992). T.I.V.: Branco Weiss Fellowship and Newkirk Fellowship. L.L.: University of California San Diego.

RevDate: 2025-05-12

Song Y (2025)

Privacy-preserving and verifiable spectral graph analysis in the cloud.

Scientific reports, 15(1):16237.

Resorting to cloud computing for spectral graph analysis on large-scale graph data is becoming increasingly popular. However, given the intrusive and opaque nature of cloud services, privacy and the possibility of a misbehaving cloud that returns incorrect results have raised serious concerns. Current schemes address privacy alone under the semi-honest model, disregarding the realistic threat posed by a misbehaving cloud that might skip computationally intensive operations for economic gain. Additionally, existing verifiable computation techniques prove inadequate for the specialized requirements of spectral graph analysis, either due to compatibility issues with privacy-preserving protocols or the excessive computational burden they impose on resource-constrained users. To tackle the above two issues in a holistic solution, we present, tailor, and evaluate PVG, a privacy-preserving and verifiable framework for spectral graph analytics in the cloud, for the first time. PVG concentrates on the eigendecomposition process, and provides strong privacy for graph data while enabling users to validate the accuracy of the outcomes yielded by the cloud. For this, we first design a new additive publicly verifiable computation algorithm, APVC, that can verify the accuracy of the result of the core operation (matrix multiplication) in eigendecomposition returned by cloud servers. We then propose three secure and verifiable functions for eigendecomposition based on APVC and lightweight cryptography. Extensive experiments on three manually generated and two real-world social graph datasets indicate that PVG's accuracy is consistent with plaintext computation, with practically affordable performance superior to prior art.
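
The verification problem that APVC addresses, checking a cloud-returned matrix product more cheaply than recomputing it, can be illustrated with the classic Freivalds check. The sketch below is only an analogy for that general idea; it is not the paper's APVC algorithm, which additionally provides public verifiability and privacy protection.

```python
# Illustrative only: Freivalds' probabilistic check that a cloud-returned product
# C equals A @ B in O(n^2) per round, instead of recomputing A @ B in O(n^3).
import numpy as np

def freivalds_check(A, B, C, rounds=20, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))           # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):    # mismatch -> C is wrong
            return False
    return True                                       # accepted with high probability

A = np.random.randint(0, 10, (64, 64))
B = np.random.randint(0, 10, (64, 64))
print(freivalds_check(A, B, A @ B))        # True
print(freivalds_check(A, B, A @ B + 1))    # False (detects a tampered result)
```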

RevDate: 2025-05-27

Siddiqui N, Lee B, Yi V, et al (2025)

Celeste: A cloud-based genomics infrastructure with variant-calling pipeline suited for population-scale sequencing projects.

medRxiv : the preprint server for health sciences.

BACKGROUND: The All of Us Research Program (All of Us) is one of the world's largest sequencing efforts that will generate genetic data for over one million individuals from diverse backgrounds. This historic megaproject will create novel research platforms that integrate an unprecedented amount of genetic data with longitudinal health information. Here, we describe the design of Celeste, a resilient, open-source cloud architecture for implementing genomics workflows that has successfully analyzed petabytes of participant genomic information for All of Us - thereby enabling other large-scale sequencing efforts with a comprehensive set of tools to power analysis. The Celeste infrastructure is tremendously scalable and has routinely processed fluctuating workloads of up to 9,000 whole-genome sequencing (WGS) samples for All of Us, monthly. It also lends itself to multiple projects. Serverless technology and container orchestration form the basis of Celeste's system for managing this volume of data.

RESULTS: In 12 months of production (within a single Amazon Web Services (AWS) Region), around 200 million serverless functions and over 20 million messages coordinated the analysis of 1.8 million bioinformatics, quality control, and clinical reporting jobs. Adapting WGS analysis to clinical projects requires adaptation of variant-calling methods to enrich the reliable detection of variants with known clinical importance. Thus, we also share the process by which we tuned the variant-calling pipeline in use by the multiple genome centers supporting All of Us to maximize precision and accuracy for low fraction variant calls with clinical significance.

CONCLUSIONS: When combined with hardware-accelerated implementations for genomic analysis, Celeste had far-reaching, positive implications for turn-around time, dynamic scalability, security, and storage of analysis for one hundred-thousand whole-genome samples and counting. Other groups may align their sequencing workflows to this harmonized pipeline standard, included within the Celeste framework, to meet clinical requisites for population-scale sequencing efforts. Celeste is available as an Amazon Web Services (AWS) deployment in GitHub, and includes command-line parameters and software containers.

RevDate: 2025-05-09

Adams JI, Kutschera E, Hu Q, et al (2025)

rMATS-cloud: Large-scale Alternative Splicing Analysis in the Cloud.

Genomics, proteomics & bioinformatics pii:8127209 [Epub ahead of print].

Although gene expression analysis pipelines are often a standard part of bioinformatics analysis, with many publicly available cloud workflows, cloud-based alternative splicing analysis tools remain limited. Our lab released rMATS in 2014 and has continuously maintained it, providing a fast and versatile solution for quantifying alternative splicing from RNA sequencing (RNA-seq) data. Here, we present rMATS-cloud, a portable version of the rMATS workflow that can be run in virtually any cloud environment suited for biomedical research. We compared the time and cost of running rMATS-cloud with two RNA-seq datasets on three different platforms (Cavatica, Terra, and Seqera). Our findings demonstrate that rMATS-cloud handles RNA-seq datasets with thousands of samples, and therefore is ideally suited for the storage capacities of many cloud data repositories. rMATS-cloud is available at https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-cwl, https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-wdl, and https://dockstore.org/workflows/github.com/Xinglab/rmats-turbo/rmats-turbo-nextflow.

RevDate: 2025-05-09

Lee H, Lee S, S Lee (2025)

Visibility-Aware Multi-View Stereo by Surface Normal Weighting for Occlusion Robustness.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

Recent learning-based multi-view stereo (MVS) still exhibits insufficient accuracy in large occlusion cases, such as environments with significant inter-camera distance or when capturing objects with complex shapes. This is because incorrect image features extracted from occluded areas serve as significant noise in the cost volume construction. To address this, we propose a visibility-aware MVS using surface normal weighting (SnowMVSNet) based on explicit 3D geometry. It selectively suppresses mismatched features in the cost volume construction by computing inter-view visibility. Additionally, we present a geometry-guided cost volume regularization that enhances true depth among depth hypotheses using a surface normal prior. We also propose intra-view visibility that distinguishes geometrically more visible pixels within a reference view. Using intra-view visibility, we introduce the visibility-weighted training and depth estimation methods. These methods enable the network to achieve accurate 3D point cloud reconstruction by focusing on visible regions. Based on simple inter-view and intra-view visibility computations, SnowMVSNet accomplishes substantial performance improvements relative to computational complexity, particularly in terms of occlusion robustness. To evaluate occlusion robustness, we constructed a multi-view human (MVHuman) dataset containing general human body shapes prone to self-occlusion. Extensive experiments demonstrated that SnowMVSNet significantly outperformed state-of-the-art methods in both low- and high-occlusion scenarios.

RevDate: 2025-05-08

Mareuil F, Torchet R, Ruano LC, et al (2025)

InDeepNet: a web platform for predicting functional binding sites in proteins using InDeep.

Nucleic acids research pii:8126900 [Epub ahead of print].

Predicting functional binding sites in proteins is crucial for understanding protein-protein interactions (PPIs) and identifying drug targets. While various computational approaches exist, many fail to assess PPI ligandability, which often involves conformational changes. We introduce InDeepNet, a web-based platform integrating InDeep, a deep-learning model for binding site prediction, with InDeepHolo, which evaluates a site's propensity to adopt a ligand-bound (holo) conformation. InDeepNet provides an intuitive interface for researchers to upload protein structures from in-house data, the Protein Data Bank (PDB), or AlphaFold, predicting potential binding sites for proteins or small molecules. Results are presented as interactive 3D visualizations via Mol*, facilitating structural analysis. With InDeepHolo, the platform helps select conformations optimal for small-molecule binding, improving structure-based drug design. Accessible at https://indeep-net.gpu.pasteur.cloud/, InDeepNet removes the need for specialized coding skills or high-performance computing, making advanced predictive models widely available. By streamlining PPI target assessment and ligandability prediction, it assists research and supports therapeutic development targeting PPIs.

RevDate: 2025-05-09

Tamantini C, Marra F, Di Tocco J, et al (2025)

SenseRisc: An instrumented smart shirt for risk prevention in the workplace.

Wearable technologies, 6:e20.

The integration of wearable smart garments with multiple sensors has gained momentum, enabling real-time monitoring of users' vital parameters across various domains. This study presents the development and validation of an instrumented smart shirt for risk prevention in workplaces designed to enhance worker safety and well-being in occupational settings. The proposed smart shirt is equipped with sensors for collecting electrocardiogram, respiratory waveform, and acceleration data, with signal conditioning electronics and Bluetooth transmission to the mobile application. The mobile application sends the data to the cloud platform for subsequent Preventive Risk Index (PRI) extraction. The proposed SenseRisc system was validated with eight healthy participants during the execution of different physically exerting activities to assess the capability of the system to capture physiological parameters and estimate the PRI of the worker, and user subjective perception of the instrumented intelligent shirt.

RevDate: 2025-05-10

Baskar R, E Mohanraj (2025)

Hybrid multi objective marine predators algorithm based clustering for lightweight resource scheduling and application placement in fog.

Scientific reports, 15(1):15953.

The Internet of Things (IoT) has boosted fog computing, which complements the cloud and is critical for applications that require close user proximity. Efficient allocation of IoT applications to the fog, together with fog device scheduling, enables realistic deployment of IoT applications in the fog environment. The scheduling problem is multi-objective in nature, since it must avoid resource waste, limit network latency, and maximise Quality of Service (QoS) on fog nodes. In this research, the Hybrid Multi-Objective Marine Predators Algorithm-based Clustering and Fog Picker (HMMPACFP) technique is developed as a combinatorial model for tackling the problem of fog node allocation, with the goal of achieving dynamic scheduling using lightweight characteristics. Fog Picker is used to allocate IoT components to fog nodes based on QoS parameters. Simulation trials of the proposed HMMPACFP scheme using iMetal and iFogSim, evaluated with Hypervolume (HV) and Inverted Generational Distance (IGD), demonstrated its superiority over the benchmarked methodologies used for evaluation. The combination of Fog Picker with the proposed HMMPACFP scheme resulted in 32.18% faster convergence, 26.92% more solution variety, and a better balance between exploration and exploitation rates.

RevDate: 2025-05-06
CmpDate: 2025-05-06

Mohanty S, PC Pandey (2025)

Spatiotemporal dynamics of Ramsar wetlands and freshwater resources: Technological innovations for ecosystem conservation.

Water environment research : a research publication of the Water Environment Federation, 97(5):e70072.

Aquatic ecosystems, particularly wetlands, are vulnerable to natural and anthropogenic influences. This study examines the Saman Bird Sanctuary and Keetham Lake, both Ramsar sites, using advanced remote sensing for water occurrence, land use and land cover (LULC), and water quality assessments. Sentinel data, processed in cloud computing, enabled land-use classification, water boundary delineation, and seasonal water occurrence mapping. A combination of the Modified Normalized Difference Water Index (MNDWI), OTSU threshold segmentation, and Canny edge detection provided precise delineation of seasonal water boundaries. Sixteen water quality parameters including pH, turbidity, dissolved oxygen (DO), chemical oxygen demand (COD), total hardness (TH), total alkalinity (TA), total dissolved solids (TDS), electrical conductivity (EC), phosphates (PO4), nitrate (NO3), chloride (Cl[-]), fluoride (F[-]), carbon dioxide (CO2), silica (Si), iodine (I[-]), and chromium (Cr[-]) were analyzed and compared for both sites. Results showed significant LULC changes, particularly at Saman, with scrub forest, built-up areas, and agriculture increasing, while flooded vegetation and open water declined. Significant LULC changes were observed near the marsh wetland, where positive changes of up to 42.17% were seen for built-up land in surrounding regions, with an increase to 5.43 ha in 2021 from 3.14 ha in 2017. Positive change of up to 21.02% was observed for scrub forests, with a rise of 2.18 ha. Vegetation in the marsh region, including seasonal grasses and hydrophytes, has shown an increase in extent of up to 0.39 ha, a rise of 7.12%. Spatiotemporal water occurrence was analyzed across pre-monsoon, monsoon, and post-monsoon seasons using Sentinel-1 data. The study highlights the role of remote sensing and field-based water quality monitoring in understanding ecological shifts and anthropogenic pressures on wetlands. By integrating land-use change and water quality analysis, this research provides critical information for conservation planning, advocating for continued monitoring and adaptive management to sustain these critical ecosystems. PRACTITIONER POINTS: Spatiotemporal surface water occurrence at two geographically different wetlands-lake and marsh wetland; LULC and its change analysis to evaluate the impact on wetlands and their surrounding environment-positive and negative changes; Boundary delineation to examine changes and identify low-lying areas during the pre- and post-monsoon; Comparative analysis of the water quality of two different wetlands; The insectivorous plant Utricularia stellaris was recorded from Northern India at the Saman Bird Sanctuary for the first time.
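
The water-delineation chain mentioned above (MNDWI, followed by Otsu thresholding and Canny edge detection) can be sketched in a few lines of Python with OpenCV. Band file names and scaling below are illustrative assumptions, not the study's cloud-computing workflow.

```python
# Minimal sketch of MNDWI -> Otsu threshold -> Canny boundary on a green/SWIR pair.
# File names are hypothetical; real Sentinel-2 bands would need resampling/co-registration.
import cv2
import numpy as np

green = cv2.imread("B03_green.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)
swir = cv2.imread("B11_swir.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)

mndwi = (green - swir) / (green + swir + 1e-6)        # MNDWI in [-1, 1]
mndwi_8bit = cv2.normalize(mndwi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Otsu's method picks the threshold separating water from non-water pixels.
_, water_mask = cv2.threshold(mndwi_8bit, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Canny edge detection on the binary mask traces the seasonal water boundary.
boundary = cv2.Canny(water_mask, 100, 200)
cv2.imwrite("water_boundary.png", boundary)
```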

RevDate: 2025-05-16
CmpDate: 2025-05-16

Xu W, Althumayri M, Tarman AY, et al (2025)

An integrated wearable fluorescence sensor for E. coli detection in catheter bags.

Biosensors & bioelectronics, 283:117539.

Urinary tract infections (UTIs), including catheter-associated UTIs (CAUTIs), affect millions worldwide. Traditional diagnostic methods, like urinalysis and urine culture, have limitations-urinalysis is fast but lacks sensitivity, while urine culture is accurate but takes up to two days. Here, we present an integrated wearable fluorescence sensor to detect UTI-related bacterial infections early at the point of care by on-body monitoring. The sensor features a hardware platform with a flexible PCB that attaches to a urine catheter bag, emitting excitation light and detecting emission light of E. coli-specific enzymatic reaction for continuous monitoring. Our custom-developed smartphone application allows remote control and data transfer via Bluetooth and performs in situ data analysis without cloud computing. The performance of the device was demonstrated by detecting E. coli at concentrations of 10[0]-10[5] CFU/mL within 9 to 3.5 h, respectively, with high sensitivity and by testing the specificity using Gram-positive (i.e., Staphylococcus epidermidis) and Gram-negative (i.e., Pseudomonas aeruginosa and Klebsiella pneumoniae) pathogens. An in vitro bladder model testing was performed using E.coli-spiked human urine samples to further evaluate the device's practicality. This portable, cost-effective device has the potential to transform the clinical practice of UTI diagnosis with automated and rapid bacterial detection at the point of care.

RevDate: 2025-05-03

Peacock JG, Cole R, Duncan J, et al (2025)

Transforming Military Healthcare Education and Training: AI Integration for Future Readiness.

Military medicine pii:8124498 [Epub ahead of print].

INTRODUCTION: Artificial intelligence (AI) technologies have spread throughout the world and changed the way that many social functions are conducted, including health care. Future large-scale combat missions will likely require health care professionals to utilize AI tools among other tools in providing care for the Warfighter. Despite the need for an AI-capable health care force, medical education lacks an integration of medical AI knowledge. The purpose of this manuscript was to review ways that military health care education can be improved with an understanding of and using AI technologies.

MATERIALS AND METHODS: This article is a review of the literature regarding the integration of AI technologies in medicine and medical education. We do provide examples of quotes and images from a larger USU study on a Faculty Development program centered on learning about AI technologies in health care education. The study is not complete and is not the focus of this article, but was approved by the USU IRB.

RESULTS: Effective integration of AI technologies in military health care education requires military health care educators that are willing to learn how to safely, effectively, and ethically use AI technologies in their own administrative, educational, research, and clinical roles. Together with health care trainees, these faculties can help to build and co-create AI-integrated curricula that will accelerate and enhance the military health care curriculum of tomorrow. Trainees can begin to use generative AI tools, like large language models, to begin to develop their skills and practice the art of generating high-quality AI tools that will improve their studies and prepare them to improve military health care. Integration of AI technologies in the military health care environment requires close military-industry collaborations with AI and security experts to ensure personal and health care information security. Through secure cloud computing, blockchain technologies, and Application Programming Interfaces, among other technologies, military health care facilities and systems can safely integrate AI technologies to enhance patient care, clinical research, and health care education.

CONCLUSIONS: AI technologies are not a dream of the future, they are here, and they are being integrated and implemented in military health care systems. To best prepare the military health care professionals of the future for the reality of medical AI, we must reform military health care education through a combined effort of faculty, students, and industry partners.

RevDate: 2025-05-05

Liu K, Liu YJ, B Chen (2025)

General 3D Vision-Language Model with Fast Rendering and Pre-training Vision-Language Alignment.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

Deep neural network models have achieved remarkable progress in 3D scene understanding while trained in the closed-set setting and with full labels. However, the major bottleneck for the current 3D recognition approach is that these models do not have the capacity to recognize any unseen novel classes beyond the training categories in diverse real-world applications. In the meantime, current state-of-the-art 3D scene understanding approaches primarily require a large number of high-quality labels to train neural networks, which merely perform well in a fully supervised manner. Therefore, we are in urgent need of a framework that can simultaneously be applicable to both 3D point cloud segmentation and detection, particularly in the circumstances where the labels are rather scarce. This work presents a generalized and straightforward framework for dealing with 3D scene understanding when the labeled scenes are quite limited. To extract knowledge for novel categories from the pre-trained vision-language models, we propose a hierarchical feature-aligned pre-training and knowledge distillation strategy to extract and distill meaningful information from large-scale vision-language models, which helps benefit the open-vocabulary scene understanding tasks. To leverage the boundary information, we propose a novel energy-based loss with boundary awareness benefiting from the region-level boundary predictions. To encourage latent instance discrimination and to guarantee efficiency, we propose the unsupervised region-level semantic contrastive learning scheme for point clouds, using confident predictions of the neural network to discriminate the intermediate feature embeddings at multiple stages. In the limited reconstruction case, our proposed approach, termed WS3D++, ranks 1st on the large-scale ScanNet benchmark on both the tasks of semantic segmentation and instance segmentation. Also, our proposed WS3D++ achieves state-of-the-art data-efficient learning performance on the other large-scale real-scene indoor and outdoor datasets S3DIS and SemanticKITTI. Extensive experiments with both indoor and outdoor scenes demonstrated the effectiveness of our approach in both data-efficient learning and open-world few-shot learning. All code, models, and data are to be made publicly available at https://github.com/KangchengLiu, with the code also available at https://drive.google.com/drive/folders/1M58V-PtR8DBEwD296zJkNg_m2qq-MTAP.

RevDate: 2025-05-05

Kathole AB, Singh VK, Goyal A, et al (2025)

Novel load balancing mechanism for cloud networks using dilated and attention-based federated learning with Coati Optimization.

Scientific reports, 15(1):15268.

Load balancing (LB) is a critical aspect of Cloud Computing (CC), enabling efficient access to virtualized resources over the internet. It ensures optimal resource utilization and smooth system operation by distributing workloads across multiple servers, preventing any server from being overburdened or underutilized. This process enhances system reliability, resource efficiency, and overall performance. As cloud computing expands, effective resource management becomes increasingly important, particularly in distributed environments. This study proposes a novel approach to resource prediction for cloud network load balancing, incorporating federated learning within a blockchain framework for secure and distributed management. The model leverages Dilated and Attention-based 1-Dimensional Convolutional Neural Networks with bidirectional long short-term memory (DA-DBL) to predict resource needs based on factors such as processing time, reaction time, and resource availability. The integration of the Random Opposition Coati Optimization Algorithm (RO-COA) enables flexible and efficient load distribution in response to real-time network changes. The proposed method is evaluated on various metrics, including active servers, makespan, Quality of Service (QoS), resource utilization, and power consumption, outperforming existing approaches. The results demonstrate that the combination of federated learning and the RO-COA-based load balancing method offers a robust solution for enhancing cloud resource management.

RevDate: 2025-05-02

Zhang H, Liu M, Liu W, et al (2025)

Performance and energy optimization of ternary optical computers based on tandem queuing system.

Scientific reports, 15(1):15037.

As an emerging computer technology with numerous bits, bit-wise allocation, and extensive parallelism, the ternary optical computer (TOC) will play an important role in platforms such as cloud computing and big data. Previous studies on TOC in handling computational request tasks have mainly focused on performance enhancement while ignoring the impact of performance enhancement on power consumption. The main objective of this study is to investigate the optimization trade-off between performance and energy consumption in TOC systems. To this end, the service model of the TOC is constructed by introducing the M/M/1 and M/M/c models in queuing theory, combined with the framework of the tandem queueing system, and the optimization problem is studied by adjusting the processor partitioning strategy and the number of small TOC (STOC) in the service process. The results show that the value of increasing active STOCs is prominent when system performance significantly depends on response time. However, marginal gains decrease as the number of STOCs grows, accompanied by rising energy costs. Based on these findings, this paper constructs a bi-objective optimization model using response time and energy consumption. It proposes an optimization strategy to achieve bi-objective optimization of performance and energy consumption for TOC by identifying the optimal partitioning strategy and the number of active small optical processors for different load conditions.
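
For context, the M/M/c building block used in the tandem model can be evaluated with the standard Erlang-C formula, which makes the response-time versus energy trade-off easy to see. The sketch below uses a simple linear power model and illustrative parameter values; it does not reproduce the paper's tandem-queue model or its bi-objective optimization.

```python
# Back-of-the-envelope M/M/c response time (Erlang-C) vs. a linear power model
# as the number of active STOCs grows. All parameter values are illustrative.
from math import factorial

def mmc_response_time(lam, mu, c):
    """Mean response time W of an M/M/c queue with arrival rate lam,
    per-server service rate mu, and c servers (requires lam < c * mu)."""
    rho = lam / (c * mu)
    a = lam / mu
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0   # P(an arrival must wait)
    wq = erlang_c / (c * mu - lam)                        # mean queueing delay
    return wq + 1 / mu                                    # add mean service time

lam, mu, p_active = 8.0, 3.0, 50.0    # jobs/s, jobs/s per STOC, watts per active STOC
for c in range(3, 9):
    w = mmc_response_time(lam, mu, c)
    print(f"c={c}: response time={w:.3f}s, power={c * p_active:.0f} W")
```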

RevDate: 2025-05-13
CmpDate: 2025-04-30

Zhu Q, Li Z, Dong J, et al (2025)

Spatiotemporal dataset of dengue influencing factors in Brazil based on geospatial big data cloud computing.

Scientific data, 12(1):712.

Dengue fever has been spreading rapidly worldwide, with a notably high prevalence in South American countries such as Brazil. Its transmission dynamics are governed by the vector population dynamics and the interactions among humans, vectors, and pathogens, which are further shaped by environmental factors. Calculating these environmental indicators is challenging due to the limited spatial coverage of weather station observations and the time-consuming processes involved in downloading and processing local data, such as satellite imagery. This issue is exacerbated in large-scale studies, making it difficult to develop comprehensive and publicly accessible datasets of disease-influencing factors. Addressing this challenge necessitates the efficient data integration methods and the assembly of multi-factorial datasets to aid public health authorities in understanding dengue transmission mechanisms and improving risk prediction models. In response, we developed a population-weighted dataset of 12 dengue risk factors, covering 558 microregions in Brazil over 1252 epidemiological weeks from 2001 to 2024. This dataset and the associated methodology streamline data processing for researchers and can be adapted for other vector-borne disease studies.

RevDate: 2025-04-30

Yang S (2025)

Privacy-Preserving Multi-User Graph Intersection Scheme for Wireless Communications in Cloud-Assisted Internet of Things.

Sensors (Basel, Switzerland), 25(6):.

Cloud-assisted Internet of Things (IoT) has become the core infrastructure of smart society since it solves the computational power, storage, and collaboration bottlenecks of traditional IoT through resource decoupling and capability complementarity. The development of graph databases and cloud-assisted IoT has promoted research on privacy-preserving graph computation. In this article, we propose a secure graph intersection scheme that supports multi-user intersection queries in cloud-assisted IoT. Existing work on graph encryption for intersection queries is designed for a single user; if applied directly to multi-user scenarios, it either imposes high computational and communication costs on data owners or risks leaking the secret key. To solve these problems, we employ proxy re-encryption (PRE), which transforms the encrypted graph data with a re-encryption key so that the graph intersection results can be decrypted by an authorized IoT user using their own private key, while data owners only encrypt their graph data on IoT devices once. In our scheme, different IoT users can query for the intersection of graphs flexibly, while data owners do not need to perform encryption operations every time an IoT user makes a query. Theoretical analysis and simulation results demonstrate that the graph intersection scheme in this paper is secure and practical.

RevDate: 2025-04-30

Ficili I, Giacobbe M, Tricomi G, et al (2025)

From Sensors to Data Intelligence: Leveraging IoT, Cloud, and Edge Computing with AI.

Sensors (Basel, Switzerland), 25(6):.

The exponential growth of connected devices and sensor networks has revolutionized data collection and monitoring across industries, from healthcare to smart cities. However, the true value of these systems lies not merely in gathering data but in transforming it into actionable intelligence. The integration of IoT, cloud computing, edge computing, and AI offers a robust pathway to achieve this transformation, enabling real-time decision-making and predictive insights. This paper explores innovative approaches to combine these technologies, emphasizing their role in enabling real-time decision-making, predictive analytics, and low-latency data processing. This work analyzes several integration approaches among IoT, cloud/edge computing, and AI through examples and applications, highlighting challenges and approaches to seamlessly integrate these techniques to achieve pervasive environmental intelligence. The findings contribute to advancing pervasive environmental intelligence, offering a roadmap for building smarter, more sustainable infrastructure.

RevDate: 2025-04-30
CmpDate: 2025-04-28

Yang H, Dong R, Guo R, et al (2025)

Real-Time Acoustic Scene Recognition for Elderly Daily Routines Using Edge-Based Deep Learning.

Sensors (Basel, Switzerland), 25(6):.

The demand for intelligent monitoring systems tailored to elderly living environments is rapidly increasing worldwide with population aging. Traditional acoustic scene monitoring systems that rely on cloud computing are limited by data transmission delays and privacy concerns. Hence, this study proposes an acoustic scene recognition system that integrates edge computing with deep learning to enable real-time monitoring of elderly individuals' daily activities. The system consists of low-power edge devices equipped with multiple microphones, portable wearable components, and compact power modules, ensuring its seamless integration into the daily lives of the elderly. We developed four deep learning models-convolutional neural network, long short-term memory, bidirectional long short-term memory, and deep neural network-and used model quantization techniques to reduce the computational complexity and memory usage, thereby optimizing them to meet edge device constraints. The CNN model demonstrated superior performance compared to the other models, achieving 98.5% accuracy, an inference time of 2.4 ms, and low memory requirements (25.63 KB allocated for Flash and 5.15 KB for RAM). This architecture provides an efficient, reliable, and user-friendly solution for real-time acoustic scene monitoring in elderly care.
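
Model quantization of the kind mentioned above is commonly done with post-training conversion to TensorFlow Lite. The snippet below shows that generic step under the assumption of a saved Keras model; the model file name is hypothetical and the paper's specific quantization settings are not reproduced.

```python
# Generic post-training quantization sketch for fitting a Keras CNN onto a
# memory-constrained edge device. The model file is a hypothetical placeholder.
import tensorflow as tf

model = tf.keras.models.load_model("acoustic_scene_cnn.h5")   # hypothetical model file

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]          # enable weight quantization
tflite_model = converter.convert()

with open("acoustic_scene_cnn.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KB")
```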

RevDate: 2025-04-30

Vieira D, Oliveira M, Arrais R, et al (2025)

Application of Cloud Simulation Techniques for Robotic Software Validation.

Sensors (Basel, Switzerland), 25(6):.

Continuous Integration and Continuous Deployment are known methodologies for software development that increase the overall quality of the development process. Several robotic software repositories make use of CI/CD tools as an aid to development. However, very few CI pipelines take advantage of using cloud computing to run simulations. Here, a CI pipeline is proposed that takes advantage of such features, applied to the development of ATOM, a ROS-based application capable of carrying out the calibration of generalized robotic systems. The proposed pipeline uses GitHub Actions as a CI/CD engine, AWS RoboMaker as a service for running simulations on the cloud and Rigel as a tool to both containerize ATOM and execute the tests. In addition, a static analysis and unit testing component is implemented with the use of Codacy. The creation of the pipeline was successful, and it was concluded that it constitutes a valuable tool for the development of ATOM and a blueprint for the creation of similar pipelines for other robotic systems.

RevDate: 2025-04-30

Chang YH, Wu FC, HW Lin (2025)

Design and Implementation of ESP32-Based Edge Computing for Object Detection.

Sensors (Basel, Switzerland), 25(6):.

This paper explores the application of the ESP32 microcontroller in edge computing, focusing on the design and implementation of an edge server system to evaluate performance improvements achieved by integrating edge and cloud computing. Responding to the growing need to reduce cloud burdens and latency, this research develops an edge server, detailing the ESP32 hardware architecture, software environment, communication protocols, and server framework. A complementary cloud server software framework is also designed to support edge processing. A deep learning model for object recognition is selected, trained, and deployed on the edge server. Performance evaluation metrics, classification time, MQTT (Message Queuing Telemetry Transport) transmission time, and data from various MQTT brokers are used to assess system performance, with particular attention to the impact of image size adjustments. Experimental results demonstrate that the edge server significantly reduces bandwidth usage and latency, effectively alleviating the load on the cloud server. This study discusses the system's strengths and limitations, interprets experimental findings, and suggests potential improvements and future applications. By integrating AI and IoT, the edge server design and object recognition system demonstrates the benefits of localized edge processing in enhancing efficiency and reducing cloud dependency.
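
As a concrete illustration of the MQTT link between an edge node and the cloud, the sketch below publishes a JSON-encoded detection result with paho-mqtt. The broker address, topic, and payload format are assumptions for illustration; the paper's ESP32 firmware and broker configuration are not shown.

```python
# Minimal sketch: an edge node reports an object-detection result to a cloud
# server over MQTT. Broker, topic, and payload schema are hypothetical.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()   # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion argument
client.connect("broker.example.com", 1883, keepalive=60)   # hypothetical broker
client.loop_start()

result = {"label": "person", "confidence": 0.91, "ts": time.time()}
client.publish("edge/esp32-cam-01/detections", json.dumps(result), qos=1)

client.loop_stop()
client.disconnect()
```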

RevDate: 2025-04-30

Alsadie D, M Alsulami (2025)

Modified grey wolf optimization for energy-efficient internet of things task scheduling in fog computing.

Scientific reports, 15(1):14730.

Fog-cloud computing has emerged as a transformative paradigm for managing the growing demands of Internet of Things (IoT) applications, where efficient task scheduling is crucial for optimizing system performance. However, existing task scheduling methods often struggle to balance makespan minimization and energy efficiency in dynamic and resource-constrained fog-cloud environments. Addressing this gap, this paper introduces a novel Task Scheduling algorithm based on a modified Grey Wolf Optimization approach (TS-GWO), tailored specifically for IoT requests in fog-cloud systems. The proposed TS-GWO incorporates innovative operators to enhance exploration and exploitation capabilities, enabling the identification of optimal scheduling solutions. Extensive evaluations using both synthetic and real-world datasets, such as NASA Ames iPSC and HPC2N workloads, demonstrate the superior performance of TS-GWO over established metaheuristic methods. Notably, TS-GWO achieves improvements in makespan by up to 46.15% and reductions in energy consumption by up to 28.57%. These results highlight the potential of TS-GWO to effectively address task scheduling challenges in fog-cloud environments, paving the way for its application in broader optimization tasks.
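
For readers new to the underlying metaheuristic, the sketch below shows a generic (unmodified) Grey Wolf Optimization loop on a toy continuous objective. It only conveys the alpha/beta/delta search mechanics that TS-GWO builds on; the paper's scheduling-specific encoding, modified operators, and workload models are not included.

```python
# Generic Grey Wolf Optimization sketch (continuous form), for illustration only.
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best wolves
        a = 2 - 2 * t / n_iter                                 # decreases 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3, lo, hi)           # average of three pulls
    fitness = np.array([objective(w) for w in wolves])
    return wolves[fitness.argmin()]

# Toy usage: minimize the sphere function in 5 dimensions.
best = gwo(lambda x: float(np.sum(x**2)), dim=5, bounds=(-10, 10))
print(best)
```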

RevDate: 2025-04-28
CmpDate: 2025-04-26

Yang H, Xiong M, Y Yao (2025)

MODIS-Based Spatiotemporal Inversion and Driving-Factor Analysis of Cloud-Free Vegetation Cover in Xinjiang from 2000 to 2024.

Sensors (Basel, Switzerland), 25(8):.

The Xinjiang Uygur Autonomous Region, characterized by its complex and fragile ecosystems, has faced ongoing ecological degradation in recent years, challenging national ecological security and sustainable development. To promote the sustainable development of regional ecological and landscape conservation, this study investigates Fractional Vegetation Cover (FVC) dynamics in Xinjiang. Existing studies often lack recent data and exhibit limitations in the selection of driving factors. To mitigate these issues, this study utilized Google Earth Engine (GEE) and cloud-free MOD13A2.061 data to systematically generate comprehensive FVC products for Xinjiang from 2000 to 2024. Additionally, a comprehensive and quantitative analysis of up to 15 potential driving factors was conducted, providing an updated and more robust understanding of vegetation dynamics in the region. This study integrated advanced methodologies, including spatiotemporal statistical analysis, optimized spatial scaling, trend analysis, and Geographical Detector (GeoDetector). Notably, we propose a novel approach combining a Theil-Sen Median trend analysis with a Hurst index to predict future vegetation trends, which to some extent enhances the persuasiveness of predictions compared with using the Hurst index alone. The following are the key experimental results: (1) Over the 25-year study period, Xinjiang's vegetation cover exhibited a pronounced north-south gradient, with significantly higher FVC in the northern regions compared to the southern regions. (2) A time series analysis revealed an overall fluctuating upward trend in the FVC, accompanied by increasing volatility and decreasing stability over time. (3) Spatial statistical analysis using Moran's I and the coefficient of variation identified 15 km as the optimal spatial scale for FVC analysis. (4) Land use type, vegetation type, and soil type emerged as critical factors, with each contributing over 20% to the explanatory power of FVC variations. (5) To elucidate spatial heterogeneity mechanisms, this study conducted ecological subzone-based analyses of vegetation dynamics and drivers.
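
The Theil-Sen plus Hurst logic described above can be illustrated for a single pixel's FVC time series as follows. The series is synthetic and the Hurst estimate is a crude single-window rescaled-range approximation, so this is only a sketch of the reasoning, not the study's GEE implementation.

```python
# Sketch: Theil-Sen slope gives the direction of past FVC change; a rough
# rescaled-range Hurst exponent suggests whether that trend may persist (H > 0.5).
import numpy as np
from scipy.stats import theilslopes

def hurst_rs(series):
    """Very rough single-window R/S Hurst estimate, for illustration only."""
    x = np.asarray(series, dtype=float)
    dev = np.cumsum(x - x.mean())
    r = dev.max() - dev.min()
    s = x.std()
    return np.log(r / s) / np.log(len(x))

years = np.arange(2000, 2025)
fvc = 0.30 + 0.004 * (years - 2000) + np.random.default_rng(1).normal(0, 0.01, years.size)

slope, intercept, lo, hi = theilslopes(fvc, years)
h = hurst_rs(fvc)
trend = "improving" if slope > 0 else "degrading"
persistence = "likely to persist" if h > 0.5 else "likely to reverse"
print(f"Theil-Sen slope={slope:.4f}/yr ({trend}), Hurst={h:.2f} ({persistence})")
```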

RevDate: 2025-05-02
CmpDate: 2025-04-26

Ahoa E, Kassahun A, Verdouw C, et al (2025)

Challenges and Solution Directions for the Integration of Smart Information Systems in the Agri-Food Sector.

Sensors (Basel, Switzerland), 25(8):.

Traditional farming has evolved from standalone computing systems to smart farming, driven by advancements in digitalization. This has led to the proliferation of diverse information systems (IS), such as IoT and sensor systems, decision support systems, and farm management information systems (FMISs). These systems often operate in isolation, limiting their overall impact. The integration of IS into connected smart systems is widely regarded as a key driver to tackle these issues. However, it is a complex, multi-faceted issue that is not easily achievable. Previous studies have offered valuable insights, but they often focus on specific cases, such as individual IS and certain integration aspects, lacking a comprehensive overview of various integration dimensions. This systematic review of 74 scientific papers on IS integration addresses this gap by providing an overview of the digital technologies involved, integration levels and types, barriers hindering integration, and available approaches to overcoming these challenges. The findings indicate that integration primarily relies on a point-to-point approach, followed by cloud-based integration. Enterprise service bus, hub-and-spoke, and semantic web approaches are mentioned less frequently but are gaining interest. The study identifies 27 integration challenges and discusses them under three main areas: organizational, technological, and data governance-related challenges. Technologies such as blockchain, data spaces, AI, edge computing and microservices, and service-oriented architecture methods are addressed as solutions for data governance and interoperability issues. The insights from the study can help enhance interoperability, leading to data-driven smart farming that increases food production, mitigates climate change, and optimizes resource usage.

RevDate: 2025-04-25

Pietris J, Bahrami B, LaHood B, et al (2025)

Cataract Surgery Registries: History, Utility, Barriers and Future.

Journal of cataract and refractive surgery pii:02158034-990000000-00604 [Epub ahead of print].

Cataract surgery databases have become indispensable tools in ophthalmology, providing extensive data that enhance surgical practices and patient care. This narrative review traces the development of these databases and summarises some of their significant contributions, such as improved surgical outcomes, informed clinical guidelines, and enhanced quality assurance. There are significant barriers to establishing and maintaining cataract surgery databases, including data protection and management challenges, economic constraints, technological hurdles, and ethical considerations. These obstacles complicate efforts to ensure data accuracy, standardisation, and interoperability across diverse healthcare settings. Large language models and artificial intelligence have the potential to streamline data collection and analysis for the future of these databases. Innovations like blockchain for data security and cloud computing for scalability are examined as solutions to current limitations. Addressing the existing challenges and leveraging technological advancements will be crucial for the continued evolution and utility of these databases, ensuring they remain pivotal in advancing cataract surgery and patient care.

RevDate: 2025-04-24

Beyvers S, Jelonek L, Goesmann A, et al (2025)

Bakta Web - rapid and standardized genome annotation on scalable infrastructures.

Nucleic acids research pii:8118971 [Epub ahead of print].

The Bakta command line application is widely used and one of the most established tools for bacterial genome annotation. It balances comprehensive annotation with computational efficiency via alignment-free sequence identifications. However, the usage of command line software tools and the interpretation of result files in various formats might be challenging and pose technical barriers. Here, we present the recent updates on the Bakta web server, a user-friendly web interface for conducting and visualizing annotations using Bakta without requiring command line expertise or local computing resources. Key features include interactive visualizations through circular genome plots, linear genome browsers, and searchable data tables facilitating the interpretation of complex annotation results. The web server generates standard bioinformatics outputs (GFF3, GenBank, EMBL) and annotates diverse genomic features, including coding sequences, non-coding RNAs, small open reading frames (sORFs), and many more. The development of an auto-scaling cloud-native architecture and improved database integration led to substantially faster processing times and higher throughputs. The system supports FAIR principles via extensive cross-reference links to external databases, including RefSeq, UniRef, and Gene Ontology. Also, novel features have been implemented to foster sharing and collaborative interpretation of results. The web server is freely available at https://bakta.computational.bio.

RevDate: 2025-05-01
CmpDate: 2025-05-01

Xiao J, Wu J, Liu D, et al (2025)

Improved Pine Wood Nematode Disease Diagnosis System Based on Deep Learning.

Plant disease, 109(4):862-874.

Pine wilt disease caused by the pine wood nematode, Bursaphelenchus xylophilus, has profound implications for global forestry ecology. Conventional PCR methods need long operating time and are complicated to perform. The need for rapid and effective detection methodologies to curtail its dissemination and reduce pine felling has become more apparent. This study initially proposed the use of fluorescence recognition for the detection of pine wood nematode disease, accompanied by the development of a dedicated fluorescence detection system based on deep learning. This system possesses the capability to perform excitation, detection, as well as data analysis and transmission of test samples. In exploring fluorescence recognition methodologies, the efficacy of five conventional machine learning algorithms was juxtaposed with that of You Only Look Once version 5 and You Only Look Once version 10, both in the pre- and post-image processing stages. Moreover, enhancements were introduced to the You Only Look Once version 5 model. The network's aptitude for discerning features across varied scales and resolutions was bolstered through the integration of Res2Net. Meanwhile, a SimAM attention mechanism was incorporated into the backbone network, and the original PANet structure was replaced by the Bi-FPN within the Head network to amplify feature fusion capabilities. The enhanced YOLOv5 model demonstrates significant improvements, particularly in the recognition of large-size images, achieving an accuracy improvement of 39.98%. The research presents a novel detection system for pine nematode detection, capable of detecting samples with DNA concentrations as low as 1 fg/μl within 20 min. This system integrates detection instruments, laptops, cloud computing, and smartphones, holding tremendous potential for field application.

RevDate: 2025-05-13

Yin Y, Liu B, Zhang Y, et al (2025)

Wafer-Scale Nanoprinting of 3D Interconnects beyond Cu.

ACS nano, 19(18):17578-17588.

Cloud operations and services, as well as many other modern computing tasks, require hardware that is run by very densely packed integrated circuits (ICs) and heterogenous ICs. The performance of these ICs is determined by the stability and properties of the interconnects between the semiconductor devices and ICs. Although some ICs with 3D interconnects are commercially available, there has been limited progress on 3D printing utilizing emerging nanomaterials. Moreover, laying out reliable 3D metal interconnects in ICs with the appropriate electrical and physical properties remains challenging. Here, we propose high-throughput 3D interconnection with nanoscale precision by leveraging lines of forces. We successfully nanoprinted multiscale and multilevel Au, Ir, and Ru 3D interconnects on the wafer scale in non-vacuum conditions using a pulsed electric field. The ON phase of the pulsed field initiates in situ printing of nanoparticle (NP) deposition into interconnects, whereas the OFF phase allows the gas flow to evenly distribute the NPs over an entire wafer. Characterization of the 3D interconnects confirms their excellent uniformity, electrical properties, and free-form geometries, far exceeding those of any 3D-printed interconnects. Importantly, their measured resistances approach the theoretical values calculated here. The results demonstrate that 3D nanoprinting can be used to fabricate thinner and faster interconnects, which can enhance the performance of dense ICs; therefore, 3D nanoprinting can complement lithography and resolve the challenges encountered in the fabrication of critical device features.

RevDate: 2025-04-23

Pérez-Sanpablo AI, Quinzaños-Fresnedo J, Gutiérrez-Martínez J, et al (2025)

Transforming Medical Imaging: The Role of Artificial Intelligence Integration in PACS for Enhanced Diagnostic Accuracy and Workflow Efficiency.

Current medical imaging pii:CMIR-EPUB-147831 [Epub ahead of print].

INTRODUCTION: To examine the integration of artificial intelligence (AI) into Picture Archiving and Communication Systems (PACS) and assess its impact on medical imaging, diagnostic workflows, and patient outcomes. This review explores the technological evolution, key advancements, and challenges associated with AI-enhanced PACS in healthcare settings.

METHODS: A comprehensive literature search was conducted in PubMed, Scopus, and Web of Science databases, covering articles from January 2000 to October 2024. Search terms included "artificial intelligence," "machine learning," "deep learning," and "PACS," combined with keywords related to diagnostic accuracy and workflow optimization. Articles were selected based on predefined inclusion and exclusion criteria, focusing on peer-reviewed studies that discussed AI applications in PACS, innovations in medical imaging, and workflow improvements. A total of 183 studies met the inclusion criteria, comprising original research, systematic reviews, and meta-analyses.

RESULTS: AI integration in PACS has significantly enhanced diagnostic accuracy, achieving improvements of up to 93.2% in some imaging modalities, such as early tumor detection and anomaly identification. Workflow efficiency has been transformed, with diagnostic times reduced by up to 90% for critical conditions like intracranial hemorrhages. Convolutional neural networks (CNNs) have demonstrated exceptional performance in image segmentation, achieving up to 94% accuracy, and in motion artifact correction, further enhancing diagnostic precision. Natural language processing (NLP) tools have expedited radiology workflows, reducing reporting times by 30-50% and improving consistency in report generation. Cloud-based solutions have also improved accessibility, enabling real-time collaboration and remote diagnostics. However, challenges in data privacy, regulatory compliance, and interoperability persist, emphasizing the need for standardized frameworks and robust security protocols.

CONCLUSIONS: The integration of AI into PACS represents a pivotal transformation in medical imaging, offering improved diagnostic workflows and potential for personalized patient care. Addressing existing challenges and enhancing interoperability will be essential for maximizing the benefits of AI-powered PACS in healthcare.

RevDate: 2025-04-22

Rezaee K, Nazerian A, Ghayoumi Zadeh H, et al (2025)

Smart IoT-driven biosensors for EEG-based driving fatigue detection: A CNN-XGBoost model enhancing healthcare quality.

BioImpacts : BI, 15:30586.

INTRODUCTION: Drowsy driving is a significant contributor to accidents, accounting for 35 to 45% of all crashes. Implementation of an internet of things (IoT) system capable of alerting fatigued drivers has the potential to substantially reduce road fatalities and associated issues. Often referred to as the internet of medical things (IoMT), this system leverages a combination of biosensors, actuators, detectors, cloud-based and edge computing, machine intelligence, and communication networks to deliver reliable performance and enhance quality of life in smart societies.

METHODS: Electroencephalogram (EEG) signals offer potential insights into fatigue detection. However, accurately identifying fatigue from brain signals is challenging due to inter-individual EEG variability and the difficulty of collecting sufficient data during periods of exhaustion. To address these challenges, a novel evolutionary optimization method combining convolutional neural networks (CNNs) and XGBoost, termed CNN-XGBoost Evolutionary Learning, was proposed to improve fatigue identification accuracy. The research explored various subbands of decomposed EEG data and introduced an innovative approach of transforming EEG recordings into RGB scalograms. These scalogram images were processed using a 2D Convolutional Neural Network (2DCNN) to extract essential features, which were subsequently fed into a dense layer for training.
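
As a rough illustration of the scalogram pipeline described above, the sketch below converts an EEG segment into a time-frequency image with a continuous wavelet transform and passes it through a small 2D CNN feature extractor. The wavelet choice, scales, layer sizes, and the use of PyWavelets and PyTorch are assumptions for illustration; the paper's exact preprocessing, network architecture, and XGBoost/evolutionary stages are not reproduced here.

# Sketch (assumptions): EEG segment -> wavelet scalogram image -> 2D CNN features.
import numpy as np
import pywt
import torch
import torch.nn as nn

def eeg_to_scalogram(segment: np.ndarray, scales=np.arange(1, 65)) -> torch.Tensor:
    """Continuous wavelet transform of a 1-D EEG segment, scaled to [0, 1] and
    replicated to three channels as a stand-in for an RGB scalogram."""
    coeffs, _ = pywt.cwt(segment, scales, "morl")
    img = np.abs(coeffs)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    return torch.tensor(img, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1)

feature_extractor = nn.Sequential(             # small 2D CNN, hypothetical sizes
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),                              # -> 32-dimensional feature vector
)

segment = np.random.randn(512)                 # placeholder for one EEG window
features = feature_extractor(eeg_to_scalogram(segment).unsqueeze(0))
print(features.shape)                          # torch.Size([1, 32])

In the paper's design these extracted features feed a dense layer (and, per the abstract, an XGBoost stage); the snippet stops at feature extraction to keep the example self-contained.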

RESULTS: The resulting model achieved a noteworthy accuracy of 99.80% on a substantial driver fatigue dataset, surpassing existing methods.

CONCLUSION: By integrating this approach into an IoT framework, researchers effectively addressed previous challenges and established an artificial intelligence of things (AIoT) infrastructure for critical driving conditions. This IoT-based system optimizes data processing, reduces computational complexity, and enhances overall system performance, enabling accurate and timely detection of fatigue in extreme driving environments.

RevDate: 2025-04-22
CmpDate: 2025-04-19

Alzakari SA, Alamgeer M, Alashjaee AM, et al (2025)

Heuristically enhanced multi-head attention based recurrent neural network for denial of wallet attacks detection on serverless computing environment.

Scientific reports, 15(1):13538.

Denial of Wallet (DoW) attacks are a cyber threat designed to deplete an organization's financial resources by generating excessive charges on its cloud computing (CC) and serverless computing platforms. These attacks are particularly relevant to serverless deployments because of features such as auto-scaling, pay-as-you-go billing, limited operator control, and unbounded cost growth. Serverless computing, often referred to as Function-as-a-Service (FaaS), is a CC model that lets developers build and run applications without managing traditional server infrastructure. Detecting DoW threats involves monitoring and analyzing system-level resource consumption on the underlying machines. Efficient and precise detection of internal DoW threats remains a crucial challenge, and timely recognition is important for preventing damage, as DoW attacks exploit the financial model of serverless environments and undermine the cost structure and operational integrity of services. In this study, a Multi-Head Attention-based Recurrent Neural Network for Denial of Wallet Attacks Detection (MHARNN-DoWAD) technique is developed to detect DoW attacks in serverless computing environments. First, the MHARNN-DoWAD model preprocesses the data using min-max normalization to bring the inputs into a consistent range. Next, the wolf pack predation (WPP) method is employed for feature selection. For the detection and classification of DoW attacks, a multi-head attention-based bi-directional gated recurrent unit (MHA-BiGRU) model is utilized. Finally, an improved secretary bird optimizer algorithm (ISBOA) is used for hyperparameter selection to optimize the detection results of the MHA-BiGRU model. A comprehensive set of simulations demonstrates the promise of the MHARNN-DoWAD method, with experimental validation showing a superior accuracy of 98.30% over existing models.
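
Two of the named components are easy to make concrete: min-max normalization, x' = (x - min) / (max - min), and a multi-head-attention BiGRU classifier. The sketch below shows both in PyTorch; the feature dimension, hidden size, and head count are assumptions, and the paper's wolf pack predation feature selection and ISBOA hyperparameter tuning are not reproduced.

# Sketch (assumptions): min-max preprocessing and an MHA + BiGRU classifier for DoW detection.
import torch
import torch.nn as nn

def min_max_normalize(x: torch.Tensor) -> torch.Tensor:
    """Scale each feature column to [0, 1]: x' = (x - min) / (max - min)."""
    mins, maxs = x.min(dim=0).values, x.max(dim=0).values
    return (x - mins) / (maxs - mins + 1e-9)

class MHABiGRU(nn.Module):
    def __init__(self, n_features=20, hidden=64, heads=4, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, features)
        h, _ = self.gru(x)                      # BiGRU sequence encoding
        a, _ = self.attn(h, h, h)               # multi-head self-attention
        return self.head(a.mean(dim=1))         # pooled logits: attack vs. benign

raw = torch.rand(8, 50, 20)                     # synthetic (batch, time steps, features)
x = min_max_normalize(raw.reshape(-1, 20)).reshape(8, 50, 20)
print(MHABiGRU()(x).shape)                      # torch.Size([8, 2])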

RevDate: 2025-05-01

Brito CV, Ferreira PG, JT Paulo (2025)

Exploiting Trusted Execution Environments and Distributed Computation for Genomic Association Tests.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

Breakthroughs in sequencing technologies led to an exponential growth of genomic data, providing novel biological insights and therapeutic applications. However, analyzing large amounts of sensitive data raises key data privacy concerns, specifically when the information is outsourced to untrusted third-party infrastructures for data storage and processing (e.g., cloud computing). We introduce Gyosa, a secure and privacy-preserving distributed genomic analysis solution. By leveraging trusted execution environments (TEEs), Gyosa allows users to confidentially delegate their GWAS analysis to untrusted infrastructures. Gyosa implements a computation partitioning scheme that reduces the computation done inside the TEEs while safeguarding the users' genomic data privacy. By integrating this security scheme in Glow, Gyosa provides a secure and distributed environment that facilitates diverse GWAS studies. The experimental evaluation validates the applicability and scalability of Gyosa, reinforcing its ability to provide enhanced security guarantees.
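
For context on the kind of computation being outsourced, the sketch below runs a basic chi-square association test for one variant on a genotype-by-phenotype contingency table. In a Gyosa-style partitioning, aggregating raw genotypes into counts is the sensitive step that would stay inside the TEE, while the test on the aggregates could run on the untrusted side; the synthetic data and the use of SciPy are illustrative assumptions, not details of Gyosa itself.

# Sketch (assumptions): a single-variant GWAS association test on aggregated counts.
import numpy as np
from scipy.stats import chi2_contingency

def aggregate_counts(genotypes: np.ndarray, phenotypes: np.ndarray) -> np.ndarray:
    """2x3 table: rows = control/case, columns = genotype 0/1/2 (the sensitive step)."""
    table = np.zeros((2, 3), dtype=int)
    for g, p in zip(genotypes, phenotypes):
        table[p, g] += 1
    return table

# Synthetic per-individual data standing in for real (private) genomic inputs.
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=1000)        # 0/1/2 minor-allele counts
phenotypes = rng.integers(0, 2, size=1000)       # 0 = control, 1 = case

counts = aggregate_counts(genotypes, phenotypes)   # would run inside the TEE
chi2, p_value, _, _ = chi2_contingency(counts)     # could run on the untrusted side
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")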

RevDate: 2025-04-20

Kocak B, Ponsiglione A, Romeo V, et al (2025)

Radiology AI and sustainability paradox: environmental, economic, and social dimensions.

Insights into imaging, 16(1):88.

Artificial intelligence (AI) is transforming radiology by improving diagnostic accuracy, streamlining workflows, and enhancing operational efficiency. However, these advancements come with significant sustainability challenges across environmental, economic, and social dimensions. AI systems, particularly deep learning models, require substantial computational resources, leading to high energy consumption, increased carbon emissions, and hardware waste. Data storage and cloud computing further exacerbate the environmental impact. Economically, the high costs of implementing AI tools often outweigh the demonstrated clinical benefits, raising concerns about their long-term viability and equity in healthcare systems. Socially, AI risks perpetuating healthcare disparities through biases in algorithms and unequal access to technology. On the other hand, AI has the potential to improve sustainability in healthcare by reducing low-value imaging, optimizing resource allocation, and improving energy efficiency in radiology departments. This review addresses the sustainability paradox of AI from a radiological perspective, exploring its environmental footprint, economic feasibility, and social implications. Strategies to mitigate these challenges are also discussed, alongside a call for action and directions for future research.

CRITICAL RELEVANCE STATEMENT: By adopting an informed and holistic approach, the radiology community can ensure that AI's benefits are realized responsibly, balancing innovation with sustainability. This effort is essential to align technological advancements with environmental preservation, economic sustainability, and social equity.

KEY POINTS: AI has an ambivalent potential, capable of both exacerbating global sustainability issues and offering increased productivity and accessibility. Addressing AI sustainability requires a broad perspective accounting for environmental impact, economic feasibility, and social implications. By embracing the duality of AI, the radiology community can adopt informed strategies at individual, institutional, and collective levels to maximize its benefits while minimizing negative impacts.

RevDate: 2025-05-05
CmpDate: 2025-04-16

Ansari N, Kumari P, Kumar R, et al (2025)

Seasonal patterns of air pollution in Delhi: interplay between meteorological conditions and emission sources.

Environmental geochemistry and health, 47(5):175.

Air pollution (AP) poses a significant public health risk, particularly in developing countries, where it contributes to a growing prevalence of health issues. This study investigates seasonal variations in key air pollutants, including particulate matter, nitrogen dioxide (NO2), sulfur dioxide (SO2), carbon monoxide (CO), and ozone (O3), in New Delhi during 2024. Utilizing Sentinel-5 satellite data processed through Google Earth Engine (GEE), a cloud-based geospatial analysis platform, the study evaluates pollutant dynamics during pre-monsoon and post-monsoon seasons. The methodology involved programming in JavaScript to extract pollution parameters, applying cloud filters to eliminate contaminated data, and generating average pollution maps at monthly, seasonal, and annual intervals. The results revealed distinct seasonal pollution patterns. Pre-monsoon root mean square error (RMSE) values for CO, NO2, SO2, and O3 were 0.13, 2.58, 4.62, and 2.36, respectively, while post-monsoon values were 0.17, 2.41, 4.31, and 4.60. Winter months exhibited the highest pollution levels due to increased emissions from biomass burning, vehicular activity, and industrial operations, coupled with atmospheric inversions. Conversely, monsoon months saw a substantial reduction in pollutant levels due to wet deposition and improved dispersion driven by stronger winds. Additionally, post-monsoon crop residue burning emerged as a major episodic pollution source. This study underscores the utility of Sentinel-5 products in monitoring urban air pollution and provides valuable insights for policymakers to develop targeted mitigation strategies, particularly for urban megacities like Delhi, where seasonal and source-specific interventions are crucial for reducing air pollution and its associated health risks.
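
The workflow described above (querying Sentinel-5P in Earth Engine, filtering, and averaging over a season) looks roughly like the Python sketch below. The dataset ID, band name, date window, bounding box, and reduction scale are assumptions for illustration; the study's own JavaScript scripts and cloud-filtering details are not reproduced here.

# Sketch (assumptions): seasonal mean NO2 over Delhi from Sentinel-5P via the
# Earth Engine Python API. Requires an authenticated Earth Engine account.
import ee

ee.Initialize()

delhi = ee.Geometry.Rectangle([76.8, 28.4, 77.4, 28.9])    # rough bounding box (assumed)

no2 = (
    ee.ImageCollection("COPERNICUS/S5P/OFFL/L3_NO2")
    .select("NO2_column_number_density")
    .filterDate("2024-03-01", "2024-06-01")                 # pre-monsoon window (assumed)
    .filterBounds(delhi)
)

seasonal_mean = no2.mean().clip(delhi)                      # average NO2 column map

stats = seasonal_mean.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=delhi, scale=1113.2  # ~1.1 km nominal resolution
)
print(stats.getInfo())                                      # mean column density (mol/m^2)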

RevDate: 2025-04-16

Zao JK, Wu JT, Kanyimbo K, et al (2024)

Design of a Trustworthy Cloud-Native National Digital Health Information Infrastructure for Secure Data Management and Use.

Oxford open digital health, 2:oqae043.

Since 2022, the Malawi Ministry of Health (MoH) has designated the development of a National Digital Health Information System (NDHIS) as one of the most important pillars of its national health strategy. This system is built upon a distributed computing infrastructure employing the following state-of-the-art technologies: (i) digital healthcare devices to capture medical data; (ii) Kubernetes-based Cloud-Native Computing architecture to simplify system management and service deployment; (iii) Zero-Trust Secure Communication to protect confidentiality, integrity and access rights of medical data transported over the Internet; (iv) Trusted Computing to allow medical data to be processed by certified software without compromising data privacy and sovereignty. The trustworthiness of this system, including reliability, security, privacy, and business integrity, is ensured by a peer-to-peer network of trusted medical information guards deployed as gatekeepers of its computing facilities. This NDHIS can help Malawi attain universal health coverage by 2030 through its scalability and operational efficiency. It should improve medical data quality and security by adopting a paperless approach. It will also enable MoH to offer data rental services to healthcare researchers and AI model developers around the world. This project is spearheaded by the Digital Health Division (DHD) under MoH. The trustworthy computing infrastructure was designed by a taskforce assembled by the DHD in collaboration with Luke International in Norway, and a consortium of hardware and software solution providers in Taiwan. A prototype that can connect community clinics with a district hospital has been tested at Taiwan Pingtung Christian Hospital.

RevDate: 2025-05-12
CmpDate: 2025-05-12

Dessevres E, Valderrama M, M Le Van Quyen (2025)

Artificial intelligence for the detection of interictal epileptiform discharges in EEG signals.

Revue neurologique, 181(5):411-419.

INTRODUCTION: Over the past decades, the integration of modern technologies - such as electronic health records, cloud computing, and artificial intelligence (AI) - has revolutionized the collection, storage, and analysis of medical data in neurology. In epilepsy, Interictal Epileptiform Discharges (IEDs) are the most established biomarker, indicating an increased likelihood of seizures. Their detection traditionally relies on visual EEG assessment, a time-consuming and subjective process contributing to a high misdiagnosis rate. These limitations have spurred the development of automated AI-driven approaches aimed at improving accuracy and efficiency in IED detection.

METHODS: Research on automated IED detection began 45 years ago, spanning from morphological methods to deep learning techniques. In this review, we examine various IED detection approaches, evaluating their performance and limitations.

RESULTS: Traditional machine learning and deep learning methods have produced the most promising results to date, and their application in IED detection continues to grow. Today, AI-driven tools are increasingly integrated into clinical workflows, assisting clinicians in identifying abnormalities while reducing false-positive rates.

DISCUSSION: To optimize the clinical implementation of automated AI-based IED detection, it is essential to render the codes publicly available and to standardize the datasets and metrics. Establishing uniform benchmarks will enable objective model comparisons and help determine which approaches are best suited for clinical use.


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences, at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.



Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version
