QUERY RUN: 01 Aug 2025 at 01:42
HITS: 4132

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 01 Aug 2025 at 01:42

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
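The query above can be reproduced programmatically. Below is a minimal sketch using Biopython's Entrez module; the contact e-mail and retmax value are placeholders, not part of the ESP pipeline.

```python
# Minimal sketch: re-running the bibliography query with Biopython's Entrez module.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
         'NOT pmcbook NOT ispreviousversion')

# esearch returns matching PMIDs; efetch could then pull the full records.
handle = Entrez.esearch(db="pubmed", term=QUERY, retmax=100)
record = Entrez.read(handle)
handle.close()

print("Total hits:", record["Count"])
print("First PMIDs:", record["IdList"][:10])
```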

Citations: The Papers (from PubMed®)


RevDate: 2025-07-31
CmpDate: 2025-07-31

Beyer D, Delancey E, L McLeod (2025)

Automating Colon Polyp Classification in Digital Pathology by Evaluation of a "Machine Learning as a Service" AI Model: Algorithm Development and Validation Study.

JMIR formative research, 9:e67457 pii:v9i1e67457.

BACKGROUND: Artificial intelligence (AI) models are increasingly being developed to improve the efficiency of pathological diagnoses. Rapid technological advancements are leading to more widespread availability of AI models that can be used by domain-specific experts (ie, pathologists and medical imaging professionals). This study presents an innovative AI model for the classification of colon polyps, developed using AutoML algorithms that are readily available from cloud-based machine learning platforms. Our aim was to explore if such AutoML algorithms could generate robust machine learning models that are directly applicable to the field of digital pathology.

OBJECTIVE: The objective of this study was to evaluate the effectiveness of AutoML algorithms in generating robust machine learning models for the classification of colon polyps and to assess their potential applicability in digital pathology.

METHODS: Whole-slide images from both public and institutional databases were used to develop a training set for 3 classifications of common entities found in colon polyps: hyperplastic polyps, tubular adenomas, and normal colon. The AI model was developed using an AutoML algorithm from Google's Vertex AI platform. A test subset of the data was withheld to assess model accuracy, sensitivity, and specificity.

RESULTS: The AI model displayed a high accuracy rate, identifying tubular adenoma and hyperplastic polyps with 100% success and normal colon with 97% success. Sensitivity and specificity error rates were very low.

CONCLUSIONS: This study demonstrates how accessible AutoML algorithms can readily be used in digital pathology to develop diagnostic AI models using whole-slide images. Such models could be used by pathologists to improve diagnostic efficiency.
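For readers unfamiliar with the "Machine Learning as a Service" workflow described above, the following hedged sketch shows how an AutoML image-classification job is typically driven from the Vertex AI Python SDK; the project ID, bucket, manifest file, and training budget are hypothetical placeholders, not the authors' configuration.

```python
# Hedged sketch of an AutoML image-classification workflow on Vertex AI.
# Project, region, bucket, manifest, and budget are hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-pathology-project", location="us-central1")

# The manifest CSV lists gs:// image URIs with labels such as
# "tubular_adenoma", "hyperplastic_polyp", and "normal_colon".
dataset = aiplatform.ImageDataset.create(
    display_name="colon-polyp-slides",
    gcs_source="gs://my-bucket/polyp_manifest.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

job = aiplatform.AutoMLImageTrainingJob(
    display_name="polyp-automl",
    prediction_type="classification",
)

model = job.run(
    dataset=dataset,
    model_display_name="polyp-classifier",
    budget_milli_node_hours=8000,  # roughly 8 node-hours of AutoML training
)

endpoint = model.deploy(machine_type="n1-standard-4")  # optional online serving
```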

RevDate: 2025-07-31

Delogu F, Aspinall C, Ray K, et al (2025)

Breaking barriers: broadening neuroscience education via cloud platforms and course-based undergraduate research.

Frontiers in neuroinformatics, 19:1608900.

This study demonstrates the effectiveness of integrating cloud computing platforms with Course-based Undergraduate Research Experiences (CUREs) to broaden access to neuroscience education. Over four consecutive spring semesters (2021-2024), a total of 42 undergraduate students at Lawrence Technological University participated in computational neuroscience CUREs using brainlife.io, a cloud-computing platform. Students conducted anatomical and functional brain imaging analyses on openly available datasets, testing original hypotheses about brain structure variations. The program evolved from initial data processing to hypothesis-driven research exploring the influence of age, gender, and pathology on brain structures. By combining open science and big data within a user-friendly cloud environment, the CURE model provided hands-on, problem-based learning to students with limited prior knowledge. This approach addressed key limitations of traditional undergraduate research experiences, including scalability, early exposure, and inclusivity. Students consistently worked with MRI datasets, focusing on volumetric analysis of brain structures, and developed scientific communication skills by presenting findings at annual research days. The success of this program demonstrates its potential to democratize neuroscience education, enabling advanced research without extensive laboratory facilities or prior experience, and promoting original undergraduate research using real-world datasets.

RevDate: 2025-07-30

Saghafi S, Kiarashi Y, Rodriguez AD, et al (2025)

Indoor Localization Using Multi-Bluetooth Beacon Deployment in a Sparse Edge Computing Environment.

Digital twins and applications, 2(1):.

Bluetooth low energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
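As background to the RSSI-based approach above, here is an illustrative sketch (not the authors' code) of a log-distance path-loss conversion from RSSI to range, followed by least-squares trilateration over three beacons; the beacon positions, TX power, and path-loss exponent are invented values.

```python
# Illustrative sketch: BLE RSSI -> distance via a log-distance path-loss model,
# then tag position from three beacons by least squares. All values are made up.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.2):
    """Log-distance path-loss model: d = 10 ** ((TxPower - RSSI) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

beacons = np.array([[0.0, 0.0], [8.0, 0.0], [4.0, 6.0]])   # known positions (m)
rssi = np.array([-68.0, -75.0, -71.0])                      # measured RSSI (dBm)
ranges = rssi_to_distance(rssi)

def residuals(p):
    # difference between modelled beacon distances and RSSI-derived ranges
    return np.linalg.norm(beacons - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([4.0, 2.0])).x
print("Estimated position (m):", estimate)
```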

RevDate: 2025-07-30
CmpDate: 2025-07-30

Kim MG, Kil BH, Ryu MH, et al (2025)

IoMT Architecture for Fully Automated Point-of-Care Molecular Diagnostic Device.

Sensors (Basel, Switzerland), 25(14): pii:s25144426.

The Internet of Medical Things (IoMT) is revolutionizing healthcare by integrating smart diagnostic devices with cloud computing and real-time data analytics. The emergence of infectious diseases, including COVID-19, underscores the need for rapid and decentralized diagnostics to facilitate early intervention. Traditional centralized laboratory testing introduces delays, limiting timely medical responses. While point-of-care molecular diagnostic (POC-MD) systems offer an alternative, challenges remain in cost, accessibility, and network inefficiencies. This study proposes an IoMT-based architecture for fully automated POC-MD devices, leveraging WebSockets for optimized communication, enhancing microfluidic cartridge efficiency, and integrating a hardware-based emulator for real-time validation. The system incorporates DNA extraction and real-time polymerase chain reaction functionalities into modular, networked components, improving flexibility and scalability. Although the system itself has not yet undergone clinical validation, it builds upon the core cartridge and detection architecture of a previously validated cartridge-based platform for Chlamydia trachomatis and Neisseria gonorrhoeae (CT/NG). These pathogens were selected due to their global prevalence, high asymptomatic transmission rates, and clinical importance in reproductive health. In a previous clinical study involving 510 patient specimens, the system demonstrated high concordance with a commercial assay with limits of detection below 10 copies/μL, supporting the feasibility of this architecture for point-of-care molecular diagnostics. By addressing existing limitations, this system establishes a new standard for next-generation diagnostics, ensuring rapid, reliable, and accessible disease detection.

RevDate: 2025-07-30

Dong J, Tian M, Yu J, et al (2025)

DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data.

Sensors (Basel, Switzerland), 25(14): pii:s25144279.

This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity in accordance with terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it highly adaptable to the growing trend of large-scale 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s using the original FPS method to approximately 71.64 s at a 12.5% sampling rate, achieving an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices, which makes DFPS an effective and lightweight tool for providing high-quality input data for subsequent algorithms, including point cloud registration and semantic segmentation.
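The chamfer distance used above as the downsampling quality metric can be computed in a few lines of NumPy/SciPy; the sketch below uses random points as a stand-in for real scans and an arbitrary 12.5% subsample in place of DFPS.

```python
# Minimal sketch of the chamfer distance between an original and a downsampled
# cloud: mean nearest-neighbour distance in both directions.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    d_ab, _ = cKDTree(b).query(a)   # each point in a -> nearest point in b
    d_ba, _ = cKDTree(a).query(b)   # each point in b -> nearest point in a
    return d_ab.mean() + d_ba.mean()

rng = np.random.default_rng(0)
cloud = rng.uniform(size=(100_000, 3))
keep = rng.choice(len(cloud), size=len(cloud) // 8, replace=False)  # 12.5% sample
print("Chamfer distance:", chamfer_distance(cloud, cloud[keep]))
```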

RevDate: 2025-07-30
CmpDate: 2025-07-30

Demieville J, Dilkes B, Eveland AL, et al (2025)

High-resolution phenomics dataset collected on a field-grown, EMS-mutagenized sorghum population evaluated in hot, arid conditions.

BMC research notes, 18(1):332.

OBJECTIVES: The University of Arizona Field Scanner (FS) is capable of generating massive amounts of data from a variety of instruments at high spatial and temporal resolution. The accompanying field infrastructure beneath the system offers capacity for controlled irrigation regimes in a hot, arid environment. Approximately 194 terabytes of raw and processed phenotypic image data were generated over two growing seasons (2020 and 2022) on a population of 434 sequence-indexed, EMS-mutagenized sorghum lines in the genetic background BTx623; the population was grown under well-watered and water-limited conditions. Collectively, these data enable links between genotype and dynamic, drought-responsive phenotypes, which can accelerate crop improvement efforts. However, analysis of these data can be challenging for researchers without background knowledge of the system and preliminary processing.

DATA DESCRIPTION: This dataset contains formatted tabular data generated from sensing system outputs suitable for a wide range of end-users and includes plant-level bounding areas, temperatures, and point cloud characteristics, as well as plot-level photosynthetic parameters and accompanying weather data. The dataset includes approximately 422 megabytes of tabular data totaling 1,903,412 unique unfiltered rows of FS data, 526,917 cleaned rows of FS data, and 285 rows of weather data from the two field seasons.

RevDate: 2025-07-29

Kaneko R, Akaishi S, Ogawa R, et al (2025)

Machine Learning-based Complementary Artificial Intelligence Model for Dermoscopic Diagnosis of Pigmented Skin Lesions in Resource-limited Settings.

Plastic and reconstructive surgery. Global open, 13(7):e7004.

BACKGROUND: Rapid advancements in big data and machine learning have expanded their application in healthcare, introducing sophisticated diagnostics to settings with limited medical resources. Notably, free artificial intelligence (AI) services that require no programming skills are now accessible to healthcare professionals, allowing those in underresourced areas to leverage AI technology. This study aimed to evaluate the potential of these accessible services for diagnosing pigmented skin tumors, underscoring the democratization of advanced medical technologies.

METHODS: In this experimental diagnostic study, we collected 400 dermoscopic images (100 per tumor type) labeled through supervised learning from pathologically confirmed cases. The images were split into training, validation, and testing datasets (8:1:1 ratio) and uploaded to Vertex AI for model training. Supervised learning was performed using the Google Cloud Platform, Vertex AI, based on pathological diagnoses. The model's performance was assessed using confusion matrices and precision-recall curves.

RESULTS: The AI model achieved an average recall rate of 86.3%, precision rate of 87.3%, accuracy of 86.3%, and F1 score of 0.87. Misclassification rates were less than 20% for each category. Accuracy was 80% for malignant melanoma and 100% for both basal cell carcinoma and seborrheic keratosis. Testing on separate cases yielded an accuracy of approximately 70%.

CONCLUSIONS: The metrics obtained in this study suggest that the model can reliably assist in the diagnostic process, even for practitioners without prior AI expertise. The study demonstrated that free AI tools can accurately classify pigmented skin lesions with minimal expertise, potentially providing high-precision diagnostic support in settings lacking dermatologists.

RevDate: 2025-07-29

Zhao M, H Chen (2025)

Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing.

Entropy (Basel, Switzerland), 27(7):.

Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) within the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and lattice dimension of n = 128, the computation times for core algorithms such as TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency.

RevDate: 2025-07-29

Robertson R, Doucet E, Spicer E, et al (2025)

Simon's Algorithm in the NISQ Cloud.

Entropy (Basel, Switzerland), 27(7):.

Simon's algorithm was one of the first to demonstrate a genuine quantum advantage in solving a problem. The algorithm, however, assumes access to fault-tolerant qubits. In our work, we use Simon's algorithm to benchmark the error rates of devices currently available in the "quantum cloud". As a main result, we objectively compare the different physical platforms made available by IBM and IonQ. Our study highlights the importance of understanding the device architectures and topologies when transpiling quantum algorithms onto hardware. For instance, we demonstrate that two-qubit operations on spatially separated qubits on superconducting chips should be avoided.
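For orientation, the sketch below builds a small Simon's-algorithm circuit for a 2-bit secret s = 11 and runs it on a local simulator; on the quantum cloud the simulator would be swapped for an IBM or IonQ backend, and the oracle shown is one standard construction rather than the one used in the paper.

```python
# Illustrative Simon's-algorithm circuit for secret s = 11 (n = 2), run locally.
# Assumes qiskit and qiskit-aer are installed.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

n = 2
qc = QuantumCircuit(2 * n, n)
qc.h(range(n))                 # superposition over inputs x
qc.cx(0, 2); qc.cx(1, 3)       # copy x into the output register
qc.cx(0, 2); qc.cx(0, 3)       # XOR s = 11 when x0 = 1, so f(x) = f(x xor s)
qc.h(range(n))
qc.measure(range(n), range(n))

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)  # every observed bitstring z satisfies z . s = 0 (mod 2)
```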

RevDate: 2025-07-29

Xue K, Jin X, Y Li (2025)

Exploring the Influence of Human-Computer Interaction Experience on Tourist Loyalty in the Context of Smart Tourism: A Case Study of Suzhou Museum.

Behavioral sciences (Basel, Switzerland), 15(7): pii:bs15070949.

As digital technology evolves rapidly, smart tourism has become a significant trend in the modernization of the industry, relying on advanced tools like big data and cloud computing to improve travelers' experiences. Despite the growing use of human-computer interaction in museums, there remains a lack of in-depth academic investigation into its impact on visitors' behavioral intentions regarding museum engagement. This paper employs Cognitive Appraisal Theory, considers human-computer interaction experience as the independent variable, and introduces destination image and satisfaction as mediators to examine their impact on destination loyalty. Based on a survey of 537 participants, the research shows that human-computer interaction experience has a significant positive impact on destination image, satisfaction, and loyalty. Destination image and satisfaction play a partial and sequential mediating role in this relationship. This paper explores the influence mechanism of human-computer interaction experience on destination loyalty and proposes practical interactive solutions for museums, aiming to offer insights for smart tourism research and practice.

RevDate: 2025-07-27

Chen R, Lin M, Chen J, et al (2025)

Reproducibility Assessment of Magnetic Resonance Spectroscopy of Pregenual Anterior Cingulate Cortex across Sessions and Vendors via the Cloud Computing Platform CloudBrain-MRS.

NeuroImage pii:S1053-8119(25)00403-3 [Epub ahead of print].

Proton magnetic resonance spectroscopy (¹H-MRS) has potential in clinical diagnosis and understanding the mechanism of illnesses. However, its application is limited by the lack of standardization in data acquisition and processing across time points and between different magnetic resonance imaging (MRI) system vendors. This study examines whether metabolite concentrations obtained from different sessions, scanner models, and vendors can be reliably reproduced and combined for diagnostic analysis, an important consideration for rare disease research. Participants underwent magnetic resonance scanning once on two separate days within one week (one session per day, each including two ¹H-MRS scans without subject movement) on each machine. Absolute metabolite concentrations were analyzed for within- and between-session reliability using the coefficient of variation (CV), intraclass correlation coefficient (ICC), and Bland-Altman (BA) plots, and for reproducibility across the machines using the Pearson correlation coefficient. For within- and between-session comparisons, most of the CV values, whether computed over all first or second scans of a session or within each session, were below 20%, and most ICCs ranged from moderate (0.4≤ICC<0.59) to excellent (ICC≥0.75), indicating high reliability. Most of the BA plots had the line of equality within the 95% confidence interval of the bias (mean difference), so differences over scanning time could be considered negligible. The majority of the Pearson correlation coefficients approached 1 with statistical significance (P<0.001), showing high reproducibility across the three scanners. Additionally, intra-vendor reproducibility was greater than inter-vendor reproducibility.
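Two of the reproducibility statistics referenced above, the coefficient of variation and the Pearson correlation, are straightforward to compute; the sketch below uses invented concentration values purely for illustration.

```python
# Simple sketch of within-session CV and inter-vendor Pearson correlation.
# The metabolite concentrations below are invented example data.
import numpy as np
from scipy.stats import pearsonr

# one subject, 4 repeated scans on one scanner
repeats = np.array([8.1, 8.4, 7.9, 8.2])
cv_percent = 100 * repeats.std(ddof=1) / repeats.mean()
print(f"within-session CV: {cv_percent:.1f}%")   # values below 20% read as reliable

# same subjects measured on two scanners (vendor A vs vendor B)
vendor_a = np.array([8.2, 9.1, 7.5, 8.8, 10.2, 6.9])
vendor_b = np.array([8.0, 9.3, 7.7, 8.6, 10.0, 7.1])
r, p = pearsonr(vendor_a, vendor_b)
print(f"inter-vendor Pearson r = {r:.3f} (p = {p:.3g})")
```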

RevDate: 2025-07-28

He J, Ye Q, Yang Z, et al (2025)

A compact public key encryption with equality test for lattice in cloud computing.

Scientific reports, 15(1):27426 pii:10.1038/s41598-025-12018-2.

The rapid proliferation of cloud computing enables users to access computing resources and storage space over the internet, but it also presents challenges in terms of security and privacy. Ensuring the security and availability of data has become a focal point of current research when utilizing cloud computing for resource sharing, data storage, and querying. Public key encryption with equality test (PKEET) can perform an equality test on ciphertexts without decrypting them, even when those ciphertexts are encrypted under different public keys. That offers a practical approach to dividing up or searching for encrypted information directly. In order to deal with the threat raised by the rapid development of quantum computing, researchers have proposed post-quantum cryptography to guarantee the security of cloud services. However, it is challenging to implement these techniques efficiently. In this paper, a compact PKEET scheme is proposed. The new scheme does not encrypt the plaintext's hash value immediately but embeds it into the test trapdoor. We also demonstrated that our new construction is one-way secure under the quantum security model. With those efforts, our scheme can withstand the chosen ciphertext attacks as long as the learning with errors (LWE) assumption holds. Furthermore, we evaluated the new scheme's performance and found that it only costs approximately half the storage space compared with previous schemes. There is an almost half reduction in the computing cost throughout the encryption and decryption stages. In a nutshell, the new PKEET scheme is less costly, more compact, and applicable to cloud computing scenarios in a post-quantum environment.

RevDate: 2025-07-27

Christy C, Nirmala A, Teena AMO, et al (2025)

Machine learning based multi-stage intrusion detection system and feature selection ensemble security in cloud assisted vehicular ad hoc networks.

Scientific reports, 15(1):27058.

The development of intelligent transportation systems relies heavily on Cloud-assisted Vehicular Ad Hoc Networks (VANETs); hence, these networks must be protected. VANETs are particularly susceptible to a broad range of attacks because of their extreme dynamism and decentralization. Connected vehicles' safety and efficiency could be compromised if these security threats materialize, leading to disastrous road accidents. Solving these issues will require an advanced Intrusion Detection System (IDS) with real-time threat recognition and neutralization capabilities. A new method for improving VANET security, a multi-stage Lightweight Intrusion Detection System Using Random Forest Algorithms (MLIDS-RFA), focuses on feature selection and ensemble models based on machine learning (ML). A multi-step approach is employed by the proposed system, with each stage dedicated to accurately detecting specific types of attacks. Regarding feature selection, MLIDS-RFA uses machine-learning approaches to enhance the detection process. The outcome is a reduction in the amount of processing overhead and a shortening of the response times. The detection abilities of ensemble models are enhanced by integrating the strengths of the Random Forest algorithm (RFA), which safeguards against intricate dangers. The practicality of the proposed technology is demonstrated by conducting thorough simulation analyses. This research demonstrates that the system can reduce false positives while maintaining high detection rates. This research ensures next-generation transport networks' secure and reliable functioning and prepares the path for VANET protection upgrades. MLIDS-RFA has improved detection accuracy (96.2%) and computing efficiency (94.8%) for dynamic VANET management. It operates well with large networks (97.8%) and adapts well to network changes (93.8%). The comprehensive methodology ensures high detection performance (95.9%) and VANET security by balancing accuracy, efficiency, and scalability.
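A hedged sketch of the general recipe described above, feature selection followed by a Random Forest classifier, is given below; the synthetic data stand in for VANET traffic features, and this is not the MLIDS-RFA implementation.

```python
# Sketch: feature selection + Random Forest intrusion classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ids = Pipeline([
    # keep only the features a small forest finds informative
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=100,
                                                      random_state=0))),
    # final ensemble classifier
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
ids.fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, ids.predict(X_te)))
```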

RevDate: 2025-07-27

Punitha S, KS Preetha (2025)

Enhancing reliability and security in cloud-based telesurgery systems leveraging swarm-evoked distributed federated learning framework to mitigate multiple attacks.

Scientific reports, 15(1):27226.

Advances in robotic surgery are being driven by the convergence of technologies such as artificial intelligence (AI), 5G/6G wireless communication, the Internet of Things (IoT), and edge computing, enhancing clinical precision, speed, and real-time decision-making. However, the practical deployment of telesurgery and tele-mentoring remains constrained due to increasing cybersecurity threats, posing significant challenges to patient safety and system reliability. To address these issues, a distributed framework based on federated learning is proposed, integrating Optimized Gated Transformer Networks (OGTN) with layered chaotic encryption schemes to mitigate multiple unknown cyberattacks while preserving data privacy and integrity. The framework was implemented using TensorFlow Federated Learning Libraries (FLL) and evaluated on the UNSW-NB15 dataset. Performance was assessed using metrics including precision, accuracy, F1-score, recall, and security strength, and compared with existing approaches. In addition, structured and unstructured security assessments, including evaluations based on National Institute of Standards and Technology (NIST) recommendations, were performed to validate robustness. The proposed framework demonstrated superior performance in terms of diagnostic accuracy and cybersecurity resilience relative to conventional models. These results suggest that the framework is a viable candidate for integration into teleoperated healthcare systems, offering improved security and operational efficiency in robotic surgery applications.

RevDate: 2025-07-23

Baker J, Stricker E, Coleman J, et al (2025)

Implementing a training resource for large-scale genomic data analysis in the All of Us Researcher Workbench.

American journal of human genetics pii:S0002-9297(25)00270-8 [Epub ahead of print].

A lack of representation in genomic research and limited access to computational training create barriers for many researchers seeking to analyze large-scale genetic datasets. The All of Us Research Program provides an unprecedented opportunity to address these gaps by offering genomic data from a broad range of participants, but its impact depends on equipping researchers with the necessary skills to use it effectively. The All of Us Biomedical Researcher (BR) Scholars Program at Baylor College of Medicine aims to break down these barriers by providing early-career researchers with hands-on training in computational genomics through the All of Us Evenings with Genetics Research Program. The year-long program begins with the faculty summit, an in-person computational boot camp that introduces scholars to foundational skills for using the All of Us dataset via a cloud-based research environment. The genomics tutorials focus on genome-wide association studies (GWASs), utilizing Jupyter Notebooks and the Hail computing framework to provide an accessible and scalable approach to large-scale data analysis. Scholars engage in hands-on exercises covering data preparation, quality control, association testing, and result interpretation. By the end of the summit, participants will have successfully conducted a GWAS, visualized key findings, and gained confidence in computational resource management. This initiative expands access to genomic research by equipping early-career researchers from a variety of backgrounds with the tools and knowledge to analyze All of Us data. By lowering barriers to entry and promoting the study of representative populations, the program fosters innovation in precision medicine and advances equity in genomic research.
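A minimal sketch of the kind of Hail-based GWAS taught in these tutorials follows; the file paths and the phenotype and covariate field names are hypothetical, and within All of Us the data would be accessed through the Researcher Workbench rather than arbitrary GCS paths.

```python
# Hedged sketch of a Hail GWAS: QC, then linear regression per variant.
import hail as hl

hl.init()

mt = hl.read_matrix_table("gs://my-bucket/genotypes.mt")        # placeholder path
pheno = hl.import_table("gs://my-bucket/phenotypes.tsv",
                        key="sample_id", impute=True)
mt = mt.annotate_cols(pheno=pheno[mt.s])

# basic quality control before association testing
mt = hl.variant_qc(mt)
mt = mt.filter_rows((mt.variant_qc.AF[1] > 0.01) &
                    (mt.variant_qc.call_rate > 0.95))

gwas = hl.linear_regression_rows(
    y=mt.pheno.height,                     # hypothetical phenotype field
    x=mt.GT.n_alt_alleles(),
    covariates=[1.0, mt.pheno.age, mt.pheno.is_female],
)
gwas.order_by(gwas.p_value).show(10)       # top associated variants
```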

RevDate: 2025-07-22

Dal I, HB Kaya (2025)

Multidisciplinary Evaluation of an AI-Based Pneumothorax Detection Model: Clinical Comparison with Physicians in Edge and Cloud Environments.

Journal of multidisciplinary healthcare, 18:4099-4111.

BACKGROUND: Accurate and timely detection of pneumothorax on chest radiographs is critical in emergency and critical care settings. While subtle cases remain challenging for clinicians, artificial intelligence (AI) offers promise as a diagnostic aid. This retrospective diagnostic accuracy study evaluates a deep learning model developed using Google Cloud Vertex AI for pneumothorax detection on chest X-rays.

METHODS: A total of 152 anonymized frontal chest radiographs (76 pneumothorax, 76 normal), confirmed by computed tomography (CT), were collected from a single center between 2023 and 2024. The median patient age was 50 years (range: 18-95), with 67.1% male. The AI model was trained using AutoML Vision and evaluated in both cloud and edge deployment environments. Diagnostic accuracy metrics-including sensitivity, specificity, and F1 score-were compared with those of 15 physicians from four specialties (general practice, emergency medicine, thoracic surgery, radiology), stratified by experience level. Subgroup analysis focused on minimal pneumothorax cases. Confidence intervals were calculated using the Wilson method.

RESULTS: In cloud deployment, the AI model achieved an overall diagnostic accuracy of 0.95 (95% CI: 0.83, 0.99), sensitivity of 1.00 (95% CI: 0.83, 1.00), specificity of 0.89 (95% CI: 0.69, 0.97), and F1 score of 0.95 (95% CI: 0.86, 1.00). Comparable performance was observed in edge mode. The model outperformed junior clinicians and matched or exceeded senior physicians, particularly in detecting minimal pneumothoraces, where AI sensitivity reached 0.93 (95% CI: 0.79, 0.97) compared to 0.55 (95% CI: 0.38, 0.69) - 0.84 (95% CI: 0.69, 0.92) among human readers.

CONCLUSION: The Google Cloud Vertex AI model demonstrates high diagnostic performance for pneumothorax detection, including subtle cases. Its consistent accuracy across edge and cloud settings supports its integration as a second reader or triage tool in diverse clinical workflows, especially in acute care or resource-limited environments.
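The Wilson intervals reported above can be reproduced with statsmodels; in the sketch below the counts are illustrative placeholders rather than the study's exact tallies.

```python
# Wilson 95% confidence intervals for sensitivity and specificity (illustrative counts).
from statsmodels.stats.proportion import proportion_confint

sens_lo, sens_hi = proportion_confint(count=76, nobs=76, method="wilson")  # all positives found
spec_lo, spec_hi = proportion_confint(count=68, nobs=76, method="wilson")  # assumed true negatives
print(f"sensitivity 95% CI: ({sens_lo:.2f}, {sens_hi:.2f})")
print(f"specificity 95% CI: ({spec_lo:.2f}, {spec_hi:.2f})")
```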

RevDate: 2025-07-21

Onur D, Ç Özbakır (2025)

Pediatrics 4.0: the Transformative Impacts of the Latest Industrial Revolution on Pediatrics.

Health care analysis : HCA : journal of health philosophy and policy [Epub ahead of print].

Industry 4.0 represents the latest phase of industrial evolution, characterized by the seamless integration of cyber-physical systems, the Internet of Things, big data analytics, artificial intelligence, advanced robotics, and cloud computing, enabling smart, adaptive, and interconnected processes where physical, digital, and biological realms converge. In parallel, healthcare has progressed from the traditional, physician-centered model of Healthcare 1.0 by introducing medical devices and digitized records to Healthcare 4.0, which leverages Industry 4.0 technologies to create personalized, data-driven, and patient-centric systems. In this context, we hereby introduce Pediatrics 4.0 as a new paradigm that adapts these innovations to children's unique developmental, physiological, and ethical considerations and aims to improve diagnostic precision, treatment personalization, and continuous monitoring in pediatric populations. Key applications include AI-driven diagnostic and predictive analytics, IoT-enabled remote monitoring, big data-powered epidemiological insights, robotic assistance in surgery and rehabilitation, and 3D printing for patient-specific devices and pharmaceuticals. However, realizing Pediatrics 4.0 requires addressing significant challenges-data privacy and security, algorithmic bias, interoperability and standardization, equitable access, regulatory alignment, the ethical complexities of consent, and long-term technology exposure. Future research should focus on explainable AI, pediatric-specific device design, robust data governance frameworks, dynamic ethical and legal guidelines, interdisciplinary collaboration, and workforce training to ensure these transformative technologies translate into safer, more effective, and more equitable child healthcare.

RevDate: 2025-07-21

Parashar B, Malviya R, Sridhar SB, et al (2025)

IoT-enabled medical advances shaping the future of orthopaedic surgery and rehabilitation.

Journal of clinical orthopaedics and trauma, 68:103113.

The Internet of Things (IoT) connects smart devices to enable automation and data exchange. IoT is rapidly transforming the healthcare industry. Understanding of the framework and challenges of IoT is essential for effective implementation. This review explores the advances in IoT technology in orthopaedic surgery and rehabilitation. A comprehensive literature search was conducted by the author using databases such as PubMed, Scopus, and Google Scholar. Relevant peer-reviewed articles published between 2010 and 2024 were preferred based on their focus on IoT applications in orthopaedic surgery, rehabilitation, and assistive technologies. Keywords including "Internet of Things," "orthopaedic rehabilitation," "wearable sensors," and "smart health monitoring" were used. Studies were analysed to identify current trends, clinical relevance, and future opportunities in IoT-driven orthopaedic care. The reviewed studies demonstrate that IoT technologies, such as wearable motion sensors, smart implants, real-time rehabilitation platforms, and AI-powered analytics, have significantly improved orthopaedic surgical outcomes and patient recovery. These systems enable continuous monitoring, early complication detection, and adaptive rehabilitation. However, challenges persist in data security, device interoperability, user compliance, and standardisation across platforms. IoT holds great promise in enhancing orthopaedic surgery and rehabilitation by enabling real-time monitoring and personalised care. Moving forward, clinical validation, user-friendly designs, and strong data security will be key to its successful integration in routine practice.

RevDate: 2025-07-21

Gomase VS, Ghatule AP, Sharma R, et al (2025)

Cloud Computing Facilitating Data Storage, Collaboration, and Analysis in Global Healthcare Clinical Trials.

Reviews on recent clinical trials pii:RRCT-EPUB-149483 [Epub ahead of print].

INTRODUCTION: Healthcare data management, especially in the context of clinical trials, has been completely transformed by cloud computing. It makes it easier to store data, collaborate in real time, and perform advanced analytics across international research networks by providing scalable, secure, and affordable solutions. This paper explores how cloud computing is revolutionizing clinical trials, tackling issues including data integration, accessibility, and regulatory compliance.

MATERIALS AND METHODS: Key factors assessed include cloud platform-enabled analytical tools, collaborative features, and data storage capacity. To ensure the safe management of sensitive healthcare data, adherence to laws like GDPR and HIPAA was emphasized.

RESULTS: Real-time updates and integration of multicenter trial data were made possible by cloud systems, which also showed notable gains in collaborative workflows and data sharing. High scalability storage options reduced infrastructure expenses while upholding security requirements. Rapid interpretation of complicated datasets was made possible by sophisticated analytical tools driven by machine learning and artificial intelligence, which expedited decision-making. Improved patient recruitment tactics and flexible trial designs are noteworthy examples.

CONCLUSION: Cloud computing has become essential for international clinical trials because it provides unmatched efficiency in data analysis, communication, and storage. It is a pillar of contemporary healthcare research due to its capacity to guarantee data security and regulatory compliance as well as its creative analytical capabilities. Subsequent research ought to concentrate on further refining cloud solutions to tackle new issues and utilizing their complete capabilities in clinical trial administration.

RevDate: 2025-07-20

Yang X, Yao K, Li S, et al (2025)

A smart grid data sharing scheme supporting policy update and traceability.

Scientific reports, 15(1):26343 pii:10.1038/s41598-025-10704-9.

To address the problems of centralized attribute authority, inefficient encryption and invalid access control strategy in the data sharing scheme based on attribute-based encryption technology, a smart grid data sharing scheme that supports policy update and traceability is proposed. The smart contract of the blockchain is used to generate the user's key, which does not require a centralized attribute authority. Combined with attribute-based encryption and symmetric encryption technology, the confidentiality of smart grid data is protected and flexible data access control is achieved. In addition, online/offline encryption and outsourced computing technologies complete most of the computing tasks in the offline stage or cloud server, which greatly reduces the computing burden of data owners and data access users. By introducing the access control policy update mechanism, the data owner can flexibly modify the key ciphertext stored in the cloud server. Finally, the analysis results show that this scheme can protect the privacy of smart grid data, verify the integrity of smart grid data, resist collusion attacks and track the identity of malicious users who leak private keys, and its efficiency is better than similar data sharing schemes.

RevDate: 2025-07-20

Yin X, Zhang X, Pei L, et al (2025)

Optimization and benefit evaluation model of a cloud computing-based platform for power enterprises.

Scientific reports, 15(1):26366.

To address the challenges associated with the digital transformation of the power industry, this research develops an optimization and benefit evaluation model for cloud computing platforms tailored to power enterprises. It responds to the current lack of systematic optimization mechanisms and evaluation methods in existing cloud computing applications. The proposed model focuses on resource scheduling optimization, task load balancing, and improvements in computational efficiency. A multidimensional optimization framework is constructed, integrating key parameters such as path planning, condition coefficient computation, and the regulation of task and average loads. The model employs an improved lightweight genetic algorithm combined with an elastic resource allocation strategy to dynamically adapt to task changes across various operational scenarios. Experimental results indicate a 46% reduction in failure recovery time, a 78% improvement in high-load throughput capacity, and an average increase of nearly 60% in resource utilization. Compared with traditional on-premise architectures and static scheduling models, the proposed approach offers notable advantages in computational response time and fault tolerance. In addition, through containerized deployment and intelligent orchestration, it achieves a 43% reduction in monthly operating costs. A multi-level benefit evaluation system-spanning power generation, grid operations, and end-user services-is established, integrating historical data, expert weighting, and dynamic optimization algorithms to enable quantitative performance assessment and decision support. In contrast to existing studies that mainly address isolated functional modules such as equipment health monitoring or collaborative design, this research presents a novel paradigm characterized by architectural integration, methodological versatility, and industrial applicability. It thus addresses the empirical gap in multi-objective optimization for industrial-scale power systems. The theoretical contribution of this research lies in the establishment of a highly scalable and integrated framework for optimization and evaluation. Its practical significance is reflected in the notable improvements in operational efficiency and cost control in real-world applications. The proposed model provides a clear trajectory and quantitative foundation for promoting an efficient and intelligent cloud computing ecosystem in the power sector.

RevDate: 2025-07-18

Cao J, Yu Z, Zhu B, et al (2025)

Construction and efficiency analysis of an embedded system-based verification platform for edge computing.

Scientific reports, 15(1):26114.

With the profound convergence and advancement of the Internet of Things, big data analytics, and artificial intelligence technologies, edge computing-a novel computing paradigm-has garnered significant attention. While edge computing simulation platforms offer convenience for simulations and tests, the disparity between them and real-world environments remains a notable concern. These platforms often struggle to precisely mimic the interactive behaviors and physical attributes of actual devices. Moreover, they face constraints in real-time responsiveness and scalability, thus limiting their ability to truly reflect practical application scenarios. To address these obstacles, our study introduces an innovative physical verification platform for edge computing, grounded in embedded devices. This platform seamlessly integrates KubeEdge and Serverless technological frameworks, facilitating dynamic resource allocation and efficient utilization. Additionally, by leveraging the robust infrastructure and cloud services provided by Alibaba Cloud, we have significantly bolstered the system's stability and scalability. To ensure a comprehensive assessment of our architecture's performance, we have established a realistic edge computing testing environment, utilizing embedded devices like Raspberry Pi. Through rigorous experimental validations involving offloading strategies, we have observed impressive outcomes. The refined offloading approach exhibits outstanding results in critical metrics, including latency, energy consumption, and load balancing. This not only underscores the soundness and reliability of our platform design but also illustrates its versatility for deployment in a broad spectrum of application contexts.

RevDate: 2025-07-18

C BS, St B, S S (2025)

Achieving cloud resource optimization with trust-based access control: A novel ML strategy for enhanced performance.

MethodsX, 15:103461.

Cloud computing continues to rise, increasing the demand for more intelligent, rapid, and secure resource management. This paper presents AdaPCA, a novel method that integrates the adaptive capabilities of AdaBoost with the dimensionality-reduction efficacy of PCA. What is the objective? Enhance trust-based access control and resource allocation decisions while maintaining a minimal computational burden. High-dimensional trust data frequently hampers systems; however, AdaPCA mitigates this issue by identifying essential aspects and enhancing learning efficacy concurrently. To evaluate its performance, we conducted a series of simulations comparing it with established methods such as Decision Trees, Random Forests, and Gradient Boosting. We assessed execution time, resource use, latency, and trust accuracy. Results show that AdaPCA achieved a trust score prediction accuracy of 99.8%, a resource utilization efficiency of 95%, and reduced allocation time to 140 ms, outperforming the benchmark models across all evaluated parameters. AdaPCA had superior performance overall: expedited decision-making, optimized resource utilization, reduced latency, and the highest accuracy in trust evaluation among the evaluated models. AdaPCA is not merely another model; it represents a significant advancement towards more intelligent and safe cloud systems designed for the future.
• Introduces AdaPCA, a novel hybrid approach that integrates AdaBoost with PCA to optimize cloud resource allocation and improve trust-based access control.
• Outperforms conventional techniques such as Decision Tree, Random Forest, and Gradient Boosting by attaining superior trust accuracy, expedited execution, enhanced resource utilization, and reduced latency.
• Presents an intelligent, scalable, and adaptable architecture for secure and efficient management of cloud resources, substantiated by extensive simulation experiments.
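A hedged sketch of the AdaBoost-plus-PCA idea, not the authors' AdaPCA code, can be assembled from scikit-learn components:

```python
# Sketch: PCA dimensionality reduction feeding an AdaBoost trust classifier.
# Synthetic data stand in for high-dimensional trust features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=60, n_informative=15,
                           random_state=42)

ada_pca = make_pipeline(
    StandardScaler(),
    PCA(n_components=15),                         # keep the principal trust features
    AdaBoostClassifier(n_estimators=200, random_state=42),
)
scores = cross_val_score(ada_pca, X, y, cv=5)
print("trust-classification accuracy:", scores.mean())
```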

RevDate: 2025-07-18
CmpDate: 2025-07-18

Zhao N, Wang B, Wang ZH, et al (2025)

[Spatiotemporal Evolution of Ecological Environment Quality and Ecological Management Zoning in Inner Mongolia Based on RSEI].

Huan jing ke xue= Huanjing kexue, 46(7):4499-4509.

Inner Mongolia serves as a crucial ecological security barrier for northern China. Examining the spatial and temporal evolution of ecological environment quality, along with the zoning for ecological management, is crucial for enhancing the management and development of ecological environments. Based on the Google Earth Engine cloud platform, four indicators-heat, greenness, dryness, and wetness-were extracted from MODIS remote sensing image data spanning 2000 to 2023. The remote sensing ecological index (RSEI) model was constructed using principal component analysis. By combining the coefficient of variation (CV), Sen + Mann-Kendall, and Hurst indices, the spatial and temporal variations and future trends of ecological environmental quality of the Inner Mongolia were analyzed. The influencing mechanisms were explored using a geographical detector, and the quadrant method was employed for ecological management zoning based on the intensity of human activities and the quality of the ecological environment. The results indicated that: ① The ecological environment quality of Inner Mongolia from 2000 to 2023 was mainly characterized as poor to average, with a spatial trend of decreasing quality from east to west. From 2000 to 2005, Inner Mongolia experienced environmental degradation, followed by a gradual improvement in ecological environment quality. ② Inner Mongolia exhibited the largest area of non-significantly improved and non-significantly degraded regions, and the overall environmental quality was more stable. However, ecosystems in the western region were more fragile and prone to fluctuations. The area of sustained degradation versus sustained improvement in the future trend of change was larger, and the western region is expected to be the main area of improvement in the future. ③ The results of single-factor detection showed that the influences on RSEI values were, in descending order, precipitation, soil type, land use type, air temperature, vegetation type, elevation, population density, GDP, and nighttime lighting; the interactions among driving factors on RSEI changes showed a bivariate or nonlinear enhancement, which suggests that the interactions of each driving factor could improve the explanatory power of spatial variations in ecological environment quality. ④ Based on the coupling of human activity intensity and ecological environment quality, the 12 league cities of Inner Mongolia were divided into ecological development coordination zones, ecological development reserves, and ecological development risk zones. This study can provide a scientific basis for ecological environmental protection and sustainable development in Inner Mongolia.

RevDate: 2025-07-18

Yang M, Liu EQ, Yang Y, et al (2025)

[Quantitative Analysis of Wetland Evolution Characteristics and Driving Factors in Ruoergai Plateau Based on Landsat Time Series Remote Sensing Images].

Huan jing ke xue= Huanjing kexue, 46(7):4461-4472.

The Ruoergai Wetland, China's largest high-altitude marsh, plays a crucial role in the carbon cycle and climate regulation. However, the Ruoergai Wetland has experienced significant damage as a result of human activity and global warming. Based on the Google Earth Engine (GEE) cloud platform and time-series Landsat images, a random forest algorithm was applied to produce a detailed classification map of the Ruoergai wetlands from 1990 to 2020. Through the transfer matrix and landscape pattern indices, the spatiotemporal patterns and trends of wetland change were analyzed. Then, the influencing factors of wetland distribution were quantitatively analyzed using a geographical detector. The results showed that: ① The total wetland area averaged 3,910 km² from 1990 to 2020, dominated by marshy and wet meadows, accounting for 83.13% of the total wetland area. From 1990 to 2010, the wetland area of Ruoergai showed a decreasing trend, and from 2010 to 2020, the wetland area increased slightly. ② From 1990 to 2020, the decrease in wetland area was mainly reflected in the degradation of wet meadows into alpine grassland. There were also changes among different wetland types, which were mainly reflected in the conversion of marsh meadows and wet meadows. ③ From 1990 to 2010, the wetland landscape tended to be fragmented and complicated, and the aggregation degree decreased. From 2010 to 2020, wetland fragmentation decreased, and the wetland landscape became more concentrated. ④ Slope, temperature, and aspect were the main natural factors affecting wetland distribution. At the same time, population density has gradually become a significant social and economic factor affecting wetland distribution. The results can provide scientific support for the wetland protection planning of Ruoergai and support the ecological preservation and high-quality development of the area.
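A hedged sketch of this kind of GEE classification workflow, written against the Earth Engine Python API, is shown below; the asset IDs, band selection, and class property are placeholders, and authentication is assumed to be configured.

```python
# Sketch: median Landsat composite sampled at training polygons, classified
# with a random forest in Google Earth Engine. All asset names are placeholders.
import ee

ee.Initialize()  # assumes earthengine authentication is already set up

composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
             .filterDate("2020-05-01", "2020-09-30")
             .filterBounds(ee.Geometry.Point(102.5, 33.5))
             .median())

bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
training_points = ee.FeatureCollection("users/example/ruoergai_training")  # placeholder

samples = composite.select(bands).sampleRegions(
    collection=training_points, properties=["landcover"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=samples, classProperty="landcover", inputProperties=bands)

classified = composite.select(bands).classify(classifier)
```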

RevDate: 2025-07-15

Narasimha Raju AS, Venkatesh K, Rajababu M, et al (2025)

Colorectal cancer unmasked: A synergistic AI framework for Hyper-granular image dissection, precision segmentation, and automated diagnosis.

BMC medical imaging, 25(1):283.

Colorectal cancer (CRC) is the second most common cause of cancer-related mortality worldwide, underscoring the necessity for computer-aided diagnosis (CADx) systems that are interpretable, accurate, and robust. This study presents a practical CADx system that combines Vision Transformers (ViTs) and DeepLabV3+ to accurately identify and segment colorectal lesions in colonoscopy images. The system addresses class balance and real-world complexity with PCA-based dimensionality reduction, data augmentation, and strategic preprocessing using the recently curated CKHK-22 dataset, comprising more than 14,000 annotated images from CVC-ClinicDB, Kvasir-2, and Hyper-Kvasir. ViT, ResNet-50, DenseNet-201, and VGG-16 were used to quantify classification performance. ViT achieved best-in-class accuracy (97%), F1-score (0.95), and AUC (92%) on test data. DeepLabV3+ achieved state-of-the-art segmentation for localisation tasks, with a 0.88 Dice coefficient and 0.71 Intersection over Union (IoU), ensuring sharp delineation of malignant areas. The CADx system supports real-time inference and is served through Google Cloud, enabling scalable clinical implementation. Image-level segmentation effectiveness is evidenced by comparing visual overlays with expert manually delineated masks, and precision is quantified by precision, recall, F1-score, and AUC. The hybrid strategy not only outperforms traditional CNN approaches but also addresses important clinical needs such as early detection, handling of highly imbalanced classes, and clear explanation. The proposed ViT-DeepLabV3+ system establishes a basis for advanced AI support in colorectal diagnosis by utilizing self-attention and multi-scale contextual learning. The system offers a high-capacity, reproducible, computerised colorectal cancer screening and monitoring solution that is well suited to clinical deployment, particularly where resources are scarce.
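For readers who want a concrete starting point, the sketch below instantiates the two model families named above from torchvision; note that torchvision ships DeepLabV3 rather than the DeepLabV3+ variant used in the paper, and the class counts and input sizes are placeholders.

```python
# Sketch: ViT classifier head + DeepLabV3 segmenter from torchvision.
import torch
from torchvision.models import vit_b_16
from torchvision.models.segmentation import deeplabv3_resnet50

# 3-way lesion classifier head on a ViT-B/16 backbone
classifier = vit_b_16(weights=None)
classifier.heads.head = torch.nn.Linear(classifier.heads.head.in_features, 3)

# binary (lesion vs background) segmentation network
segmenter = deeplabv3_resnet50(weights=None, num_classes=2)

frames = torch.randn(2, 3, 224, 224)          # stand-in colonoscopy tiles
logits = classifier(frames)                   # (2, 3) class scores
masks = segmenter(frames)["out"]              # (2, 2, 224, 224) per-pixel logits
print(logits.shape, masks.shape)
```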

RevDate: 2025-07-15

Islam U, Alatawi MN, Alqazzaz A, et al (2025)

A hybrid fog-edge computing architecture for real-time health monitoring in IoMT systems with optimized latency and threat resilience.

Scientific reports, 15(1):25655 pii:10.1038/s41598-025-09696-3.

The advancement of the Internet of Medical Things (IoMT) has transformed healthcare delivery by enabling real-time health monitoring. However, it introduces critical challenges related to latency and, more importantly, the secure handling of sensitive patient data. Traditional cloud-based architectures often struggle with latency and data protection, making them inefficient for real-time healthcare scenarios. To address these challenges, we propose a Hybrid Fog-Edge Computing Architecture tailored for effective real-time health monitoring in IoMT systems. Fog computing enables processing of time-critical data closer to the data source, reducing response time and relieving cloud system overload. Simultaneously, edge computing nodes handle data preprocessing and transmit only valuable information-defined as abnormal or high-risk health signals such as irregular heart rate or oxygen levels-using rule-based filtering, statistical thresholds, and lightweight machine learning models like Decision Trees and One-Class SVMs. This selective transmission optimizes bandwidth without compromising response quality. The architecture integrates robust security measures, including end-to-end encryption and distributed authentication, to counter rising data breaches and unauthorized access in IoMT networks. Real-life case scenarios and simulations are used to validate the model, evaluating latency reduction, data consolidation, and scalability. Results demonstrate that the proposed architecture significantly outperforms cloud-only models, with a 70% latency reduction, 30% improvement in energy efficiency, and 60% bandwidth savings. Additionally, the time required for threat detection was halved, ensuring faster response to security incidents. This framework offers a flexible, secure, and efficient solution ideal for time-sensitive healthcare applications such as remote patient monitoring and emergency response systems.
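The edge-side filtering idea, forwarding only abnormal or high-risk readings, can be illustrated with a One-Class SVM trained on normal vital signs; the thresholds and data below are invented, and this is not the authors' pipeline.

```python
# Sketch: edge-side anomaly filter that forwards only flagged readings upstream.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# columns: heart rate (bpm), SpO2 (%)
normal = np.column_stack([rng.normal(75, 8, 500), rng.normal(97, 1.0, 500)])
detector = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
detector.fit(normal)

incoming = np.array([[72, 97.5],     # normal -> kept locally
                     [138, 88.0]])   # abnormal -> transmitted upstream
flags = detector.predict(incoming)   # +1 = normal, -1 = anomaly
to_transmit = incoming[flags == -1]
print("readings forwarded to fog/cloud:", to_transmit)
```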

RevDate: 2025-07-15

Khaldy MAA, Nabot A, Al-Qerem A, et al (2025)

Adaptive conflict resolution for IoT transactions: A reinforcement learning-based hybrid validation protocol.

Scientific reports, 15(1):25589.

This paper introduces a novel Reinforcement Learning-Based Hybrid Validation Protocol (RL-CC) that revolutionizes conflict resolution for time-sensitive IoT transactions through adaptive edge-cloud coordination. Efficient transaction management in sensor-based systems is crucial for maintaining data integrity and ensuring timely execution within the constraints of temporal validity. Our key innovation lies in dynamically learning optimal scheduling policies that minimize transaction aborts while maximizing throughput under varying workload conditions. The protocol consists of two validation phases: an edge validation phase, where transactions undergo preliminary conflict detection and prioritization based on their temporal constraints, and a cloud validation phase, where a final conflict resolution mechanism ensures transactional correctness on a global scale. The RL-based mechanism continuously adapts decision-making by learning from system states, prioritizing transactions, and dynamically resolving conflicts using a reward function that accounts for key performance parameters, including the number of conflicting transactions, cost of aborting transactions, temporal validity constraints, and system resource utilization. Experimental results demonstrate that our RL-CC protocol achieves a 90% reduction in transaction abort rates (5% vs. 45% for 2PL), 3x higher throughput (300 TPS vs. 100 TPS), and 70% lower latency compared to traditional concurrency control methods. The proposed RL-CC protocol significantly reduces transaction abort rates, enhances concurrency management, and improves the efficiency of sensor data processing by ensuring that transactions are executed within their temporal validity window. The results suggest that the RL-based approach offers a scalable and adaptive solution for sensor-based applications requiring high-concurrency transaction processing, such as Internet of Things (IoT) networks, real-time monitoring systems, and cyber-physical infrastructures.
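As a purely hypothetical illustration of a reward of the kind described, combining conflict count, abort cost, temporal slack, and resource utilization, one might write:

```python
# Hypothetical reward sketch; weights and functional form are assumptions,
# not the paper's definition.
def reward(n_conflicts: int, abort_cost: float, slack_ratio: float,
           utilization: float, w=(1.0, 0.5, 2.0, 1.0)) -> float:
    """slack_ratio: remaining temporal validity / total validity window (0..1)
    utilization: fraction of edge/cloud resources in use (0..1)."""
    w_c, w_a, w_t, w_u = w
    return (-w_c * n_conflicts            # penalize conflicting transactions
            - w_a * abort_cost            # penalize the cost of aborts
            + w_t * slack_ratio           # reward finishing well within validity
            + w_u * (1.0 - utilization))  # reward leaving resource headroom

print(reward(n_conflicts=2, abort_cost=1.5, slack_ratio=0.6, utilization=0.7))
```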

RevDate: 2025-07-15

Contaldo SG, d'Acierno A, Bosio L, et al (2025)

Long-read microbial genome assembly, gene prediction and functional annotation: a service of the MIRRI ERIC Italian node.

Frontiers in bioinformatics, 5:1632189.

BACKGROUND: Understanding the structure and function of microbial genomes is crucial for uncovering their ecological roles, evolutionary trajectories, and potential applications in health, biotechnology, agriculture, food production, and environmental science. However, genome reconstruction and annotation remain computationally demanding and technically complex.

RESULTS: We introduce a bioinformatics platform designed explicitly for long-read microbial sequencing data to address these challenges. Developed as a service of the Italian MIRRI ERIC node, the platform provides a comprehensive solution for analyzing both prokaryotic and eukaryotic genomes, from assembly to functional protein annotation. It integrates state-of-the-art tools (e.g., Canu, Flye, BRAKER3, Prokka, InterProScan) within a reproducible, scalable workflow built on the Common Workflow Language and accelerated through high-performance computing infrastructure. A user-friendly web interface ensures accessibility, even for non-specialists.

CONCLUSION: Through case studies involving three environmentally and clinically significant microorganisms, we demonstrate the ability of the platform to produce reliable, biologically meaningful insights, positioning it as a valuable tool for routine genome analysis and advanced microbial research.

RevDate: 2025-07-15

Georgiou D, Katsaounis S, Tsanakas P, et al (2025)

Towards a secure cloud repository architecture for the continuous monitoring of patients with mental disorders.

Frontiers in digital health, 7:1567702.

INTRODUCTION: Advances in Information Technology are transforming healthcare systems, with a focus on improving accessibility, efficiency, resilience, and service quality. Wearable devices such as smartwatches and mental health trackers enable continuous biometric data collection, offering significant potential to enhance chronic disorder treatment and overall healthcare quality. However, these technologies introduce critical security and privacy risks, as they handle sensitive patient data.

METHODS: To address these challenges, this paper proposes a security-by-design cloud-based architecture that leverages wearable body sensors for continuous patient monitoring and mental disorder prediction. The system integrates an Elasticsearch-powered backend to manage biometric data securely. A dedicated framework was developed to ensure confidentiality, integrity, and availability (CIA) of patient data through secure communication protocols and privacy-preserving mechanisms.

RESULTS: The proposed architecture successfully enables secure real-time biometric monitoring and data processing from wearable devices. The system is designed to operate 24/7, ensuring robust performance in continuously tracking both mental and physiological health indicators. The inclusion of Elasticsearch provides scalable and efficient data indexing and retrieval, supporting timely healthcare decisions.

DISCUSSION: This work addresses key security and privacy challenges inherent in continuous biometric data collection. By incorporating a security-by-design approach, the proposed framework enhances trustworthiness in healthcare monitoring technologies. The solution demonstrates the feasibility of balancing real-time health monitoring needs with stringent data protection requirements.

RevDate: 2025-07-14

Owuor CD, Tesfaye B, Wakem AYD, et al (2025)

Visualization of the Evolution and Transmission of Circulating Vaccine-Derived Poliovirus (cVDPV) Outbreaks in the African Region.

Bio-protocol, 15(13):e5376.

Since the creation of the Global Polio Eradication Initiative (GPEI) in 1988, significant progress has been made toward attaining a poliovirus-free world. This has resulted in the eradication of wild poliovirus (WPV) serotypes two (WPV2) and three (WPV3) and limited transmission of serotype one (WPV1) in Pakistan and Afghanistan. However, the increased emergence of circulating vaccine-derived poliovirus (cVDPV) and the continued circulation of WPV1, although limited to two countries, pose a continuous threat of international spread of poliovirus. These challenges highlight the need to further strengthen surveillance and outbreak responses, particularly in the African Region (AFRO). Phylogeographic visualization tools may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective supplementary immunization activities and improved outbreak response and surveillance. We created a comprehensive protocol for the phylogeographic analysis of polioviruses using Nextstrain, a powerful open-source tool for real-time interactive visualization of virus sequencing data. It is expected that this protocol will support poliovirus elimination strategies in AFRO and contribute significantly to global eradication strategies. These tools have been utilized for other pathogens of public health importance, for example, SARS-CoV-2, human influenza, Ebola, and Mpox, among others, through real-time tracking of pathogen evolution (https://nextstrain.org), harnessing the scientific and public health potential of pathogen genome data.

Key features:
• Employs Nextstrain (https://nextstrain.org), which is an open-source tool for real-time interactive visualization of genome sequencing datasets.
• First comprehensive protocol for the phylogeographic analysis of poliovirus sequences collected from countries in the World Health Organization (WHO) African Region (AFRO).
• Phylogeographic visualization may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective vaccination campaigns.
• This protocol can be deployed locally on a personal computer or on a Microsoft Azure cloud server for high throughput.

RevDate: 2025-07-12

Shyam Sundar Bhuvaneswari VS, M Thangamuthu (2025)

Towards Intelligent Safety: A Systematic Review on Assault Detection and Technologies.

Sensors (Basel, Switzerland), 25(13): pii:s25133985.

This review of literature discusses the use of emerging technologies in the prevention of assault, specifically Artificial Intelligence (AI), the Internet of Things (IoT), and wearable technologies. In preventing assaults, GIS-based mobile apps, wearable safety devices, and personal security solutions have been designed to improve personal security, especially for women and the vulnerable. The paper also analyzes interfacing networks, such as edge computing, cloud databases, and security frameworks required for emergency response solutions. In addition, we introduced a framework that brings these technologies together to deliver an effective response system. This review seeks to identify gaps currently present, ascertain major challenges, and suggest potential directions for enhanced personal security with the use of technology.

RevDate: 2025-07-12

Roumeliotis AJ, Myritzis E, Kosmatos E, et al (2025)

Multi-Area, Multi-Service and Multi-Tier Edge-Cloud Continuum Planning.

Sensors (Basel, Switzerland), 25(13): pii:s25133949.

This paper presents the optimal planning of multi-area, multi-service, and multi-tier edge-cloud environments. The goal is to evaluate the regional deployment of the compute continuum, i.e., the type and number of processing devices and their pairing with a specific tier and task among different areas, subject to processing, rate, and latency requirements. Different offline compute continuum planning approaches are investigated, and a detailed analysis of various design choices is presented. We study one scheme that uses all tasks at once and two others that use smaller task batches; both iterative schemes finish once all task groups have been traversed. Group-based approaches are introduced to deal with the potentially excessive execution times of real-world-sized problems. Solutions are provided for continuum planning using both direct, complex methods and simpler, faster ones. Results show that processing all tasks simultaneously yields better performance but requires longer execution, while medium-sized batches achieve good performance faster. Thus, the batch-oriented schemes are capable of handling larger problem sizes. Moreover, the task selection strategy in group-based schemes influences performance. A more detailed analysis is performed in the latter case, and different clustering methods are also considered. Based on our simulations, random selection of tasks in group-based approaches achieves better performance in most cases.

RevDate: 2025-07-10

Ahmmad J, El-Wahed Khalifa HA, Waqas HM, et al (2025)

Ranking data privacy techniques in cloud computing based on Tamir's complex fuzzy Schweizer-Sklar aggregation approach.

Scientific reports, 15(1):24943 pii:10.1038/s41598-025-09557-z.

In the era of cloud computing, securing data privacy has become an important challenge as massive amounts of sensitive information are stored and processed in shared environments. Cloud platforms have become a necessary component for managing personal, commercial, and governmental data. Thus, the demand for effective data privacy techniques within cloud security frameworks has increased. Data privacy is no longer just an exercise in compliance; it also reassures stakeholders and protects valuable information from cyber-attacks. The decision-making (DM) landscape for cloud providers is therefore extremely complex, because they must select the optimal approach from a very wide gamut of privacy techniques, ranging from encryption to anonymization. A novel complex fuzzy Schweizer-Sklar aggregation approach can rank and prioritize data privacy techniques and is particularly suitable for cloud settings. Our method can readily handle the uncertainties and multi-dimensional aspects of privacy evaluation. In this manuscript, we first introduce the fundamental Schweizer-Sklar operational laws for a cartesian form of the complex fuzzy framework. Relying on these operational laws, we then develop the notions of cartesian-form complex fuzzy Schweizer-Sklar power average and complex fuzzy Schweizer-Sklar power geometric aggregation operators (AOs). We establish the main properties of these notions, such as idempotency, boundedness, and monotonicity. We also present an algorithm for applying the developed theory. Moreover, we provide an illustrative example and a case study to show how the developed theory ranks data privacy techniques in cloud computing. At the end of the manuscript, we discuss a comparative analysis to show the advantages of the introduced work.

RevDate: 2025-07-10

Adabi V, Etedali HR, Azizian A, et al (2025)

Aqua-MC as a simple open access code for uncountable runs of AquaCrop.

Scientific reports, 15(1):24975.

Understanding uncertainty in crop modeling is essential for improving prediction accuracy and decision-making in agricultural management. Monte Carlo simulations are widely used for uncertainty and sensitivity analysis, but their application to closed-source models like AquaCrop presents significant challenges due to the lack of direct access to source code. This study introduces Aqua-MC, an automated framework designed to facilitate Monte Carlo simulations in AquaCrop by integrating probabilistic parameter selection, iterative execution, and uncertainty quantification within a structured workflow. To demonstrate its effectiveness, Aqua-MC was applied to wheat yield modeling in Qazvin, Iran, where parameter uncertainty was assessed using 3000 Monte Carlo simulations. The DYNIA (Dynamic Identifiability Analysis) method was employed to evaluate the time-dependent sensitivity of 47 model parameters, providing insights into the temporal evolution of parameter influence. The results revealed that soil evaporation and yield predictions exhibited the highest uncertainty, while transpiration and biomass outputs were more stable. The study also highlighted that many parameters had low impact, suggesting that reducing the number of free parameters could enhance model efficiency. Despite its advantages, Aqua-MC has some limitations, including its computational intensity and reliance on the GLUE method, which may overestimate uncertainty bounds. To improve applicability, future research should focus on parallel computing, cloud-based execution, integration with machine learning techniques, and expanding Aqua-MC to multi-crop studies. By overcoming the limitations of closed-source models, Aqua-MC provides a scalable and efficient solution for performing large-scale uncertainty analysis in crop modeling.
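
The driver pattern described here (draw parameters, run a closed-source model, collect outputs, repeat) can be sketched with a short Python loop. The executable name, file paths, parameter names, and ranges below are placeholders, not AquaCrop's actual interface or Aqua-MC's code.

```python
import csv
import random
import subprocess

# Illustrative prior ranges for two hypothetical crop parameters.
PARAM_RANGES = {"canopy_cover_max": (0.80, 0.99), "harvest_index": (0.35, 0.55)}

def run_once(run_id: int) -> dict:
    """Sample parameters, write the model input, run the executable, read one output."""
    params = {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
    with open("model_input.txt", "w") as fh:              # placeholder input format
        fh.writelines(f"{k}={v:.4f}\n" for k, v in params.items())
    subprocess.run(["./crop_model", "model_input.txt"], check=True)  # placeholder binary
    with open("model_output.txt") as fh:                  # placeholder output format
        simulated_yield = float(fh.read().strip())
    return {"run": run_id, **params, "yield": simulated_yield}

if __name__ == "__main__":
    results = [run_once(i) for i in range(3000)]          # 3000 draws, as in the study
    with open("mc_results.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=results[0].keys())
        writer.writeheader()
        writer.writerows(results)
```

The resulting CSV of parameter draws and outputs is the raw material for GLUE-style uncertainty bounds or time-dependent sensitivity analyses such as DYNIA.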

RevDate: 2025-07-10
CmpDate: 2025-07-10

AlArnaout Z, Zaki C, Kotb Y, et al (2025)

Exploiting heart rate variability for driver drowsiness detection using wearable sensors and machine learning.

Scientific reports, 15(1):24898.

Driver drowsiness is a critical issue in transportation systems and a leading cause of traffic accidents. Common factors contributing to accidents include intoxicated driving, fatigue, and sleep deprivation. Drowsiness significantly impairs a driver's response time, awareness, and judgment. Implementing systems capable of detecting and alerting drivers to drowsiness is therefore essential for accident prevention. This paper examines the feasibility of using heart rate variability (HRV) analysis to assess driver drowsiness. It explores the physiological basis of HRV and its correlation with drowsiness. We propose a system model that integrates wearable devices equipped with photoplethysmography (PPG) sensors, transmitting data to a smartphone and then to a cloud server. Two novel algorithms are developed to segment and label features periodically, predicting drowsiness levels based on HRV derived from PPG signals. The proposed approach is evaluated using real-driving data and supervised machine learning techniques. Six classification algorithms are applied to labeled datasets, with performance metrics such as accuracy, precision, recall, F1-score, and runtime assessed to determine the most effective algorithm for timely drowsiness detection and driver alerting. Our results demonstrate that the Random Forest (RF) classifier achieves the highest testing accuracy (86.05%), precision (87.16%), recall (93.61%), and F1-score (89.02%) with the smallest mean change between training and testing datasets (-4.30%), highlighting its robustness for real-world deployment. The Support Vector Machine with Radial Basis Function (SVM-RBF) also shows strong generalization performance, with a testing F1-score of 87.15% and the smallest mean change of -3.97%. These findings suggest that HRV-based drowsiness detection systems can be effectively integrated into Advanced Driver Assistance Systems (ADAS) to enhance driver safety by providing timely alerts, thereby reducing the risk of accidents caused by drowsiness.
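
The supervised stage of such a pipeline (classifying drowsy versus alert windows from HRV features with a Random Forest) can be sketched as follows. Synthetic features stand in for the paper's PPG-derived dataset; SDNN, RMSSD, and the LF/HF ratio are standard HRV metrics used here only as example inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic HRV feature windows: columns = [SDNN (ms), RMSSD (ms), LF/HF ratio].
rng = np.random.default_rng(42)
n = 1000
alert = np.column_stack([rng.normal(50, 10, n), rng.normal(35, 8, n), rng.normal(2.0, 0.5, n)])
drowsy = np.column_stack([rng.normal(70, 12, n), rng.normal(55, 10, n), rng.normal(1.2, 0.4, n)])
X = np.vstack([alert, drowsy])
y = np.array([0] * n + [1] * n)   # 0 = alert, 1 = drowsy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["alert", "drowsy"]))
```

In a deployment matching the abstract, these feature windows would be computed from PPG-derived inter-beat intervals on the phone or cloud side before classification and alerting.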

RevDate: 2025-07-09

Feng K, D Haridas (2025)

A unified model integrating UTAUT-Behavioural intension and Object-Oriented approaches for sustainable adoption of Cloud-Based collaborative platforms in higher education.

Scientific reports, 15(1):24767.

In recent years, cloud computing (CC) services have expanded rapidly, with platforms like Google Drive, Dropbox, and Apple iCloud gaining global adoption. This study develops a predictive model to identify the key factors influencing Jordanian academics' behavioral intention to adopt sustainable cloud-based collaborative systems (SCBCS). By integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) with system design methodologies, we put forward a comprehensive research model to improve the adoption and efficiency of SCBCS in developing countries. Using cross-sectional data from 500 professors in Jordanian higher education institutions, we adapt and extend the UTAUT model to explain behavioral intention and assess its impact on teaching and learning processes. Both exploratory and confirmatory analyses show that the expanded UTAUT model significantly improves the variance explained in behavioral intention. The study's key findings reveal that behavioral control, effort expectancy, and social influence significantly affect attitudes towards using cloud services. The study also contributes to sustainable development goals by promoting the adoption of energy-efficient and resource-optimized cloud-based platforms in higher education. The findings provide actionable insights for policymakers and educators seeking to improve sustainable technology adoption in developing countries, ultimately improving the quality and sustainability of educational processes.

RevDate: 2025-07-09
CmpDate: 2025-07-09

Wang Z, Ding T, Liang S, et al (2025)

Workpiece surface defect detection based on YOLOv11 and edge computing.

PloS one, 20(7):e0327546 pii:PONE-D-25-06752.

The rapid development of modern industry has significantly raised the demand for workpieces. To ensure the quality of workpieces, workpiece surface defect detection has become an indispensable part of industrial production. Most workpiece surface defect detection technologies rely on cloud computing. However, transmitting large volumes of data via wireless networks places substantial computational burdens on cloud servers, significantly reducing defect detection speed. Therefore, to enable efficient and precise detection, this paper proposes a workpiece surface defect detection method based on YOLOv11 and edge computing. First, the NEU-DET dataset was expanded using random flipping, cropping, and the self-attention generative adversarial network (SA-GAN). Then, the accuracy indicators of the YOLOv7-YOLOv11 models were compared on NEU-DET and validated on the Tianchi aluminium profile surface defect dataset. Finally, the cloud-based YOLOv11 model, which achieved the highest accuracy, was converted to the edge-based YOLOv11-RKNN model and deployed on the RK3568 edge device to improve the detection speed. Results indicate that YOLOv11 with SA-GAN achieved mAP@0.5 improvements of 7.7%, 3.1%, 5.9%, and 7.0% over YOLOv7, YOLOv8, YOLOv9, and YOLOv10, respectively, on the NEU-DET dataset. Moreover, YOLOv11 with SA-GAN achieved an 87.0% mAP@0.5 on the Tianchi aluminium profile surface defect dataset, outperforming the other models again. This verifies the generalisability of the YOLOv11 model. Additionally, quantising and deploying YOLOv11 on the edge device reduced its size from 10,156 kB to 4,194 kB and reduced its single-image detection time from 52.1ms to 33.6ms, which represents a significant efficiency enhancement.
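
A typical first step of the edge deployment described here is exporting a trained YOLO model to an intermediate format before device-specific conversion. The sketch below uses the ultralytics Python package and a placeholder weights file; the subsequent RKNN conversion with Rockchip's rknn-toolkit2 is only indicated in comments, since that toolchain is device- and version-specific and is not reproduced here.

```python
# Export a trained YOLO model to ONNX as an intermediate step toward an RK3568 deployment.
# "best.pt" is a placeholder for the trained defect-detection weights.
from ultralytics import YOLO

model = YOLO("best.pt")                      # trained YOLOv11 weights (placeholder)
onnx_path = model.export(format="onnx", imgsz=640)
print("exported:", onnx_path)

# The ONNX file would then be loaded into rknn-toolkit2 (load -> build with quantization
# -> export .rknn) to produce the quantized edge model; that step is not shown here.
```

Quantization during the RKNN build is what yields the reduced model size and faster single-image inference reported in the abstract.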

RevDate: 2025-07-09
CmpDate: 2025-07-09

Park J, Lee S, Park G, et al (2025)

Mental health help-seeking behaviours of East Asian immigrants: a scoping review.

European journal of psychotraumatology, 16(1):2514327.

Background: The global immigrant population is increasing annually, and Asian immigrants have a substantial representation within the immigrant population. Due to a myriad of challenges such as acculturation, discrimination, language, and financial issues, immigrants are at high risk of mental health conditions. However, a large-scale mapping of the existing literature regarding these issues has yet to be completed. Objective: This study aimed to investigate the mental health conditions, help-seeking behaviours, and factors affecting mental health service utilization among East Asian immigrants residing in Western countries. Method: This study adopted the scoping review methodology based on the Joanna Briggs Institute framework. A comprehensive database search was conducted in May 2024 in PubMed, CINAHL, Embase, Cochrane, and Google Scholar. Search terms were developed based on the participants, concept, context framework. The participants were East Asian immigrants and their families, and the concept of interest was mental health help-seeking behaviours and mental health service utilization. Regarding the context, studies targeting East Asian immigrants in Western countries were included. Data were summarized narratively and presented in tabular and word cloud formats. Results: Out of 1990 studies, 31 studies were included. East Asian immigrants often face mental health conditions, including depression, anxiety, and suicidal behaviours. They predominantly sought help from informal sources such as family, friends, religion, and complementary or alternative medicine, rather than from formal sources such as mental health clinics or healthcare professionals. Facilitators of seeking help included recognizing the need for professional help, experiencing severe symptoms, higher levels of acculturation, and a longer length of stay in the host country. Barriers included stigma, cultural beliefs, and language barriers. Conclusions: The review emphasizes the need for culturally tailored interventions to improve mental health outcomes in this vulnerable population. These results can guide future research and policymaking to address mental health disparities in immigrant communities.

RevDate: 2025-07-07

Ran S, Guo Y, Liu Y, et al (2025)

A 4×256 Gbps silicon transmitter with on-chip adaptive dispersion compensation.

Nature communications, 16(1):6268.

The exponential growth of data traffic propelled by cloud computing and artificial intelligence necessitates advanced optical interconnect solutions. While wavelength division multiplexing (WDM) enhances optical module transmission capacity, chromatic dispersion becomes a critical limitation as single-lane rates exceed 200 Gbps. Here we demonstrate a 4-channel silicon transmitter achieving 1 Tbps aggregate data rate through integrated adaptive dispersion compensation. This transmitter utilizes Mach-Zehnder modulators with adjustable input intensity splitting ratios, enabling precise control over the chirp magnitude and sign to counteract specific dispersion. At 1271 nm (-3.99 ps/nm/km), the proposed transmitter enabled 4 × 256 Gbps transmission over 5 km fiber, achieving bit error ratio below both the soft-decision forward-error correction threshold with feed-forward equalization (FFE) alone and the hard-decision forward-error correction threshold when combining FFE with maximum-likelihood sequence detection. Our results highlight a significant leap towards scalable, energy-efficient, and high-capacity optical interconnects, underscoring its potential in future local area network WDM applications.

RevDate: 2025-07-05
CmpDate: 2025-07-05

Damera VK, Cheripelli R, Putta N, et al (2025)

Enhancing remote patient monitoring with AI-driven IoMT and cloud computing technologies.

Scientific reports, 15(1):24088.

The rapid advancement of the Internet of Medical Things (IoMT) has revolutionized remote healthcare monitoring, enabling real-time disease detection and patient care. This research introduces a novel AI-driven telemedicine framework that integrates IoMT, cloud computing, and wireless sensor networks for efficient healthcare monitoring. A key innovation of this study is the Transformer-based Self-Attention Model (TL-SAM), which enhances disease classification by replacing conventional convolutional layers with transformer layers. The proposed TL-SAM framework effectively extracts spatial and spectral features from patient health data, optimizing classification accuracy. Furthermore, the model employs an Improved Wild Horse Optimization with Levy Flight Algorithm (IWHOLFA) for hyperparameter tuning, enhancing its predictive performance. Real-time biosensor data is collected and transmitted to an IoMT cloud repository, where AI-driven analytics facilitate early disease diagnosis. Extensive experimentation on the UCI dataset demonstrates the superior accuracy of TL-SAM compared to conventional deep learning models, achieving an accuracy of 98.62%, precision of 97%, recall of 98%, and F1-score of 97%. The study highlights the effectiveness of AI-enhanced IoMT systems in reducing healthcare costs, improving early disease detection, and ensuring timely medical interventions. The proposed approach represents a significant advancement in smart healthcare, offering a scalable and efficient solution for remote patient monitoring and diagnosis.

RevDate: 2025-07-04

Cabello J, Escudero-Clares M, Martos-Rosillo S, et al (2025)

A dataset on potentially groundwater-dependent vegetation in the Sierra Nevada Protected Area (Southern Spain) and its underlying NDVI-derived ecohydrological attributes.

Data in brief, 61:111760.

This dataset provides a spatially explicit classification of potentially groundwater-dependent vegetation (pGDV) in the Sierra Nevada Protected Area (Southern Spain), generated using Sentinel-2 imagery (2019-2023) and ecohydrological attributes derived from NDVI time series. NDVI metrics were calculated from cloud- and snow-filtered Sentinel-2 Level 2A images processed in Google Earth Engine. Monthly NDVI values were used to extract three ecohydrological indicators: dry-season NDVI, dry-wet seasonal NDVI difference, and interannual NDVI variability. Based on quartile classifications of these indicators, 64 ecohydrological vegetation classes were defined. These were further clustered into three levels of potential groundwater dependence using hierarchical clustering techniques, differentiating between alpine and lower-elevation aquifer zones. The dataset includes raster layers (GeoTIFF) of the ecohydrological classes and pGDV types at 10 m spatial resolution, a CSV file with descriptive statistics for each class, and complete metadata. All spatial layers are projected in ETRS89 / UTM Zone 30N (EPSG: 25830) and are ready for visualization and analysis in standard GIS platforms. Partial validation of the classification was performed using spring location data and the distribution of hygrophilous plant species from official conservation databases. This available dataset enables reproducible analysis of vegetation-groundwater relationships in dryland mountain ecosystems. It supports comparative research across regions, facilitates the study of groundwater buffering effects on vegetation function, and offers a transferable framework for ecohydrological classification based on remote sensing. The data can be reused to inform biodiversity conservation, groundwater management, and climate change adaptation strategies in the Mediterranean and other water-limited mountain regions.
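
The NDVI-derived metrics described above can be reproduced in outline with the Earth Engine Python API. The sketch below computes a dry-season versus wet-season NDVI difference from cloud-filtered Sentinel-2 imagery; the area of interest, month windows, and cloud threshold are placeholders, not the dataset's exact processing parameters.

```python
import ee

ee.Initialize()

# Rough bounding box over the Sierra Nevada area (placeholder geometry).
aoi = ee.Geometry.Rectangle([-3.6, 36.9, -2.6, 37.3])

def add_ndvi(img):
    """Append an NDVI band computed from the Sentinel-2 NIR (B8) and red (B4) bands."""
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(aoi)
      .filterDate("2019-01-01", "2023-12-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .map(add_ndvi))

# Seasonal composites: assumed dry (Jul-Sep) and wet (Mar-May) windows.
dry = s2.filter(ee.Filter.calendarRange(7, 9, "month")).select("NDVI").median()
wet = s2.filter(ee.Filter.calendarRange(3, 5, "month")).select("NDVI").median()
seasonal_diff = wet.subtract(dry).rename("NDVI_wet_minus_dry")

stats = seasonal_diff.reduceRegion(ee.Reducer.mean(), aoi, scale=10, maxPixels=1e9)
print(stats.getInfo())
```

Quartile classification of metrics like this one, plus dry-season NDVI and interannual variability, is what underlies the 64 ecohydrological classes in the dataset.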

RevDate: 2025-07-02

Xing S, Sun A, Wang C, et al (2025)

Seamless optical cloud computing across edge-metro network for generative AI.

Nature communications, 16(1):6097.

The rapid advancement of generative artificial intelligence (AI) in recent years has profoundly reshaped modern lifestyles, necessitating a revolutionary architecture to support the growing demands for computational power. Cloud computing has become the driving force behind this transformation. However, it consumes significant power and faces computation security risks due to the reliance on extensive data centers and servers in the cloud. Reducing power consumption while enhancing computational scale remains a persistent challenge in cloud computing. Here, we propose and experimentally demonstrate an optical cloud computing system that can be seamlessly deployed across an edge-metro network. By modulating inputs and models into light, a wide range of edge nodes can directly access the optical computing center via the edge-metro network. The experimental validations show an energy efficiency of 118.6 mW/TOPs (tera operations per second), reducing energy consumption by two orders of magnitude compared to traditional electronic-based cloud computing solutions. Furthermore, it is experimentally validated that this architecture can run various complex generative AI models through parallel computing to achieve image generation tasks.

RevDate: 2025-07-02
CmpDate: 2025-07-02

Meiring C, Eygelaar M, Fourie J, et al (2025)

Tick genomics through a Nanopore: a low-cost approach for tick genomics.

BMC genomics, 26(1):591.

BACKGROUND: The assembly of large and complex genomes can be costly since it typically requires the utilization of multiple sequencing technologies and access to high-performance computing, while creating a dependency on external service providers. The aim of this study was to independently generate draft genomes for the cattle ticks Rhipicephalus microplus and R. appendiculatus using Oxford Nanopore sequencing technology.

RESULTS: Oxford Nanopore sequence data alone were assembled with Shasta and finalized on the Amazon Web Services cloud platform, capitalizing on the availability of Spot instances discounted by up to 90%. The assembled and polished R. microplus and R. appendiculatus genomes from our study were comparable to published tick genomes for which multiple sequencing technologies and costly bioinformatic resources were utilized that are not readily accessible to low-resource environments. We predicted 52,412 genes for R. appendiculatus, with 31,747 of them being functionally annotated. The R. microplus annotation consisted of 60,935 predicted genes, with 32,263 being functionally annotated in the final file. The sequence data were also used to assemble and annotate genetically distinct Coxiella-like endosymbiont genomes for each tick species. The results indicated that each of the endosymbionts exhibited genome reduction. The Nanopore Q20 + library kit and flow cell were used to sequence the > 80% AT-rich mitochondrial DNA of both tick species. The sequencing generated accurate mitochondrial genomes, encountering imperfect base calling only in homopolymer regions exceeding 10 bases.
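
Requesting a discounted Spot instance for a memory-heavy assembly job can be done programmatically; a minimal boto3 sketch is shown below. The AMI ID, instance type, key pair, and region are placeholders, and the assembly itself (for example, running Shasta on the uploaded reads) would be driven separately via user data or a login session.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI with assembly tools
    InstanceType="r5.8xlarge",                   # memory-heavy node for genome assembly
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                        # placeholder key pair
    InstanceMarketOptions={
        "MarketType": "spot",                    # request Spot pricing
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
print(response["Instances"][0]["InstanceId"])
```

Because Spot capacity can be reclaimed, long assemblies are usually checkpointed or restartable, which is part of what keeps this approach viable for low-budget laboratories.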

CONCLUSION: This study presents an alternative approach for smaller laboratories with limited budgets to enter the field and participate in genomics without capital intensive investments, allowing for capacity building in a field normally exclusively accessible through collaboration and large funding opportunities.

RevDate: 2025-07-02

Rajammal K, M Chinnadurai (2025)

Dynamic load balancing in cloud computing using predictive graph networks and adaptive neural scheduling.

Scientific reports, 15(1):22181 pii:10.1038/s41598-025-97494-2.

Load balancing is one of the significant challenges in cloud environments due to the heterogeneity and dynamic nature of resource states and workloads. Traditional load balancing procedures struggle to adapt to real-time variations, which leads to inefficient resource utilization and increased response times. To overcome these issues, a novel approach is presented in this research work utilizing Spiking Neural Networks (SNNs) for adaptive decision-making and Temporal Graph Neural Networks (TGNNs) for dynamic resource state modeling. The proposed SNN model identifies short-term workload fluctuations and long-term trends, whereas the TGNN represents the cloud environment as a dynamic graph to predict future resource availability. Additionally, reinforcement learning is incorporated in the proposed work to optimize SNN decisions based on feedback from the TGNN's state predictions. Experimental evaluations of the proposed model with diverse workload scenarios demonstrate significant improvements in terms of throughput, energy efficiency, makespan, and response time. Additionally, comparative analyses with existing optimization algorithms demonstrate the proposed model's ability to manage loads in cloud computing. Compared to existing methods, the proposed model achieves 20% higher throughput, a 35% reduction in makespan, a 40% reduction in response time, and 30-40% lower energy consumption.

RevDate: 2025-07-02

Cui J, Shi L, A Alkhayyat (2025)

Enhanced security for IoT cloud environments using EfficientNet and enhanced football team training algorithm.

Scientific reports, 15(1):20764.

The growing implementation of Internet of Things (IoT) technology has resulted in a significant increase in the number of connected devices, thereby exposing IoT-cloud environments to a range of cyber threats. As the number of IoT devices continues to grow, the potential attack surface also enlarges, complicating the task of securing these systems. This paper introduces an innovative approach to intrusion detection that integrates EfficientNet with a newly refined metaheuristic known as the Enhanced Football Team Training Algorithm (EFTTA). The proposed EfficientNet/EFTTA model aims to identify anomalies and intrusions in IoT-cloud environments with enhanced accuracy and efficiency. The effectiveness of this model is measured on standard datasets and compared against other methods across several performance metrics. The results indicate that the proposed method surpasses existing techniques, achieving accuracies of 98.56% on NSL-KDD and 99.1% on BoT-IoT in controlled experiments for the protection of IoT-cloud infrastructures.

RevDate: 2025-07-02
CmpDate: 2025-07-02

Bhattacharya P, Mukherjee A, Bhushan B, et al (2025)

A secured remote patient monitoring framework for IoMT ecosystems.

Scientific reports, 15(1):22882.

Recent advancement in the Internet of Medical Things (IoMT) allows patients to set up smart sensors and medical devices to connect to remote healthcare setups. However, existing remote patient monitoring solutions predominantly rely on persistent connectivity and centralized cloud processing, resulting in high latency and energy consumption, particularly in environments with intermittent network availability. There is a need for real-time IoMT computing closer to the dew, with secured and privacy-enabled access to healthcare data. To address this, we propose the DeW-IoMT framework, which includes a dew layer in the roof-fog-cloud systems. Notably, our approach introduces a novel roof computing layer that acts as an intermediary gateway between the dew and fog layers, enhancing data security and reducing communication latency. The proposed architecture provides critical services during disconnected operations and minimizes computational requirements for the fog-cloud system. We measure heart rate using a pulse sensor, where the dew layer sets up conditions for remote patient monitoring with low overheads. We experimentally analyze the proposed scheme's response time, energy dissipation, and bandwidth and present a simulation analysis of the fog layer through the iFogSim software. Our results at the dew layer demonstrate a reduction in response time by 74.61%, a decrease in energy consumption by 38.78%, and a 33.56% reduction in task data compared to traditional cloud-centric models. Our findings validate the framework's viability in scalable IoMT setups.

RevDate: 2025-07-02

Sun Y, Zhang Y, Hao J, et al (2025)

Agricultural greenhouses datasets of 2010, 2016, and 2022 in China.

Scientific data, 12(1):1107.

China has built the world's largest area of agricultural greenhouses to meet the requirements of climate change and shifting dietary structures. Accurate and timely access to information on agricultural greenhouse space is crucial for effectively managing and improving the quality of agricultural production. However, high-quality, high-resolution data on Chinese agricultural greenhouses are still lacking due to difficulties in identification and an insufficient number of representative training data. This study proposes a method for identifying agricultural greenhouses from spectral and texture information at key growth stages using the Google Earth Engine (GEE) cloud platform and Landsat 7 remote sensing images, with combined field surveys and visual interpretation used to collect a large number of samples. The method uses a random forest classifier to extract spatial information from remote sensing data to create classification datasets of Chinese agricultural greenhouses in 2010, 2016, and 2022. The overall accuracy reached 97%, with a kappa coefficient of 0.82. This dataset may help researchers and decision-makers further develop research and management in facility agriculture.
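
A supervised greenhouse classification of this kind can be outlined with the Earth Engine Python API: build a Landsat 7 composite, sample labelled points, and train a random forest. The region, the band renaming, and the training-points asset (with a numeric "class" property) are placeholders standing in for the study's field samples and processing choices.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([115.0, 36.0, 117.0, 38.0])   # placeholder study region
bands = ["B1", "B2", "B3", "B4", "B5", "B7"]

# Annual median composite from the Landsat 7 Collection 2 surface reflectance archive.
composite = (ee.ImageCollection("LANDSAT/LE07/C02/T1_L2")
             .filterBounds(region)
             .filterDate("2010-01-01", "2010-12-31")
             .median()
             .select(["SR_B1", "SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B7"], bands))

# Labelled sample points (placeholder asset) with a "class" property, e.g. 1 = greenhouse.
training_points = ee.FeatureCollection("users/example/greenhouse_samples")
samples = composite.sampleRegions(collection=training_points,
                                  properties=["class"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty="class", inputProperties=bands)

classified = composite.classify(classifier)
print(classified.bandNames().getInfo())
```

Accuracy assessment would then follow the usual GEE pattern of classifying a withheld validation sample and computing an error matrix and kappa coefficient.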

RevDate: 2025-07-01

Nyakuri JP, Nkundineza C, Gatera O, et al (2025)

AI and IoT-powered edge device optimized for crop pest and disease detection.

Scientific reports, 15(1):22905.

Climate change exacerbates the challenges of maintaining crop health by influencing invasive pest and disease infestations, especially for cereal crops, leading to enormous yield losses. Consequently, innovative solutions are needed to monitor crop health from early development stages through harvesting. While various technologies, such as the Internet of Things (IoT), machine learning (ML), and artificial intelligence (AI), have been used, portable, cost-effective, and energy-efficient solutions suitable for resource-constrained environments such as edge applications in agriculture are needed. This study presents the development of a portable smart IoT device that integrates a lightweight convolutional neural network (CNN), called Tiny-LiteNet, optimized for edge applications with built-in support of model explainability. The system consists of a high-definition camera for real-time plant image acquisition, a Raspberry-Pi 5 integrated with the Tiny-LiteNet model for edge processing, and a GSM/GPRS module for cloud communication. The experimental results demonstrated that Tiny-LiteNet achieved up to 98.6% accuracy, 98.4% F1-score, 98.2% Recall, 80 ms inference time, while maintaining a compact model size of 1.2 MB with 1.48 million parameters, outperforming traditional CNN architectures such as VGGNet-16, Inception, ResNet50, DenseNet121, MobileNetv2, and EfficientNetB0 in terms of efficiency and suitability for edge computing. Additionally, the low power consumption and user-friendly design of this smart device make it a practical tool for farmers, enabling real-time pest and disease detection, promoting sustainable agriculture, and enhancing food security.

RevDate: 2025-07-01
CmpDate: 2025-07-01

Abbasi SF, Ahmad R, Mukherjee T, et al (2025)

A Novel and Secure 3D Colour Medical Image Encryption Technique Using 3D Hyperchaotic Map, S-box and Discrete Wavelet Transform.

Studies in health technology and informatics, 328:268-272.

Over the past two decades, there has been a substantial increase in the use of the Internet of Medical Things (IoMT). In the smart healthcare setting, patients' data can be quickly collected, stored, and processed through insecure media such as the internet or cloud computing. To address this issue, researchers have developed a range of encryption algorithms to protect medical image data; however, these remain vulnerable to brute force and differential cryptanalysis attacks by eavesdroppers. In this study, we propose an efficient approach to enhance the security of medical image transmission by transforming the ciphertext image into a visually meaningful image. The proposed algorithm uses a 3D hyperchaotic system to generate three chaotic sequences for permutation and diffusion, followed by the application of a substitution box (S-box) to increase redundancy. Additionally, the proposed study employed a discrete wavelet transform (DWT) to transform the ciphertext image into a visually meaningful image. This final image is not only secure but also improves resistance to cyberattacks. The proposed encryption model demonstrates strong security performance, with key metrics including a Unified Average Changing Intensity (UACI) of 36.17% and a Number of Pixels Change Rate (NPCR) of 99.57%, highlighting its effectiveness in ensuring secure medical image transmission.

RevDate: 2025-07-01
CmpDate: 2025-07-01

Drabo C, S Malo (2025)

Fog-Enabled Modular Deep Learning Platform for Textual Data Mining in Healthcare for Pathology Detection in Burkina Faso.

Studies in health technology and informatics, 328:173-177.

In this paper, we propose an architecture for a deep-learning-based medical diagnosis support platform in Burkina Faso. This model is built by merging the diagnosis and treatment guide with models derived from textual data recovered via optical character recognition (OCR) on handwritten prescriptions and data from electronic health records. Through simulation, we compared two architectures adapted to the Burkinabe context, a fog-based architecture and a cloud-based architecture, and validated the one best suited to the organization of the country's health system.

RevDate: 2025-06-30

Brittain JS, Tsui J, Inward R, et al (2025)

GRAPEVNE - Graphical Analytical Pipeline Development Environment for Infectious Diseases.

Wellcome open research, 10:279.

The increase in volume and diversity of relevant data on infectious diseases and their drivers provides opportunities to generate new scientific insights that can support 'real-time' decision-making in public health across outbreak contexts and enhance pandemic preparedness. However, utilising the wide array of clinical, genomic, epidemiological, and spatial data collected globally is difficult due to differences in data preprocessing, data science capacity, and access to hardware and cloud resources. To facilitate large-scale and routine analyses of infectious disease data at the local level (i.e. without sharing data across borders), we developed GRAPEVNE (Graphical Analytical Pipeline Development Environment), a platform enabling the construction of modular pipelines designed for complex and repetitive data analysis workflows through an intuitive graphical interface. Built on the Snakemake workflow management system, GRAPEVNE streamlines the creation, execution, and sharing of analytical pipelines. Its modular approach already supports a diverse range of scientific applications, including genomic analysis, epidemiological modeling, and large-scale data processing. Each module in GRAPEVNE is a self-contained Snakemake workflow, complete with configurations, scripts, and metadata, enabling interoperability. The platform's open-source nature ensures ongoing community-driven development and scalability. GRAPEVNE empowers researchers and public health institutions by simplifying complex analytical workflows, fostering data-driven discovery, and enhancing reproducibility in computational research. Its user-driven ecosystem encourages continuous innovation in biomedical and epidemiological research but is applicable beyond that. Key use-cases include automated phylogenetic analysis of viral sequences, real-time outbreak monitoring, forecasting, and epidemiological data processing. For instance, our dengue virus pipeline demonstrates end-to-end automation from sequence retrieval to phylogeographic inference, leveraging established bioinformatics tools which can be deployed to any geographical context. For more details, see documentation at: https://grapevne.readthedocs.io.

RevDate: 2025-06-29

Smith SD, Velásquez-Zapata V, RP Wise (2025)

NGPINT V3: A containerized orchestration Python software for discovery of next-generation protein-protein interactions.

Bioinformatics (Oxford, England) pii:8172516 [Epub ahead of print].

SUMMARY: Batch yeast two-hybrid (Y2H) assays, leveraged with next-generation sequencing (NGS), have afforded successful innovations for the analysis of protein-protein interactions (PPIs). NGPINT is a Conda-based software designed to process the millions of raw sequencing reads resulting from yeast two hybrid-next generation interaction screens (Y2H-NGIS). Over time, increasing compatibility and dependency issues have prevented clean NGPINT installation and operation. A system-wide update was essential to continue effective use with its companion software, Y2H-SCORES. We present NGPINT V3, a containerized implementation built with both Singularity and Docker, allowing accessibility across virtually any operating system and computing environment.

This update includes streamlined dependencies and container images hosted on Sylabs (https://cloud.sylabs.io/library/schuyler/ngpint/ngpint) and Dockerhub (https://hub.docker.com/r/schuylerds/ngpint), facilitating easier adoption and integration into high-throughput and cloud-computing workflows. Full instructions and software can also be found in the GitHub repository https://github.com/Wiselab2/NGPINT_V3 and Zenodo https://doi.org/10.5281/zenodo.15256036.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

RevDate: 2025-06-27
CmpDate: 2025-06-27

Xiuqing W, Pirasteh S, Husain HJ, et al (2025)

Leveraging machine learning for monitoring afforestation in mining areas: evaluating Tata Steel's restoration efforts in Noamundi, India.

Environmental monitoring and assessment, 197(7):816.

Mining activities have long been associated with significant environmental impacts, including deforestation, habitat degradation, and biodiversity loss, necessitating targeted strategies like afforestation to mitigate ecological damage. Tata Steel's afforestation initiative near its Noamundi iron ore mining site in Jharkhand, India, spanning 165.5 hectares with over 1.1 million saplings planted, is a critical case study for evaluating such restoration efforts. However, assessing the success of these initiatives requires robust, scalable methods to monitor land use changes over time, a challenge compounded by the need for accurate, cost-effective tools to validate ecological recovery and support environmental governance frameworks. This study introduces a novel approach by integrating multiple machine learning (ML) algorithms, classification and regression tree (CART), random forest, minimum distance, gradient tree boost, and Naive Bayes, with multi-temporal, multi-resolution satellite imagery (Landsat, Sentinel-2A, PlanetScope) on Google Earth Engine (GEE) to analyze land use dynamics in 1987, 2016, and 2022. In a novel application to such contexts, high-resolution PlanetScope data (3 m) and drone imagery were leveraged to validate classification accuracy using an 80:20 training-testing data split. The comparison of ML methods across varying spatial resolutions and temporal scales provides a methodological advancement for monitoring afforestation in mining landscapes, emphasizing reproducibility and precision. Results identified the CART and Naive Bayes classifiers as the most accurate (83% accuracy with PlanetScope 2022 data), effectively mapping afforestation progress and land use changes. These findings highlight the utility of ML-driven remote sensing in offering spatially explicit, cost-effective monitoring of restoration initiatives, directly supporting Environmental, Social, and Governance (ESG) reporting by enhancing transparency in ecological management.

RevDate: 2025-06-26

Badshah A, Banjar A, Habibullah S, et al (2025)

Social big data management through collaborative mobile, regional, and cloud computing.

PeerJ. Computer science, 11:e2689.

Smart devices surround us at all times. These devices popularize social media platforms (SMP), connecting billions of users. The enhanced functionalities of smart devices generate big data that overutilizes the mainstream network, degrading performance, increasing overall cost, and compromising time-sensitive services. Research indicates that about 75% of connections come from local areas, and their workload does not need to be migrated to remote servers in real-time. Collaboration among mobile edge computing (MEC), regional computing (RC), and cloud computing (CC) can effectively fill these gaps. Therefore, we propose a collaborative structure of mobile, regional, and cloud computing to address the issues arising from social big data (SBD). In this model, data may be accessed from the nearest device or server rather than downloaded from the cloud server. Furthermore, instead of transferring each file to the cloud servers during peak hours, files are initially stored at the regional level and subsequently uploaded to the cloud servers during off-peak hours. The outcomes affirm that this approach significantly reduces the impact of substantial SBD on the performance of mainstream and social network platforms, specifically in terms of delay, response time, and cost.

RevDate: 2025-06-26

Zeng M, Mohamad Hashim MS, Ayob MN, et al (2025)

Intersection collision prediction and prevention based on vehicle-to-vehicle (V2V) and cloud computing communication.

PeerJ. Computer science, 11:e2846.

In modern transportation systems, the management of traffic safety has become increasingly critical as both the number and complexity of vehicles continue to rise. These systems frequently encounter multiple challenges. Consequently, the effective assessment and management of collision risks in various scenarios within transportation systems are paramount to ensuring traffic safety and enhancing road utilization efficiency. In this paper, we tackle the issue of intelligent traffic collision prediction and propose a vehicle collision risk prediction model based on vehicle-to-vehicle (V2V) communication and the graph attention network (GAT). Initially, the framework gathers vehicle trajectory, speed, acceleration, and relative position information via V2V communication technology to construct a graph representation of the traffic environment. Subsequently, the GAT model extracts interaction features between vehicles and optimizes the vehicle driving strategy through deep reinforcement learning (DRL), thereby augmenting the model's decision-making capabilities. Experimental results demonstrate that the framework achieves over 80% collision recognition accuracy concerning true warning rate on both public and real-world datasets. The metrics for false detection are thoroughly analyzed, revealing the efficacy and robustness of the proposed framework. This method introduces a novel technological approach to collision prediction in intelligent transportation systems and holds significant implications for enhancing traffic safety and decision-making efficiency.

RevDate: 2025-06-26

S S, JP P M (2025)

A novel dilated weighted recurrent neural network (RNN)-based smart contract for secure sharing of big data in Ethereum blockchain using hybrid encryption schemes.

PeerJ. Computer science, 11:e2930.

BACKGROUND: As the amount of data being created grows, processing and managing big data has become a significant challenge for data managers and the organizations that rely on it. The development of inexpensive new computing systems and the cloud computing sector has enabled industries to gather and retrieve data very precisely, yet securely delivering data across the network with low overheads remains demanding work. In a decentralized framework, big data sharing puts a burden on the intermediate nodes between sender and receiver and also creates congestion in the network. The intermediate nodes that redirect information may have inadequate buffer capacity to momentarily hold the information and deliver it to the next nodes, which can create occasional faults and frequent failures in data transmission. Hence, selecting the next node to deliver the data is tiresome work, resulting in an increase in the total time needed to deliver the information.

METHODS: Blockchain is a primary distributed technology with its own approach to trust. It constructs a reliable framework for decentralized control via multi-node data replication and offers transparency to the transmission process. A simultaneous multi-threading framework ensures quick data channeling to various network receivers in a very short time. Therefore, an advanced method to securely store and transfer big data in a timely manner is developed in this work. A deep learning-based smart contract is initially designed. The dilated weighted recurrent neural network (DW-RNN) is used to design the smart contract for the Ethereum blockchain. With the aid of the DW-RNN model, the authentication of the user is verified before the data in the Ethereum blockchain are accessed. If the user's authentication is verified, then the smart contracts are assigned to the authorized user. The model uses elliptic curve ElGamal cryptography (EC-EC), a combination of elliptic curve cryptography (ECC) and ElGamal encryption, for better security, to make sure that big data transfers on the Ethereum blockchain are safe. The modified Al-Biruni earth radius search optimization (MBERSO) algorithm is used to generate optimal keys for this EC-EC encryption scheme. This algorithm manages keys efficiently and securely, which improves data security during blockchain operations.

RESULTS: The encryption process facilitates the secure transmission of big data over the Ethereum blockchain. Experimental analysis is carried out to demonstrate the efficacy and security offered by the suggested model in transferring big data over the blockchain via smart contracts.

RevDate: 2025-06-26

Salih S, Abdelmaboud A, Husain O, et al (2025)

IoT in urban development: insight into smart city applications, case studies, challenges, and future prospects.

PeerJ. Computer science, 11:e2816.

With the integration of Internet of Things (IoT) technology, smart cities possess the capability to advance their public transportation modalities, address prevalent traffic congestion challenges, refine infrastructure, and optimize communication frameworks, thereby augmenting their progression towards heightened urbanization. Through the integration of sensors, cell phones, artificial intelligence (AI), data analytics, and cloud computing, smart cities worldwide are evolving to be more efficient, productive, and responsive to their residents' needs. While the promise of smart cities has become evident over the past decade, notable challenges, especially in the realm of security, threaten their optimal realization. This research provides a comprehensive survey on IoT in smart cities. It focuses on the components of IoT-based smart cities. Moreover, it explains how different technologies, such as AI, sensing technologies, and networking technologies, are integrated with IoT for smart cities. Additionally, this study provides several case studies of smart cities, investigates the challenges of adopting IoT in smart cities, and provides prevention methods for each challenge. It also provides future directions for upcoming researchers. It serves as a foundational guide for stakeholders and emphasizes the pressing need for a balanced integration of innovation and safety in the smart city landscape.

RevDate: 2025-06-26

S N, S D (2025)

Temporal fusion transformer-based strategy for efficient multi-cloud content replication.

PeerJ. Computer science, 11:e2713.

In cloud computing, ensuring the high availability and reliability of data is paramount for efficient content delivery. Content replication across multiple clouds has emerged as a solution to achieve this. However, managing optimal replication while considering dynamic changes in data popularity and cloud resource availability remains a formidable challenge. To address these challenges, this article employs a TFT-based Dynamic Data Replication Strategy (TD2RS), leveraging the Temporal Fusion Transformer (TFT), a deep learning temporal forecasting model. The proposed system collects historical data on content popularity and resource availability from multiple cloud sources, which are then used as input to the TFT. The TFT captures temporal patterns and forecasts future data demands, and intelligent replication is then performed to optimize content replication across multiple cloud environments based on these forecasts. The framework's performance was validated through extensive experiments using synthetic time-series data simulating varied cloud resource characteristics. The findings include that the proposed TFT approach improves data availability by 20% compared to traditional replication techniques and reduces latency by 15%. These outcomes indicate that the TFT-based replication strategy improves content delivery efficiency in dynamic cloud computing environments, thus providing an effective solution to the availability, reliability, and performance challenges.
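
The forecast-then-replicate loop described here can be sketched independently of the specific forecaster. In the example below a simple exponential moving average stands in for the TFT, and the thresholds and region names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

REGIONS = ["cloud-a", "cloud-b", "cloud-c"]
HIGH, LOW = 800, 200          # requests/hour thresholds for adding/removing replicas

def ema_forecast(history: np.ndarray, alpha: float = 0.5) -> float:
    """One-step-ahead demand forecast via exponential smoothing (stand-in for the TFT)."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return float(level)

def plan_replicas(demand_history: dict, current: dict) -> dict:
    """Add replicas where predicted demand is high; reclaim them where it is low."""
    plan = dict(current)
    for region, history in demand_history.items():
        predicted = ema_forecast(np.asarray(history, dtype=float))
        if predicted > HIGH:
            plan[region] = current[region] + 1
        elif predicted < LOW and current[region] > 1:
            plan[region] = current[region] - 1
    return plan

history = {"cloud-a": [500, 700, 900], "cloud-b": [300, 250, 150], "cloud-c": [400, 420, 410]}
print(plan_replicas(history, {"cloud-a": 2, "cloud-b": 2, "cloud-c": 1}))
```

Swapping the moving average for a trained multi-horizon forecaster such as a TFT changes only the `ema_forecast` stand-in; the replication decision logic stays the same.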

RevDate: 2025-06-26

Ravula V, M Ramaiah (2025)

Enhancing phishing detection with dynamic optimization and character-level deep learning in cloud environments.

PeerJ. Computer science, 11:e2640.

As cloud computing becomes increasingly prevalent, the detection and prevention of phishing URL attacks are essential, particularly in the Internet of Vehicles (IoV) environment, to maintain service reliability. In such a scenario, an attacker could send misleading phishing links, potentially compromising the system's functionality or, at worst, leading to a complete shutdown. To address these emerging threats, this study introduces a novel Dynamic Arithmetic Optimization Algorithm with Deep Learning-Driven Phishing URL Classification (DAOA-DLPC) model for cloud-enabled IoV infrastructure. This research utilizes character-level embeddings instead of word embeddings, as the former can capture intricate URL patterns more effectively. These embeddings are integrated with a deep learning model, the Multi-Head Attention and Bidirectional Gated Recurrent Units (MHA-BiGRU). To improve precision, hyperparameter tuning is performed using DAOA. The proposed method offers a feasible solution for identifying phishing URLs and achieves computational efficiency through the attention mechanism and dynamic hyperparameter optimization. The need for this work comes from the observation that traditional machine learning approaches are not effective in dynamic environments such as the phishing threat landscape. The presented DLPC approach can learn new forms of phishing attacks in real time and reduce false positives. The experimental results show that the proposed DAOA-DLPC model outperforms the other models with an accuracy of 98.85%, recall of 98.49%, and F1-score of 98.38% and can effectively detect safe and phishing URLs in dynamic environments. These results imply that the proposed model is more effective than conventional models at distinguishing between safe and unsafe URLs.
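
A character-level URL classifier combining bidirectional GRUs with multi-head attention can be sketched in Keras as follows. The vocabulary size, sequence length, and layer widths are illustrative; the model mirrors the general MHA-BiGRU idea rather than the paper's exact configuration or its DAOA tuning.

```python
import tensorflow as tf

MAX_LEN, VOCAB = 200, 128          # URL length cap, byte-level character vocabulary

inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
x = tf.keras.layers.Embedding(input_dim=VOCAB, output_dim=32)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True))(x)
x = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
x = tf.keras.layers.GlobalMaxPooling1D()(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # phishing probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

def encode_url(url: str) -> list:
    """Map a URL to a fixed-length sequence of character codes (0 = padding)."""
    codes = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    return codes + [0] * (MAX_LEN - len(codes))
```

Character-level encoding avoids tokenization entirely, which is why obfuscated or never-seen domain strings can still be scored by the model.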

RevDate: 2025-06-26

R A, M G (2025)

Improved salp swarm algorithm based optimization of mobile task offloading.

PeerJ. Computer science, 11:e2818.

BACKGROUND: The realization of computation-intensive applications such as real-time video processing, virtual/augmented reality, and face recognition has become possible for mobile devices with the latest advances in communication technologies. These applications require complex computation for a better user experience and real-time decision-making. However, Internet of Things (IoT) and mobile devices have limited computational power and energy. Executing these computation-intensive tasks on edge devices may result in high energy consumption or high computation latency. In recent times, mobile edge computing (MEC) has been adopted and modernized for offloading these complex tasks. In MEC, IoT devices transmit their tasks to edge servers, which in turn carry out the computation faster.

METHODS: However, IoT devices and edge servers impose an upper limit on the number of concurrent tasks that can be executed. Furthermore, offloading a small task (1 KB) to an edge server reduces energy consumption. Thus, there is a need for an optimal range for task offloading so that energy consumption and response time are minimal. Evolutionary algorithms are well suited to such multiobjective problems: the objectives here are to reduce energy, memory usage, and delay while determining which tasks to offload. Therefore, this study presents an improved salp swarm algorithm-based Mobile Application Offloading Algorithm (ISSA-MAOA) technique for MEC.

RESULTS: This technique harnesses the optimization capabilities of the improved salp swarm algorithm (ISSA) to intelligently allocate computing tasks between mobile devices and the cloud, aiming to concurrently minimize energy consumption, memory usage, and task completion delays. Through the proposed ISSA-MAOA, the study contributes to the enhancement of mobile cloud computing (MCC) frameworks, providing a more efficient and sustainable solution for offloading tasks in mobile applications. The results of this research contribute to better resource management, improved user interactions, and enhanced efficiency in MCC environments.
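
The sketch below illustrates the baseline salp swarm update (leader and follower equations) on a toy weighted energy-plus-delay objective. The paper's improvements to SSA and its actual MEC cost model are not reproduced; the objective function and constants here are purely illustrative.

```python
# Compact sketch of a standard salp swarm optimizer on a toy offloading cost
# (weighted energy + delay). This shows only the baseline SSA update, not the
# paper's improved SSA (ISSA) or its real cost model.
import numpy as np

def cost(x):
    # Toy objective: x[i] in [0, 1] is the offloaded fraction of task i.
    energy = np.sum((1 - x) ** 2)        # local execution energy (illustrative)
    delay = np.sum(0.5 * x + 0.1)        # transmission + edge delay (illustrative)
    return 0.6 * energy + 0.4 * delay

def ssa(dim=5, pop=30, iters=100, lb=0.0, ub=1.0, rng=np.random.default_rng(0)):
    salps = rng.uniform(lb, ub, (pop, dim))
    best = min(salps, key=cost).copy()
    for t in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * t / iters) ** 2)       # exploration/exploitation weight
        for i in range(pop):
            if i == 0:                               # leader follows the food source
                c2, c3 = rng.uniform(size=dim), rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 >= 0.5, best + step, best - step)
            else:                                    # followers track the salp ahead
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], lb, ub)
            if cost(salps[i]) < cost(best):
                best = salps[i].copy()
    return best, cost(best)

solution, value = ssa()
print("offloading fractions:", np.round(solution, 3), "cost:", round(value, 3))
```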

RevDate: 2025-06-26

Ibrahim K, Sajid A, Ullah I, et al (2025)

Fuzzy inference rule based task offloading model (FI-RBTOM) for edge computing.

PeerJ. Computer science, 11:e2657.

The key objective of edge computing is to reduce delays and provide consumers with high-quality services. However, there are certain challenges, such as high user mobility and the dynamic environments created by IoT devices. Additionally, the limitations of constrained device resources impede effective task completion. Task offloading is one of the key challenges for edge computing and is addressed in this research. An efficient fuzzy inference rule-based task-offloading model (FI-RBTOM) is proposed in this context. The key decision of the proposed model is whether a task should be offloaded to an edge server, offloaded to the cloud server, or processed on the local node. The four important input parameters are bandwidth, CPU utilization, task length, and task size. The proposed FI-RBTOM is simulated using the MATLAB fuzzy logic tool with a 75% training and 25% testing split, and an overall error rate of 0.39875 is achieved.
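
As a rough illustration of a rule-based decision over the four inputs listed above, the following Python sketch scores local, edge, and cloud execution with simple membership functions. The breakpoints and rules are invented for illustration; the published model uses the MATLAB fuzzy logic toolbox and its own rule base.

```python
# Simplified stand-in for the fuzzy inference decision (the paper uses the
# MATLAB fuzzy logic toolbox). Membership breakpoints and rules below are
# illustrative assumptions, not the published rule base.
def membership(value, low, high):
    """Degree of 'lowness': 1 at/below low, 0 at/above high, linear between."""
    if value <= low:
        return 1.0
    if value >= high:
        return 0.0
    return (high - value) / (high - low)

def offload_decision(bandwidth_mbps, cpu_util, task_length_mi, task_size_kb):
    small_task = membership(task_size_kb, 50, 500)
    short_task = membership(task_length_mi, 1e3, 1e5)
    free_cpu = membership(cpu_util, 30, 90)
    good_link = 1.0 - membership(bandwidth_mbps, 5, 50)

    local_score = min(free_cpu, max(small_task, short_task))   # keep light work local
    edge_score = min(good_link, 1.0 - free_cpu)                # busy CPU, good link
    cloud_score = min(good_link, 1.0 - short_task)             # long tasks, good link
    scores = {"local": local_score, "edge": edge_score, "cloud": cloud_score}
    return max(scores, key=scores.get), scores

target, scores = offload_decision(bandwidth_mbps=25, cpu_util=85,
                                  task_length_mi=5e4, task_size_kb=800)
print(target, scores)
```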

RevDate: 2025-06-26

Sang Y, Guo Y, Wang B, et al (2025)

Diversified caching algorithm with cooperation between edge servers.

PeerJ. Computer science, 11:e2824.

Edge computing makes up for the high latency of the central cloud network by deploying server resources in close proximity to users. The storage and other resources configured on edge servers are limited, so a reasonable cache replacement strategy is conducive to improving the cache hit ratio of edge services, thereby reducing service latency and enhancing service quality. The spatiotemporal correlation of user service request distribution brings both opportunities and challenges to edge service caching. Collaboration between edge servers is often ignored in existing research on caching decisions, which can easily lead to a low edge cache hit rate, thereby reducing the efficiency of edge resource use and service quality. Therefore, this article proposes a diversified caching method that ensures the diversity of edge cache services and utilizes inter-server collaboration to enhance the cache hit rate. When a service request reaches a server and misses, the proposed algorithm uses the cache information of neighboring nodes to judge whether a neighbor can provide the service, and the server and the neighbor node then jointly decide how to cache the service. The performance of the proposed diversified caching method is evaluated through a large number of simulation experiments, and the experimental results show that the proposed method can improve the cache hit rate by 27.01-37.43%, reduce the average service delay by 25.57-30.68%, and maintain good performance as the scale of the edge computing platform changes.
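
The cooperative lookup described above can be sketched as follows: on a local miss, a server consults its neighbours' cache contents before falling back to the cloud, and admits the item with a simple LRU policy. This is an illustrative toy, not the paper's caching algorithm.

```python
# Minimal sketch of cooperative edge caching with a neighbour lookup and LRU
# admission. Server names, capacities, and request traces are illustrative.
from collections import OrderedDict

class EdgeServer:
    def __init__(self, name, capacity, neighbors=None):
        self.name = name
        self.capacity = capacity
        self.cache = OrderedDict()          # service_id -> payload, LRU order
        self.neighbors = neighbors or []

    def lookup(self, service_id):
        if service_id in self.cache:        # local hit
            self.cache.move_to_end(service_id)
            return self.name
        for nb in self.neighbors:           # neighbour hit avoids the cloud
            if service_id in nb.cache:
                return nb.name
        self._admit(service_id)             # miss: fetch from cloud and cache
        return "cloud"

    def _admit(self, service_id):
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[service_id] = object()

a, b = EdgeServer("edge-A", capacity=3), EdgeServer("edge-B", capacity=3)
a.neighbors, b.neighbors = [b], [a]
print([a.lookup(s) for s in ["s1", "s2", "s1", "s3"]])  # cloud, cloud, edge-A, cloud
print(b.lookup("s2"))                                   # edge-A (neighbour hit)
```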

RevDate: 2018-12-02
CmpDate: 2017-12-25

Long J, MJ Yuan (2017)

A novel clinical decision support algorithm for constructing complete medication histories.

Computer methods and programs in biomedicine, 145:127-133.

A patient's complete medication history is a crucial element for physicians to develop a full understanding of the patient's medical conditions and treatment options. However, due to the fragmented nature of medical data, this process can be very time-consuming, and it is often impossible for physicians to construct a complete medication history for complex patients. In this paper, we describe an accurate, computationally efficient, and scalable algorithm to construct a medication history timeline. The algorithm is developed and validated based on 1 million random prescription records from a large national prescription data aggregator. Our evaluation shows that the algorithm can be scaled horizontally on demand, making it suitable for future delivery in a cloud-computing environment. We also propose that this cloud-based medication history computation algorithm could be integrated into Electronic Medical Records, enabling informed clinical decision-making at the point of care.
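
A minimal sketch of the kind of timeline construction described above, assuming fill records of (drug, fill date, days' supply): consecutive fills whose supply windows overlap or fall within a grace period are merged into continuous medication episodes. The grace period and record format are illustrative assumptions, not the published algorithm.

```python
# Illustrative timeline builder (not the published algorithm): sort fills per
# drug and merge overlapping/adjacent supply windows into episodes.
from datetime import date, timedelta
from collections import defaultdict

def medication_timeline(records, grace_days=7):
    """records: list of (drug, fill_date, days_supply) tuples."""
    episodes = defaultdict(list)
    for drug, filled, supply in sorted(records, key=lambda r: (r[0], r[1])):
        end = filled + timedelta(days=supply)
        spans = episodes[drug]
        if spans and filled <= spans[-1][1] + timedelta(days=grace_days):
            spans[-1] = (spans[-1][0], max(spans[-1][1], end))   # extend episode
        else:
            spans.append((filled, end))                          # new episode
    return dict(episodes)

records = [
    ("metformin", date(2024, 1, 1), 30),
    ("metformin", date(2024, 2, 2), 30),    # refill within the grace window
    ("metformin", date(2024, 6, 1), 30),    # long gap -> new episode
    ("lisinopril", date(2024, 3, 15), 90),
]
print(medication_timeline(records))
```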

RevDate: 2025-06-25

Tran-Van NY, KH Le (2025)

A multimodal skin lesion classification through cross-attention fusion and collaborative edge computing.

Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society, 124:102588 pii:S0895-6111(25)00097-7 [Epub ahead of print].

Skin cancer is a significant global health concern requiring early and accurate diagnosis to improve patient outcomes. While deep learning-based computer-aided diagnosis (CAD) systems have emerged as effective diagnostic support tools, they often face three key limitations: low diagnostic accuracy due to reliance on single-modality data (e.g., dermoscopic images), high network latency in cloud deployments, and privacy risks from transmitting sensitive medical data to centralized servers. To overcome these limitations, we propose a unified solution that integrates a multimodal deep learning model with a collaborative inference scheme for skin lesion classification. Our model enhances diagnostic accuracy by fusing dermoscopic images with patient metadata via a novel cross-attention-based feature fusion mechanism. Meanwhile, the collaborative scheme distributes computational tasks across IoT and edge devices, reducing latency and enhancing data privacy by processing sensitive information locally. Our experiments on multiple benchmark datasets demonstrate the effectiveness of this approach and its generalizability, such as achieving a classification accuracy of 95.73% on the HAM10000 dataset, outperforming competitors. Furthermore, the collaborative inference scheme significantly improves efficiency, achieving latency speedups of up to 20% and 47% over device-only and edge-only schemes.

RevDate: 2025-06-24
CmpDate: 2025-06-24

Nalina V, Prabhu D, Sahayarayan JJ, et al (2025)

Advancements in AI for Computational Biology and Bioinformatics: A Comprehensive Review.

Methods in molecular biology (Clifton, N.J.), 2952:87-105.

The field of computational biology and bioinformatics has seen remarkable progress in recent years, driven largely by advancements in artificial intelligence (AI) technologies. This review synthesizes the latest developments in AI methodologies and their applications in addressing key challenges within the field of computational biology and bioinformatics. This review begins by outlining fundamental concepts in AI relevant to computational biology, including machine learning algorithms such as neural networks, support vector machines, and decision trees. It then explores how these algorithms have been adapted and optimized for specific tasks in bioinformatics, such as sequence analysis, protein structure prediction, and drug discovery. AI techniques can be integrated with big data analytics, cloud computing, and high-performance computing to handle the vast amounts of biological data generated by modern experimental techniques. The chapter discusses the role of AI in processing and interpreting various types of biological data, including genomic sequences, protein-protein interactions, and gene expression profiles. This chapter highlights recent breakthroughs in AI-driven precision medicine, personalized genomics, and systems biology, showcasing how AI algorithms are revolutionizing our understanding of complex biological systems and driving innovations in healthcare and biotechnology. Additionally, it addresses emerging challenges and future directions in the field, such as the ethical implications of AI in healthcare, the need for robust validation and reproducibility of AI models, and the importance of interdisciplinary collaboration between computer scientists, biologists, and clinicians. In conclusion, this comprehensive review provides insights into the transformative potential of AI in computational biology and bioinformatics, offering a roadmap for future research and development in this rapidly evolving field.

RevDate: 2025-06-24
CmpDate: 2025-06-24

Wira SS, Tan CK, Wong WP, et al (2025)

Cloud-native simulation framework for gossip protocol: Modeling and analyzing network dynamics.

PloS one, 20(6):e0325817 pii:PONE-D-24-43101.

This research paper explores the implementation of gossip protocols in a cloud-native framework through network modeling and simulation analysis. Gossip protocols are known for their decentralized and fault-tolerant nature. Simulating gossip protocols with conventional tools may face limitations in flexibility and scalability, complicating analysis, especially for larger or more diverse networks. In this paper, gossip protocols are tested within the context of cloud-native computing, leveraging its scalability, flexibility, and observability. The study aims to assess the performance and feasibility of gossip protocols within cloud-native settings through a simulated environment. The paper delves into the theoretical foundation of gossip protocols, highlights the core components of cloud-native computing, and explains the methodology employed in the simulation. A detailed guide is provided on utilizing cloud-native frameworks to simulate gossip protocols across varied network environments. The simulation analysis provides insights into gossip protocols' behavior in distributed cloud-native systems, evaluating aspects of scalability, reliability, and observability. This investigation contributes to understanding the practical implications and potential applications of gossip protocols within modern cloud-native architectures, which can also apply to conventional network infrastructure.
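
A toy push-gossip simulation, independent of any cloud-native tooling, gives a feel for the dissemination behaviour being analysed; the node count, fanout, and seed below are arbitrary assumptions.

```python
# Toy push-gossip simulation (illustrative, not the paper's framework): each
# round, every informed node forwards the rumour to `fanout` random peers; we
# count the rounds needed for full dissemination.
import random

def gossip_rounds(n_nodes=100, fanout=1, seed=42):
    rng = random.Random(seed)
    informed = {0}                          # node 0 starts with the message
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        new = set()
        for node in informed:
            for _ in range(fanout):
                new.add(rng.randrange(n_nodes))
        informed |= new
    return rounds

print("rounds to full dissemination:", gossip_rounds())
```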

RevDate: 2025-06-20

Maiyza AI, Hassan HA, Sheta WM, et al (2025)

VTGAN based proactive VM consolidation in cloud data centers using value and trend approaches.

Scientific reports, 15(1):20133 pii:10.1038/s41598-025-04757-z.

Reducing energy consumption and optimizing resource usage are essential goals for researchers and cloud providers managing large cloud data centers. Recent advancements have demonstrated the effectiveness of virtual machine consolidation and live migrations as viable solutions. However, many existing strategies are based on immediate workload fluctuations to detect host overload or underload and trigger migration processes. This approach can lead to frequent and unnecessary VM migrations, resulting in energy inefficiency, performance degradation, and service-level agreement (SLA) breaches. Moreover, traditional time series and machine learning models often struggle to accurately predict the dynamic nature of cloud workloads. This paper presents a consolidation strategy based on predicting resource utilization to identify overloaded hosts using novel hybrid value trend generative adversarial network (VTGAN) models. These models not only predict future workloads but also forecast workload trends (i.e., the upward or downward direction of the workload). Trend classification can simplify the decision-making process in resource management approaches. We perform simulations using real PlanetLab workloads on Cloudsim to assess the effectiveness of the proposed VTGAN approaches, based on value and trend, compared to the baseline algorithms. The experimental findings demonstrate that the VTGAN (Up current and predicted trends) approach significantly reduces SLA violations and the number of VM migrations by 79% and 56%, respectively, compared to THR-MMT-PBFD. Additionally, incorporating VTGAN into the VM placement algorithm to disregard hosts predicted to become overloaded further improves performance. After excluding these predicted overloaded servers from the placement process, SLA violations and the number of VM migrations are reduced by 84% and 76%, respectively, compared to THR-MMT-PBFD.
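
The consolidation trigger described above can be sketched as a simple predicate: a host is marked overloaded only when its current utilisation is high and the predicted value or trend also points upward, which suppresses migrations caused by momentary spikes. The thresholds and host values below are invented for illustration and do not reflect the VTGAN models.

```python
# Illustrative overload test combining current value, predicted value, and
# predicted trend (made-up numbers, not the paper's VTGAN outputs).
def overloaded(current_util, predicted_util, trend_up, threshold=0.8):
    return current_util > threshold and (predicted_util > threshold or trend_up)

hosts = {
    "host-1": (0.85, 0.60, False),   # spike now, forecast says it will subside
    "host-2": (0.82, 0.88, True),    # high now and rising -> migrate VMs away
    "host-3": (0.55, 0.70, True),    # busy but under threshold -> leave alone
}
to_migrate = [h for h, (cur, pred, up) in hosts.items() if overloaded(cur, pred, up)]
print(to_migrate)   # ['host-2']
```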

RevDate: 2025-06-19

Kumar J, Saxena D, Gupta K, et al (2025)

A Comprehensively Adaptive Architectural Optimization-Ingrained Quantum Neural Network Model for Cloud Workloads Prediction.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

Accurate workload prediction and advanced resource reservation are indispensably crucial for managing dynamic cloud services. Traditional neural networks and deep learning models frequently encounter challenges with diverse, high-dimensional workloads, especially during sudden resource demand changes, leading to inefficiencies. This issue arises from their limited optimization during training, relying only on parametric (interconnection weights) adjustments using conventional algorithms. To address this issue, this work proposes a novel comprehensively adaptive architectural optimization-based variable quantum neural network (CA-QNN), which combines the efficiency of quantum computing with complete structural and qubit vector parametric learning. The model converts workload data into qubits, processed through qubit neurons with controlled not-gated activation functions for intuitive pattern recognition. In addition, a comprehensive architecture optimization algorithm for networks is introduced to facilitate the learning and propagation of the structure and parametric values in variable-sized quantum neural networks (VQNNs). This algorithm incorporates quantum adaptive modulation (QAM) and size-adaptive recombination during the training process. The performance of the CA-QNN model is thoroughly investigated against seven state-of-the-art methods across four benchmark datasets of heterogeneous cloud workloads. The proposed model demonstrates superior prediction accuracy, reducing prediction errors by up to 93.40% and 91.27% compared to existing deep learning and QNN-based approaches.

RevDate: 2025-06-19

Wu Z, Zhu M, Huang Z, et al (2025)

Graphon-Based Visual Abstraction for Large Multi-Layer Networks.

IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].

Graph visualization techniques provide a foundational framework for offering comprehensive overviews and insights into cloud computing systems, facilitating efficient management and ensuring their availability and reliability. Despite the enhanced computational and storage capabilities of larger-scale cloud computing architectures, they introduce significant challenges to traditional graph-based visualization due to issues of hierarchical heterogeneity, scalability, and data incompleteness. This paper proposes a novel abstraction approach to visualize large multi-layer networks. Our method leverages graphons, a probabilistic representation of network layers, to encompass three core steps: an inner-layer summary to identify stable and volatile substructures, an inter-layer mixup for aligning heterogeneous network layers, and a context-aware multi-layer joint sampling technique aimed at reducing network scale while retaining essential topological characteristics. By abstracting complex network data into manageable weighted graphs, with each graph depicting a distinct network layer, our approach renders these intricate systems accessible on standard computing hardware. We validate our methodology through case studies, quantitative experiments and expert evaluations, demonstrating its effectiveness in managing large multi-layer networks, as well as its applicability to broader network types such as transportation and social networks.

RevDate: 2025-06-16

Sina EM, Pena J, Zafar S, et al (2025)

Automated Machine Learning Classification of Optical Coherence Tomography Images of Retinal Conditions Using Google Cloud Vertex AI.

Retina (Philadelphia, Pa.) pii:00006982-990000000-01081 [Epub ahead of print].

PURPOSE: Automated machine learning (AutoML) is an artificial intelligence (AI) tool that streamlines image recognition model development. This study evaluates the diagnostic performance of Google VertexAI AutoML in differentiating age-related macular degeneration (AMD), diabetic macular edema (DME), epiretinal membrane (ERM), retinal vein occlusion (RVO), and healthy controls using optical coherence tomography (OCT) images.

METHODS: A publicly available, validated OCT dataset of 1965 de-identified images from 759 patients was used. Images were labeled and uploaded to VertexAI. A single-label classification model was trained, validated, and tested using an 80%-10%-10% split. Diagnostic metrics included area under the precision-recall curve (AUPRC), sensitivity, specificity, and positive and negative predictive value (PPV, NPV). A sub-analysis evaluated neovascular versus non-neovascular AMD.

RESULTS: The AutoML model achieved high accuracy (AUPRC = 0.991), with sensitivity, specificity, and PPV of 95.9%, 96.9%, and 95.9%, respectively. AMD classification performed best (AUPRC = 0.999, precision = 98.4%, recall = 99.2%). ERM (AUPRC = 0.978, precision = 92.9%, recall = 86.7%) and DME (AUPRC = 0.895, precision = 81.3%, recall = 86.7%) followed. RVO recall was 80% despite 100% precision. Neovascular AMD outperformed non-neovascular AMD (AUPRC = 0.963 vs. 0.915).

CONCLUSION: Our AutoML model accurately classifies OCT images of retinal conditions, demonstrating performance comparable or superior to traditional ML methods. Its user-friendly design supports scalable, AI-driven clinical integration.
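
For reference, per-class diagnostic metrics of the kind reported above (sensitivity, specificity, PPV, NPV) can be computed from a held-out confusion matrix as in the following sketch; the labels and predictions are made-up placeholders, not the study data.

```python
# Illustrative per-class sensitivity/specificity/PPV/NPV from a confusion
# matrix (toy labels, not the study's test set).
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["AMD", "DME", "ERM", "RVO", "healthy"]
y_true = np.array(["AMD", "AMD", "DME", "ERM", "RVO", "healthy", "DME", "AMD"])
y_pred = np.array(["AMD", "AMD", "DME", "ERM", "healthy", "healthy", "DME", "AMD"])

cm = confusion_matrix(y_true, y_pred, labels=classes)
for i, label in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    print(f"{label}: sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```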

RevDate: 2025-06-18
CmpDate: 2025-06-16

Oliullah K, Whaiduzzaman M, Mahi MJN, et al (2025)

A machine learning based authentication and intrusion detection scheme for IoT users anonymity preservation in fog environment.

PloS one, 20(6):e0323954.

Authentication is a critical challenge in fog computing security, especially as fog servers provide services to many IoT users. The conventional authentication process often requires disclosing sensitive personal information, such as usernames, emails, mobile numbers, and passwords, that end users are reluctant to share with intermediary services (i.e., fog servers). With the rapid growth of IoT networks, existing authentication methods often fail to balance low computational overhead with strong security, leaving systems vulnerable to various attacks, including unauthorized access and data interception. Additionally, traditional intrusion detection methods are not well suited to the distinct characteristics of IoT devices, resulting in low accuracy when applying existing anomaly detection methods. In this paper, we incorporate a two-step authentication process, starting with anonymous authentication using a secret ID with Elliptic Curve Cryptography (ECC), followed by an intrusion detection algorithm for users flagged for suspicious activity. The scheme allows users to register with a Cloud Service Provider (CSP) using encrypted credentials. The CSP responds with a secret number reserved in the fog node for the IoT user. To access the services provided by the Fog Service Provider (FSP), IoT users must submit a secret ID. Furthermore, we introduce a stacked ensemble learning approach for intrusion detection that achieves 99.86% accuracy, 99.89% precision, 99.96% recall, and a 99.91% F1-score in detecting anomalous instances, with a support count of 50,376. This approach is applied when users fail to provide a correct secret ID. Our proposed scheme utilizes several hash functions together with symmetric encryption and decryption techniques to ensure secure end-to-end communication.
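
A minimal sketch of the kind of ECC key agreement such a scheme builds on, using the Python cryptography library: the device and fog node derive a shared session key with ECDH over P-256 and HKDF. The identifiers are illustrative, and the paper's secret-ID registration and stacked-ensemble detection steps are not reproduced.

```python
# Minimal ECDH handshake sketch (not the paper's protocol): device and fog node
# derive the same session key without ever transmitting a password.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

device_key = ec.generate_private_key(ec.SECP256R1())     # IoT user key pair
fog_key = ec.generate_private_key(ec.SECP256R1())        # fog node key pair

# Each side combines its private key with the other's public key.
device_shared = device_key.exchange(ec.ECDH(), fog_key.public_key())
fog_shared = fog_key.exchange(ec.ECDH(), device_key.public_key())
assert device_shared == fog_shared

# Derive a fixed-length session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"iot-fog-session").derive(device_shared)
print("session key established:", session_key.hex()[:16], "...")
```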

RevDate: 2025-06-21

Koning E, Subedi A, R Krishnakumar (2025)

Poplar: a phylogenomics pipeline.

Bioinformatics advances, 5(1):vbaf104.

MOTIVATION: Generating phylogenomic trees from genomic data is essential in understanding biological systems. Each step of this complex process has received extensive attention and has been significantly streamlined over the years. Given the public availability of data, obtaining genomes for a wide selection of species is straightforward. However, analyzing that data to generate a phylogenomic tree is a multistep process with legitimate scientific and technical challenges, often requiring significant input from a domain-area scientist.

RESULTS: We present Poplar, a new, streamlined computational pipeline that addresses the computational and logistical issues that arise when constructing phylogenomic trees. It provides a framework that runs state-of-the-art software for the essential steps in the phylogenomic pipeline, beginning from a genome with or without an annotation and resulting in a species tree. Running Poplar requires no external databases, and it enables parallel execution on clusters and in cloud computing environments. The trees generated by Poplar match closely with state-of-the-art published trees. Using Poplar is far simpler and quicker than manually running a phylogenomic pipeline.

Freely available on GitHub at https://github.com/sandialabs/poplar. Implemented using Python and supported on Linux.

RevDate: 2025-06-14

Langarizadeh M, M Hajebrahimi (2025)

Medical Big Data Storage in Precision Medicine: A Systematic Review.

Journal of biomedical physics & engineering, 15(3):205-220.

BACKGROUND: The characteristics of medical data in Precision Medicine (PM), the challenges related to their storage and retrieval, and the facilities available to address these challenges are important considerations in implementing PM. For this purpose, a secure and scalable infrastructure for integrating and storing various data is needed.

OBJECTIVE: This study aimed to determine the characteristics of PM data and recognize the challenges and solutions related to appropriate infrastructure for data storage and its related issues.

MATERIAL AND METHODS: In this systematic study, a comprehensive search was conducted on Web of Science, Scopus, PubMed, Embase, and Google Scholar covering 2015 to 2023. A total of 16 articles were selected and evaluated based on the inclusion and exclusion criteria and the central search theme of the study.

RESULTS: A total of 1,961 studies were identified from the designated databases; 16 articles met the eligibility criteria and were classified into five main sections: PM data and its major characteristics based on the volume, variety, and velocity (3Vs) of medical big data; data quality issues; appropriate infrastructure for PM data storage; cloud computing and PM infrastructure; and security and privacy. The variety of PM data is categorized into four major categories.

CONCLUSION: A suitable infrastructure for precision medicine should be capable of integrating and storing heterogeneous data from diverse departments and sources. By leveraging big data management experiences from other industries and aligning their characteristics with those in precision medicine, it is possible to facilitate the implementation of precision medicine while avoiding duplication.

RevDate: 2025-06-13

Jourdain S, O'Leary P, Schroeder W, et al (2025)

Trame: Platform Ubiquitous, Scalable Integration Framework for Visual Analytics.

IEEE computer graphics and applications, 45(2):126-134.

Trame is an open-source, Python-based, scalable integration framework for visual analytics. It is the culmination of decades of work by a large and active community, beginning with the creation of VTK, the growth of ParaView as a premier high-performance, client-server computing system, and more recently the creation of web tools, such as VTK.js and VTK.wasm. As an integration environment, trame relies on open-source standards and tools that can be easily combined into effective computing solutions. We have long recognized that impactful analytics tools must be ubiquitous, meaning they run on all major computing platforms, and integrate/interoperate easily with external packages, such as data systems and processing tools, application UI frameworks, and 2-D/3-D graphical libraries. In this article, we present the architecture and use of trame for applications ranging from simple dashboards to complex workflow-based applications. We also describe examples that readily incorporate external tools and run without coding changes on desktop, mobile, cloud, client-server, and interactive computing notebooks, such as Jupyter.

RevDate: 2025-06-13

Li H, Wang J, H Liu (2025)

Empowering Precision Medicine for Rare Diseases through Cloud Infrastructure Refactoring.

AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science, 2025:300-311.

Rare diseases affect approximately 1 in 11 Americans, yet their diagnosis remains challenging due to limited clinical evidence, low awareness, and a lack of definitive treatments. Our project aims to accelerate rare disease diagnosis by developing a comprehensive informatics framework leveraging data mining, semantic web technologies, deep learning, and graph-based embedding techniques. However, our on-premises computational infrastructure faces significant challenges in scalability, maintenance, and collaboration. This study focuses on developing and evaluating a cloud-based computing infrastructure to address these challenges. By migrating to a scalable, secure, and collaborative cloud environment, we aim to enhance data integration, support advanced predictive modeling for differential diagnoses, and facilitate widespread dissemination of research findings to stakeholders, the research community, and the public. The migration is facilitated through a reliable, standardized workflow designed to ensure minimal disruption and maintain data integrity for existing research projects.

RevDate: 2025-06-11
CmpDate: 2025-06-09

Das B, LS Heath (2025)

Variant evolution graph: Can we infer how SARS-CoV-2 variants are evolving?.

PloS one, 20(6):e0323970.

The SARS-CoV-2 virus has undergone extensive mutations over time, resulting in considerable genetic diversity among circulating strains. This diversity directly affects important viral characteristics, such as transmissibility and disease severity. During a viral outbreak, the rapid mutation rate produces a large cloud of variants, referred to as a viral quasispecies. However, many variants are lost due to the bottleneck of transmission and survival. Advances in next-generation sequencing have enabled continuous and cost-effective monitoring of viral genomes, but constructing reliable phylogenetic trees from the vast collection of sequences in GISAID (the Global Initiative on Sharing All Influenza Data) presents significant challenges. We introduce a novel graph-based framework inspired by quasispecies theory, the Variant Evolution Graph (VEG), to model viral evolution. Unlike traditional phylogenetic trees, VEG accommodates multiple ancestors for each variant and maps all possible evolutionary pathways. The strongly connected subgraphs in the VEG reveal critical evolutionary patterns, including recombination events, mutation hotspots, and intra-host viral evolution, providing deeper insights into viral adaptation and spread. We also derive the Disease Transmission Network (DTN) from the VEG, which supports the inference of transmission pathways and super-spreaders among hosts. We have applied our method to genomic data sets from five arbitrarily selected countries: Somalia, Bhutan, Hungary, Iran, and Nepal. Our study compares three methods for computing mutational distances to build the VEG (sourmash, pyani, and edit distance) with the phylogenetic approach using Maximum Likelihood (ML). Among these, ML is the most computationally intensive, requiring multiple sequence alignment and probabilistic inference, making it the slowest. In contrast, sourmash is the fastest, followed by the edit distance approach, while pyani takes more time due to its BLAST-based computations. This comparison highlights the computational efficiency of VEG, making it a scalable alternative for analyzing large viral data sets.
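
To make the graph construction concrete, the sketch below connects toy variant sequences whose pairwise edit distance falls below a cutoff, directing edges from earlier-collected to later-collected variants so that a variant can have multiple ancestors. The sequences, collection order, and cutoff are invented; real genomes would use sourmash- or pyani-style distances as in the paper.

```python
# Illustrative VEG-style construction (not the published pipeline): edges run
# from possible ancestors to descendants when the edit distance is small.
import networkx as nx

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

variants = {"v1": "ACGTACGT", "v2": "ACGTACGA", "v3": "ACGAACGA", "v4": "TTGTACGT"}
collected = {"v1": 1, "v2": 2, "v3": 3, "v4": 3}   # toy collection order

veg = nx.DiGraph()
veg.add_nodes_from(variants)
for a in variants:
    for b in variants:
        if collected[a] < collected[b] and edit_distance(variants[a], variants[b]) <= 2:
            veg.add_edge(a, b)             # a is a possible ancestor of b

print(sorted(veg.edges()))   # e.g. v1->v2, v1->v3, v1->v4, v2->v3
```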

RevDate: 2025-06-11

Aldosari B (2025)

Cybersecurity in Healthcare: New Threat to Patient Safety.

Cureus, 17(5):e83614.

The rapid integration of technology into healthcare systems has brought significant improvements in patient care and operational efficiency, but it has also introduced new cybersecurity challenges. This manuscript explores the evolving landscape of cybersecurity risks in healthcare, with a focus on their potential impact on patient safety and the strategies to mitigate these threats. The rise of interconnected systems, electronic health records (EHRs), and Internet of things (IoT) devices has made safeguarding patient data and healthcare processes increasingly complex. Notable cyber incidents, such as the Anthem Blue Cross breach and the WannaCry ransomware attack, highlight the real-world consequences of these vulnerabilities. The review also examines emerging technologies like AI, cloud computing, telehealth, and wearables, considering their potential benefits and security risks. Best practices for improving healthcare cybersecurity are discussed, including regulatory compliance, risk assessment, data encryption, employee training, and incident response planning. Ultimately, the manuscript emphasizes the ethical responsibility of healthcare organizations to prioritize cybersecurity, ensuring a balance between innovation and security to protect patient data, uphold regulatory standards, and maintain the integrity of healthcare services.

RevDate: 2025-06-12

Palmeira LS, Quintanilha-Peixoto G, da Costa AM, et al (2025)

FUNIN - a fungal glycoside hydrolases 32 enzyme database for developing optimized inulinases.

International journal of biological macromolecules, 318(Pt 2):145050 pii:S0141-8130(25)05603-X [Epub ahead of print].

The enzymatic hydrolysis of inulin, a fructose-rich polysaccharide from plants like Agave spp., is crucial for bioethanol production. Fungal glycoside hydrolase family 32 (GH32) enzymes, especially inulinases, are central to this process, yet no dedicated database existed. To fill this gap, we developed FUNIN, a cloud-based, non-relational database cataloging and analyzing fungal GH32 enzymes relevant to inulin hydrolysis. Built with MongoDB and hosted on AWS, FUNIN integrates enzyme sequences, taxonomic data, physicochemical properties, and annotations from UniProt and InterPro via an automated ELT pipeline. Tools like CLEAN and ProtParam were used for EC number prediction and sequence characterization. The database currently includes 3420 GH32 enzymes, with strong representation from Ascomycota (91.2 %) and key genera such as Fusarium, Aspergillus, and Penicillium. Exo-inulinases (43.9 %), endo-inulinases (33.4 %), and invertases (21.6 %) dominate the dataset. These enzymes share conserved domains (PF00251-PF08244), acidic pI values, and moderate hydrophobicity. A network similarity analysis revealed structural conservation among exo-inulinases. FUNIN includes an automated monthly update via InterPro API, ensuring current data. Publicly accessible at http://funindb.lbqc.org, FUNIN enables rapid data retrieval and supports the development of optimized enzyme cocktails for Agave-based bioethanol production.

RevDate: 2025-06-08
CmpDate: 2025-06-06

Khan AA, Laghari AA, Alroobaea R, et al (2025)

A lightweight scalable hybrid authentication framework for Internet of Medical Things (IoMT) using blockchain hyperledger consortium network with edge computing.

Scientific reports, 15(1):19856.

The Internet of Medical Things (IoMT) has revolutionized the global landscape by enabling interconnectivity between medical devices, sensors, and healthcare applications. However, significant limitations in terms of scalability, privacy, and security are associated with this connectivity. This study presents a scalable, lightweight hybrid authentication system that integrates blockchain and edge computing within a Hyperledger consortium network, particularly Hyperledger Indy, to address these real-time problems. For secure authentication, Hyperledger ensures a permissioned, decentralized, and impenetrable environment, while edge computing lowers latency by processing data closer to IoMT devices. The proposed framework balances security and computing performance by utilizing a hybrid cryptographic technique, namely NuCypher Threshold Proxy Re-Encryption. This integration makes the framework suitable for resource-constrained IoMT devices. By facilitating cooperation between numerous stakeholders with restricted access, the consortium network improves scalability and data governance. Experimental evaluation shows that, compared to state-of-the-art techniques, the proposed framework reduces latency by 2.93% and increases authentication efficiency by 98.33%. In contrast to current solutions, it thus guarantees data integrity and transparency between patients, consultants, and hospitals. This work facilitates the development of dependable, scalable, and secure IoMT applications, enabling next-generation medical applications.

RevDate: 2025-06-04

Zafar I, Unar A, Khan NU, et al (2025)

Molecular biology in the exabyte era: Taming the data deluge for biological revelation and clinical transformation.

Computational biology and chemistry, 119:108535 pii:S1476-9271(25)00195-1 [Epub ahead of print].

The explosive growth in next-generation high-throughput technologies has driven modern molecular biology into the exabyte era, producing an unparalleled volume of biological data across genomics, proteomics, metabolomics, and biomedical imaging. Although this massive expansion of data can power future biological discoveries and precision medicine, it presents considerable challenges, including computational bottlenecks, fragmented data landscapes, and ethical issues related to privacy and accessibility. We highlight novel contributions, such as the application of blockchain technologies to ensure data integrity and traceability, a relatively underexplored solution in this context. We describe how artificial intelligence (AI), machine learning (ML), and cloud computing fundamentally reshape and provide scalable solutions for these challenges by enabling near real-time pattern recognition, predictive modelling, and integrated data analysis. In particular, the use of federated learning models allows privacy-preserving collaboration across institutions. We emphasise the importance of open science, FAIR principles (Findable, Accessible, Interoperable, and Reusable), and blockchain-based audit trails to enhance global collaboration, reproducibility, and data security. By processing multi-omics datasets in integrated formats, we can enhance our understanding of disease mechanisms, facilitate biomarker discovery, and develop AI-assisted, personalised therapeutics. Addressing these technical and ethical demands requires robust governance frameworks that protect sensitive data without hindering innovation. This paper underscores a shift toward more secure, transparent, and collaborative biomedical research, marking a decisive step toward clinical transformation.

RevDate: 2025-06-09
CmpDate: 2025-06-04

Alzahrani N (2025)

Security importance of edge-IoT ecosystem: An ECC-based authentication scheme.

PloS one, 20(6):e0322131.

Despite the many outstanding benefits of cloud computing, such as flexibility, accessibility, efficiency, and cost savings, it still suffers from potential data loss, security concerns, limited control, and availability issues. The edge computing paradigm was introduced to address these issues and challenges better than cloud computing, because it connects directly to Internet of Things (IoT) devices, sensors, and wearables in a decentralized manner, distributing processing power closer to the data source rather than relying on a central cloud server to handle all computations; this allows faster data processing and reduced latency by processing data locally at the 'edge' of the network where it is generated. However, due to the resource-constrained nature of IoT, sensor, and wearable devices, the edge computing paradigm has endured numerous data breaches due to sensitive data proximity, physical tampering vulnerabilities, privacy concerns related to user-near data collection, and challenges in managing security across a large number of edge devices. Existing authentication schemes do not fulfill the security needs of the edge computing paradigm; they either have design flaws, are susceptible to various known threats, such as impersonation, insider attacks, denial of service (DoS), and replay attacks, or experience inadequate performance due to reliance on resource-intensive cryptographic algorithms, like modular exponentiation. Given the pressing need for robust security mechanisms in such a dynamic and vulnerable edge-IoT ecosystem, this article proposes an ECC-based robust authentication scheme for resource-constrained IoT devices to address known vulnerabilities and counter each identified threat. The correctness of the proposed protocol has been scrutinized through the well-known and widely used Real-Or-Random (RoR) model, ProVerif validation, and a discussion of attacks, demonstrating the thoroughness of the proposed protocol. The performance metrics have been measured by considering computational time complexity, communication cost, and storage overheads, further reinforcing confidence in the proposed solution. The comparative analysis results demonstrate that the proposed ECC-based authentication protocol is 90.05% better in terms of computation cost, 62.41% better in communication cost, and consumes 67.42% less energy compared to state-of-the-art schemes. Therefore, the proposed protocol can be recommended for practical implementation in real-world edge-IoT ecosystems.

RevDate: 2025-06-06

Alharbe NR (2025)

Fuzzy clustering based scheduling algorithm for minimizing the tasks completion time in cloud computing environment.

Scientific reports, 15(1):19505 pii:10.1038/s41598-025-02654-z.

This paper explores the complexity of project planning in a cloud computing environment and recognizes the challenges associated with distributed resources, heterogeneity, and dynamic changes in workloads. This research introduces a fresh approach to planning cloud resources more effectively by utilizing fuzzy waterfall techniques. The goal is to make better use of resources while cutting down on scheduling costs. By categorizing resources based on their characteristics, this method aims to lower search costs during project planning and speed up the resource selection process. The paper presents the Budget and Time Constrained Heterogeneous Early Completion (BDHEFT) technique, which is an enhanced version of HEFT tailored to meet specific user requirements, such as budget constraints and execution timelines. With its focus on fuzzy resource allocation that considers task composition and priority, BDHEFT streamlines the project schedule, ultimately reducing both execution time and costs. The algorithm design and mathematical modeling discussed in this study lay a strong foundation for boosting task scheduling efficiency in cloud computing environments, which provides a broad perspective to improve the overall system performance and meet user quality requirements.

RevDate: 2025-06-03

Shi X, S Geng (2025)

Double-edged sword? Heterogeneous effects of digital technology on environmental regulation-driven green transformation.

Journal of environmental management, 389:125960 pii:S0301-4797(25)01936-X [Epub ahead of print].

In the context of China's dual carbon goal, enterprises' green transformation is a key path to advancing the nation's high-quality economic development. A majority of existing studies have regarded digital technology as a homogeneous variable, and the heterogeneous impact of various technologies has not been sufficiently explored. Therefore, based on Chinese enterprises' data from 2012 to 2022, this study systematically examines the influence of environmental regulations (ETS) on enterprises' green transformation (GT) from the perspective of digital empowerment by employing difference-in-differences and threshold regression models. The findings reveal that digital transformation (DT) enhances the influence of environmental regulation by strengthening cost and innovation compensation effects. Further analysis indicates that different digital technologies have significant double-edged sword characteristics, wherein artificial intelligence negatively regulates both mechanisms, reflecting a lack of technological adaptability; cloud computing significantly enhances the positive impact of environmental regulation, reflecting its technological maturity; and big data technologies only positively regulate the innovation compensation effect, reflecting enterprises' application preference. In addition, the combination of digital technologies does not create synergies, indicating firms' challenges in terms of absorptive capacity and organizational change. This study expands the theoretical research on environmental regulation and green transformation and provides a valuable reference for the government to develop targeted policies and for enterprises to optimize the path of green transformation.

RevDate: 2025-06-05

Jiang W, Liu C, Liu W, et al (2025)

Advancements in Intelligent Sensing Technologies for Food Safety Detection.

Research (Washington, D.C.), 8:0713.

As a critical global public health concern, food safety has prompted substantial strategic advancements in detection technologies to safeguard human health. Integrated intelligent sensing systems, incorporating advanced information perception and computational intelligence, have emerged as rapid, user-friendly, and cost-effective solutions through the synergy of multisource sensors and smart computing. This review systematically examines the fundamental principles of intelligent sensing technologies, including optical, electrochemical, machine olfaction, and machine gustatory systems, along with their practical applications in detecting microbial, chemical, and physical hazards in food products. The review analyzes the current state and future development trends of intelligent perception from 3 core aspects: sensing technology, signal processing, and modeling algorithms. Driven by technologies such as machine learning and blockchain, intelligent sensing technology can ensure food safety throughout all stages of food processing, storage, and transportation, and provide support for the traceability and authenticity identification of food. It also presents current challenges and development trends associated with intelligent sensing technologies in food safety, including novel sensing materials, edge-cloud computing frameworks, and the co-design of energy-efficient algorithms with hardware architectures. Overall, by addressing current limitations and harnessing emerging innovations, intelligent sensing technologies are poised to establish a more resilient, transparent, and proactive framework for safeguarding food safety across global supply chains.

RevDate: 2025-06-05

Hu T, Shen P, Zhang Y, et al (2025)

OpenPheno: an open-access, user-friendly, and smartphone-based software platform for instant plant phenotyping.

Plant methods, 21(1):76.

BACKGROUND: Plant phenotyping has become increasingly important for advancing plant science, agriculture, and biotechnology. Classic manual methods are labor-intensive and time-consuming, while existing computational tools often require advanced coding skills, high-performance hardware, or PC-based environments, making them inaccessible to non-experts, to resource-constrained users, and to field technicians.

RESULTS: To respond to these challenges, we introduce OpenPheno, an open-access, user-friendly, and smartphone-based platform encapsulated within a WeChat Mini-Program for instant plant phenotyping. The platform is designed for ease of use, enabling users to phenotype plant traits quickly and efficiently with only a smartphone at hand. We currently instantiate the use of the platform with tools such as SeedPheno, WheatHeadPheno, LeafAnglePheno, SpikeletPheno, CanopyPheno, TomatoPheno, and CornPheno; each offering specific functionalities such as seed size and count analysis, wheat head detection, leaf angle measurement, spikelet counting, canopy structure analysis, and tomato fruit measurement. In particular, OpenPheno allows developers to contribute new algorithmic tools, further expanding its capabilities to continuously facilitate the plant phenotyping community.

CONCLUSIONS: By leveraging cloud computing and a widely accessible interface, OpenPheno democratizes plant phenotyping, making advanced tools available to a broader audience, including plant scientists, breeders, and even amateurs. It can play a role in AI-driven breeding by providing the necessary data for genotype-phenotype analysis, thereby accelerating breeding programs. Its integration with smartphones also positions OpenPheno as a powerful tool in the growing field of mobile-based agricultural technologies, paving the way for more efficient, scalable, and accessible agricultural research and breeding.

RevDate: 2025-06-05

Singh AR, Sujatha MS, Kadu AD, et al (2025)

A deep learning and IoT-driven framework for real-time adaptive resource allocation and grid optimization in smart energy systems.

Scientific reports, 15(1):19309.

The rapid evolution of smart grids, driven by rising global energy demand and renewable energy integration, calls for intelligent, adaptive, and energy-efficient resource allocation strategies. Traditional energy management methods, based on static models or heuristic algorithms, often fail to handle real-time grid dynamics, leading to suboptimal energy distribution, high operational costs, and significant energy wastage. To overcome these challenges, this paper presents ORA-DL (Optimized Resource Allocation using Deep Learning) an advanced framework that integrates deep learning, Internet of Things (IoT)-based sensing, and real-time adaptive control to optimize smart grid energy management. ORA-DL employs deep neural networks, reinforcement learning, and multi-agent decision-making to accurately predict energy demand, allocate resources efficiently, and enhance grid stability. The framework leverages both historical and real-time data for proactive power flow management, while IoT-enabled sensors ensure continuous monitoring and low-latency response through edge and cloud computing infrastructure. Experimental results validate the effectiveness of ORA-DL, achieving 93.38% energy demand prediction accuracy, improving grid stability to 96.25%, and reducing energy wastage to 12.96%. Furthermore, ORA-DL enhances resource distribution efficiency by 15.22% and reduces operational costs by 22.96%, significantly outperforming conventional techniques. These performance gains are driven by real-time analytics, predictive modelling, and adaptive resource modulation. By combining AI-driven decision-making, IoT sensing, and adaptive learning, ORA-DL establishes a scalable, resilient, and sustainable energy management solution. The framework also provides a foundation for future advancements, including integration with edge computing, cybersecurity measures, and reinforcement learning enhancements, marking a significant step forward in smart grid optimization.

RevDate: 2025-06-11
CmpDate: 2025-06-01

Czech E, Tyler W, White T, et al (2025)

Analysis-ready VCF at Biobank scale using Zarr.

GigaScience, 14:.

BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasizes efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. The Biobank-scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable and a more scalable approach is needed.

RESULTS: Zarr is a format for storing multidimensional data that is widely used across the sciences, and is ideally suited to massively parallel processing. We present the VCF Zarr specification, an encoding of the VCF data model using Zarr, along with fundamental software infrastructure for efficient and reliable conversion at scale. We show how this format is far more efficient than standard VCF-based approaches, and competitive with specialized methods for storing genotype data in terms of compression ratios and single-threaded calculation performance. We present case studies on subsets of 3 large human datasets (Genomics England: n=78,195; Our Future Health: n=651,050; All of Us: n=245,394) along with whole genome datasets for Norway Spruce (n=1,063) and SARS-CoV-2 (n=4,484,157). We demonstrate the potential for VCF Zarr to enable a new generation of high-performance and cost-effective applications via illustrative examples using cloud computing and GPUs.

CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs, and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores, while maintaining compatibility with existing file-oriented workflows.
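
The column-wise access pattern that makes this efficient can be sketched as follows. The field name call_genotype follows the VCF Zarr convention, but the store path, shapes, and chunking here are assumptions for illustration, not the datasets analysed in the paper.

```python
# Illustrative column-wise access to a VCF-Zarr store: stream over variant
# chunks and count non-reference alleles without touching row-encoded VCF text.
# "cohort.vcz" is a hypothetical converted store.
import numpy as np
import zarr

root = zarr.open_group("cohort.vcz", mode="r")
gt = root["call_genotype"]                           # (variants, samples, ploidy)

alt_counts = np.zeros(gt.shape[0], dtype=np.int64)
chunk = gt.chunks[0]
for start in range(0, gt.shape[0], chunk):
    block = gt[start:start + chunk]                  # reads only these chunks
    alt_counts[start:start + chunk] = (block > 0).sum(axis=(1, 2))

print("first ten alternate-allele counts:", alt_counts[:10])
```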

RevDate: 2025-06-03
CmpDate: 2025-06-01

Prasad VK, Dansana D, Patro SGK, et al (2025)

IoT-based bed and ventilator management system during the COVID-19 pandemic.

Scientific reports, 15(1):19163.

The COVID-19 outbreak put significant pressure on limited healthcare resources. The specific number of people that may be affected in the near future is difficult to determine, and the coronavirus pandemic's healthcare requirements surpassed available capacity. The Internet of Things (IoT) has emerged as a crucial concept for the advancement of information and communication technology, and IoT devices are used in various medical fields such as real-time tracking, patient data management, and healthcare management. Patients can be tracked using a variety of low-powered and lightweight wireless sensor nodes that use body sensor network (BSN) technology, one of the key IoT technologies in healthcare. This gives clinicians and patients more options in contemporary healthcare management. This study focuses on the conditions for vacating beds available for COVID-19 patients. The patient's health condition is recognized and categorised as positive or negative in terms of coronavirus disease (COVID-19) using IoT sensors. The proposed model presented in this paper uses the ARIMA model and a Transformer model to train on a dataset with the aim of providing enhanced prediction. The physical implementation of these models is expected to accelerate the process of patient admission and the provision of emergency services, as the predicted patient influx data will be made available to the healthcare system in advance. This predictive capability of the proposed model contributes to the efficient management of healthcare resources. The research findings indicate that the proposed models demonstrate high accuracy, as evidenced by their low mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE).
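
A minimal sketch of the ARIMA forecasting step, assuming a synthetic daily-admissions series and the statsmodels library; the model order, the 15% bed-demand ratio, and the data are illustrative assumptions, not the study's tuned models.

```python
# Illustrative ARIMA forecast of daily admissions feeding a bed-demand estimate.
# The series, order, and bed ratio are made up for demonstration.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days = pd.date_range("2021-01-01", periods=60, freq="D")
admissions = pd.Series(50 + np.arange(60) * 0.8 + rng.normal(0, 5, 60), index=days)

model = ARIMA(admissions, order=(2, 1, 1)).fit()
forecast = model.forecast(steps=7)                # expected admissions, next 7 days
beds_needed = np.ceil(forecast * 0.15)            # assume ~15% need inpatient beds
print(pd.DataFrame({"forecast": forecast.round(1), "beds": beds_needed}))
```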

RevDate: 2025-05-30

Biba B, BA O'Shea (2025)

Exploring Public Sentiments of Psychedelics Versus Other Substances: A Reddit-Based Natural Language Processing Study.

Journal of psychoactive drugs [Epub ahead of print].

New methods that capture the public's perception of controversial topics may be valuable. This study investigates public sentiments toward psychedelics and other substances through analysis of Reddit discussions, using Google's cloud-based Natural Language Processing (NLP) infrastructure. Our findings indicate that illicit substances such as heroin and methamphetamine are associated with highly negative general sentiments, whereas psychedelics like Psilocybin, LSD, and Ayahuasca generally evoke neutral to slightly positive sentiments. This study underscores the effectiveness and cost efficiency of NLP and machine learning models in understanding the public's perception of sensitive topics. The findings indicate that online public sentiment toward psychedelics may be growing in acceptance of their therapeutic potential. However, limitations include potential selection bias from the Reddit sample and challenges in accurately interpreting nuanced language using NLP. Future research should aim to diversify data sources and enhance NLP models to capture the full spectrum of public sentiment toward psychedelics. Our findings support the importance of ongoing research and public education to inform policy decisions and therapeutic applications of psychedelics.

RevDate: 2025-05-30

Michaelson D, Schreiber D, Heule MJH, et al (2025)

Producing Proofs of Unsatisfiability with Distributed Clause-Sharing SAT Solvers.

Journal of automated reasoning, 69(2):12.

Distributed clause-sharing SAT solvers can solve challenging problems hundreds of times faster than sequential SAT solvers by sharing derived information among multiple sequential solvers. Unlike sequential solvers, however, distributed solvers have not been able to produce proofs of unsatisfiability in a scalable manner, which limits their use in critical applications. In this work, we present a method to produce unsatisfiability proofs for distributed SAT solvers by combining the partial proofs produced by each sequential solver into a single, linear proof. We first describe a simple sequential algorithm and then present a fully distributed algorithm for proof composition, which is substantially more scalable and general than prior works. Our empirical evaluation with over 1500 solver threads shows that our distributed approach allows proof composition and checking within around 3 × its own (highly competitive) solving time.

RevDate: 2025-06-01

Wang N, Li Y, Li Y, et al (2025)

Fault-tolerant and mobility-aware loading via Markov chain in mobile cloud computing.

Scientific reports, 15(1):18844.

With the development of better communication networks and other related technologies, the IoT has become an integral part of modern IT. However, mobile devices' limited memory, computing power, and battery life pose significant challenges to their widespread use. As an alternative, mobile cloud computing (MCC) makes good use of cloud resources to boost mobile devices' storage and processing capabilities. This involves moving some program logic to the cloud, which improves performance and saves power. Techniques for mobility-aware offloading are necessary because device movement affects connection quality and network access. Reliance on less-than-ideal mobility models, insufficient fault tolerance, inaccurate offloading decisions, and poor task scheduling are just a few of the limitations that current mobility-aware offloading methods often face. Using fault-tolerant approaches and user mobility patterns described by a Markov chain, this research introduces a novel decision-making framework for mobility-aware offloading. The evaluation findings show that, compared to current approaches, the suggested method achieves execution speeds up to 77.35% faster and cuts energy use down to 67.14%.
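
A small sketch of how a first-order Markov chain over access areas can feed the offloading decision: a task is offloaded only when the user is likely to remain reachable long enough for the result to return. The areas, transition probabilities, and threshold are invented for illustration and are not the paper's framework.

```python
# Illustrative Markov-chain mobility check gating the offloading decision.
import numpy as np

areas = ["cell_A", "cell_B", "cell_C"]
P = np.array([[0.7, 0.2, 0.1],          # transition matrix, rows sum to 1
              [0.3, 0.6, 0.1],
              [0.2, 0.3, 0.5]])

def stay_probability(current, steps):
    """Probability of being in the current area again after `steps` moves."""
    i = areas.index(current)
    return np.linalg.matrix_power(P, steps)[i, i]

def should_offload(current, task_rounds, threshold=0.5):
    return stay_probability(current, task_rounds) >= threshold

print(should_offload("cell_A", task_rounds=2))   # True: likely to stay connected
print(should_offload("cell_C", task_rounds=3))   # False: likely to hand over
```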

RevDate: 2025-05-31
CmpDate: 2025-05-29

De Oliveira El-Warrak L, C Miceli de Farias (2025)

TWINVAX: conceptual model of a digital twin for immunisation services in primary health care.

Frontiers in public health, 13:1568123.

INTRODUCTION: This paper presents a proposal for the modelling and reference architecture of a digital twin for immunisation services in primary health care centres. The system leverages Industry 4.0 concepts and technologies, such as the Internet of Things (IoT), machine learning, and cloud computing, to improve vaccination management and monitoring.

METHODS: The modelling was conducted using the Unified Modelling Language (UML) to define workflows and processes such as temperature monitoring of storage equipment and tracking of vaccination status. The proposed reference architecture follows the ISO 23247 standard and is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain.

RESULTS: The system enables the storage, monitoring, and visualisation of data related to the immunisation room, specifically concerning the temperature control of ice-lined refrigerators (ILRs) and thermal boxes. An analytic module has been developed to monitor vaccination coverage, correlating individual vaccination statuses with the official vaccination calendar.

DISCUSSION: The proposed digital twin improves vaccine temperature management, reduces vaccine dose wastage, monitors the population's vaccination status, and supports the planning of more effective immunisation actions. The article also discusses the feasibility, potential benefits, and future impacts of deploying this technology within immunisation services.
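
As a minimal illustration of the kind of cold-chain monitoring rule described in the results above (the 2-8 degree Celsius band is the standard vaccine cold-chain range; the data model and thresholds here are assumptions, not part of the TWINVAX architecture):

from datetime import datetime, timezone

COLD_CHAIN_RANGE_C = (2.0, 8.0)

def check_reading(fridge_id: str, temp_c: float) -> dict:
    low, high = COLD_CHAIN_RANGE_C
    status = "ok" if low <= temp_c <= high else "excursion"
    return {
        "fridge": fridge_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "temp_c": temp_c,
        "status": status,
    }

print(check_reading("ILR-01", 9.4))   # an excursion the digital twin would flag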

RevDate: 2025-05-31

Aleisa MA (2025)

Enhancing Security in CPS Industry 5.0 using Lightweight MobileNetV3 with Adaptive Optimization Technique.

Scientific reports, 15(1):18677.

Advanced Cyber-Physical Systems (CPS) that facilitate seamless communication between humans, machines, and objects are revolutionizing industrial automation as part of Industry 5.0, driven by technologies such as the IIoT, cloud computing, and artificial intelligence. In addition to enabling flexible, individualized production processes, this growth brings fresh cybersecurity risks, including Distributed Denial of Service (DDoS) attacks. This research proposes a deep learning-based approach designed to enhance security in CPS and address these issues. The system's primary goal is to identify and stop advanced cyberattacks, guaranteeing strong protection for industrial processes in a networked, intelligent environment. The study offers a paradigm for improving CPS security in Industry 5.0 by combining effective data preprocessing, lightweight edge computing, and strong encryption methods. The method starts with preprocessing the IoT23 dataset, using Gaussian filters to reduce noise, mean imputation to handle missing values, and Min-Max normalization for data scaling. The model extracts flow-based, time-based, statistical, and deep features, using ResNet-101 for deep feature extraction. Computational efficiency is maximized through MobileNetV3, a lightweight convolutional neural network optimized for mobile and edge devices. The accuracy of the model is further improved by a Chaotic Tent-based Puma Optimization (CTPOA) technique. Finally, to ensure secure data transfer and protect private data in CPS settings, AES encryption is combined with discretionary access control. This framework achieves high performance, with 99.91% accuracy, and provides strong security for Industry 5.0 applications.
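
The exact preprocessing parameters are not given in the abstract; a minimal sketch of the three named steps (mean imputation, Gaussian filtering, Min-Max scaling), with the filter width and toy data purely illustrative, could be:

import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

def preprocess(X: np.ndarray) -> np.ndarray:
    X = SimpleImputer(strategy="mean").fit_transform(X)   # mean imputation of missing values
    X = gaussian_filter1d(X, sigma=1.0, axis=0)           # Gaussian smoothing along each feature
    return MinMaxScaler().fit_transform(X)                # Min-Max scaling to [0, 1]

X = np.array([[1.0, np.nan, 3.0],
              [2.0, 5.0,    6.0],
              [3.0, 7.0,    np.nan]])
print(preprocess(X))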

RevDate: 2025-05-31

Alkhalifa AK, Aljebreen M, Alanazi R, et al (2025)

Mitigating malicious denial of wallet attack using attribute reduction with deep learning approach for serverless computing on next generation applications.

Scientific reports, 15(1):18720.

Denial of Wallet (DoW) attacks are a kind of cyberattack that aims to exhaust a target's financial resources by driving up costs in its serverless or cloud environments. These threats chiefly affect serverless architectures owing to features such as auto-scaling, pay-as-you-go billing, cost amplification, and limited control. Serverless computing, or Function-as-a-Service (FaaS), is a cloud computing (CC) model that lets developers build and run applications without a conventional server infrastructure. Deep learning (DL), a branch of machine learning (ML), has emerged as an effective tool in cybersecurity, permitting more accurate recognition of anomalous behaviour and classification of patterns indicative of threats. This study proposes a Mitigating Malicious Denial of Wallet Attack using Attribute Reduction with Deep Learning (MMDoWA-ARDL) approach for serverless computing on next-generation applications. The primary purpose of the MMDoWA-ARDL approach is a novel framework that effectively detects and mitigates malicious attacks in serverless environments using an advanced deep-learning model. Initially, the MMDoWA-ARDL model applies data pre-processing using Z-score normalization to transform input data into a valid format. A cuckoo search optimization (CSO)-based feature selection process then efficiently identifies the attributes most indicative of potential malicious activity. For DoW attack mitigation, a bi-directional long short-term memory multi-head self-attention network (BMNet) is employed. Finally, hyperparameter tuning with the secretary bird optimizer algorithm (SBOA) enhances the classification outcomes of the BMNet model. A wide-ranging experimental investigation on a benchmark dataset demonstrates the superior performance of the proposed MMDoWA-ARDL technique, which achieved an accuracy of 99.39%, surpassing existing techniques.
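
The BMNet architecture is not specified in detail here; the following PyTorch sketch only illustrates the general bi-directional LSTM plus multi-head self-attention pattern named in the abstract (layer sizes, pooling, and the two-class output are assumptions, and the CSO feature selection and SBOA tuning stages are omitted):

import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)   # benign vs. DoW traffic

    def forward(self, x):                      # x: (batch, time, features)
        h, _ = self.lstm(x)
        a, _ = self.attn(h, h, h)              # self-attention over time steps
        return self.head(a.mean(dim=1))        # pool over time, then classify

model = BiLSTMAttention(n_features=20)
logits = model(torch.randn(8, 50, 20))
print(logits.shape)                            # torch.Size([8, 2])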

RevDate: 2025-05-31

Heydari S, QH Mahmoud (2025)

Tiny Machine Learning and On-Device Inference: A Survey of Applications, Challenges, and Future Directions.

Sensors (Basel, Switzerland), 25(10):.

The growth in artificial intelligence and its applications has led to increased data processing and inference requirements. Traditional cloud-based inference solutions are often used but may prove inadequate for applications requiring near-instantaneous response times. This review examines Tiny Machine Learning, also known as TinyML, as an alternative to cloud-based inference. The review focuses on applications where transmission delays make traditional Internet of Things (IoT) approaches impractical, thus necessitating a solution that uses TinyML and on-device inference. This study, which follows the PRISMA guidelines, covers TinyML's use cases for real-world applications by analyzing experimental studies and synthesizing current research on the characteristics of TinyML experiments, such as machine learning techniques and the hardware used for experiments. This review identifies existing gaps in research as well as the means to address these gaps. The review findings suggest that TinyML has a strong record of real-world usability and offers advantages over cloud-based inference, particularly in environments with bandwidth constraints and use cases that require rapid response times. This review discusses the implications of TinyML's experimental performance for future research on TinyML applications.
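
For readers unfamiliar with the kind of on-device inference the survey covers, a minimal example using the TensorFlow Lite runtime looks like the following (the model file and zeroed input are placeholders):

import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight runtime for edge devices

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sample = np.zeros(inp["shape"], dtype=inp["dtype"])   # replace with a real sensor reading
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()                                   # inference runs entirely on-device
print(interpreter.get_tensor(out["index"]))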

RevDate: 2025-05-31

Jin W, A Rezaeipanah (2025)

Dynamic task allocation in fog computing using enhanced fuzzy logic approaches.

Scientific reports, 15(1):18513.

Fog computing extends cloud services to the edge of the network, enabling low-latency processing and improved resource utilization, which are crucial for real-time Internet of Things (IoT) applications. However, efficient task allocation remains a significant challenge due to the dynamic and heterogeneous nature of fog environments. Traditional task scheduling methods often fail to manage uncertainty in task requirements and resource availability, leading to suboptimal performance. In this paper, we propose a novel approach, DTA-FLE (Dynamic Task Allocation in Fog computing using a Fuzzy Logic Enhanced approach), which leverages fuzzy logic to handle the inherent uncertainty in task scheduling. Our method dynamically adapts to changing network conditions, optimizing task allocation to improve efficiency, reduce latency, and enhance overall system performance. Unlike conventional approaches, DTA-FLE introduces a novel hierarchical scheduling mechanism that dynamically adapts to real-time network conditions using fuzzy logic, ensuring optimal task allocation and improved system responsiveness. Through simulations using the iFogSim framework, we demonstrate that DTA-FLE outperforms conventional techniques in terms of execution time, resource utilization, and responsiveness, making it particularly suitable for real-time IoT applications within hierarchical fog-cloud architectures.
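
DTA-FLE's rule base is not reproduced in the abstract; as a toy illustration of fuzzy scoring for task placement (membership functions, thresholds, and node data are invented), one might write:

def ramp_down(x: float, full: float, zero: float) -> float:
    """Membership that is 1 below `full` and falls linearly to 0 at `zero`."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def node_suitability(latency_ms: float, load: float) -> float:
    low_latency = ramp_down(latency_ms, 10, 80)   # fully "low latency" under 10 ms
    light_load = ramp_down(load, 0.2, 0.9)        # fully "lightly loaded" under 20% utilization
    # Single rule: IF latency is low AND load is light THEN the node is suitable.
    return min(low_latency, light_load)           # fuzzy AND via the minimum t-norm

nodes = {"fog-1": (12, 0.4), "fog-2": (35, 0.1), "cloud": (90, 0.2)}
best = max(nodes, key=lambda n: node_suitability(*nodes[n]))
print(best, node_suitability(*nodes[best]))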

RevDate: 2025-05-31

Ruambo FA, Masanga EE, Lufyagila B, et al (2025)

Brute-force attack mitigation on remote access services via software-defined perimeter.

Scientific reports, 15(1):18599.

Remote Access Services (RAS), including protocols such as Remote Desktop Protocol (RDP), Secure Shell (SSH), Virtual Network Computing (VNC), Telnet, File Transfer Protocol (FTP), and Secure File Transfer Protocol (SFTP), are essential to modern network infrastructures, particularly with the rise of remote work and cloud adoption. However, their exposure significantly increases the risk of brute-force attacks (BFA), where adversaries systematically guess credentials to gain unauthorized access. Traditional defenses like IP blocklisting and multifactor authentication (MFA) often struggle with scalability and adaptability to distributed attacks. This study introduces a zero-trust-aligned Software-Defined Perimeter (SDP) architecture that integrates Single Packet Authorization (SPA) for service cloaking and Connection Tracking (ConnTrack) for real-time session analysis. A Docker-based prototype was developed and tested, demonstrating that no BFA attempts succeeded, that latency fell by more than 10% across all evaluated RAS protocols, and that system CPU utilization dropped by 48.7% under attack conditions, all without impacting normal throughput. It also proved effective against connection-oriented attacks, including port scanning and distributed denial of service (DDoS) attacks. The proposed architecture offers a scalable and efficient security framework by embedding proactive defense at the authentication layer. This work advances zero-trust implementations and delivers practical, low-overhead protection for securing RAS against evolving cyber threats.
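
As a conceptual sketch only (not the paper's Docker-based implementation), single packet authorization boils down to a service that stays cloaked until a client presents one HMAC-signed, timestamped packet; key distribution and replay defence are deliberately simplified below:

import hashlib
import hmac
import os
import time

SHARED_KEY = os.urandom(32)   # in practice, provisioned out of band per client

def make_spa_packet(client_id: bytes) -> bytes:
    ts = str(int(time.time())).encode()
    mac = hmac.new(SHARED_KEY, client_id + b"|" + ts, hashlib.sha256).hexdigest().encode()
    return client_id + b"|" + ts + b"|" + mac

def verify_spa_packet(pkt: bytes, max_age_s: int = 30) -> bool:
    try:
        client_id, ts, mac = pkt.rsplit(b"|", 2)
        expected = hmac.new(SHARED_KEY, client_id + b"|" + ts, hashlib.sha256).hexdigest().encode()
        fresh = abs(time.time() - int(ts)) <= max_age_s
    except ValueError:
        return False
    return fresh and hmac.compare_digest(mac, expected)

print(verify_spa_packet(make_spa_packet(b"laptop-01")))  # True -> allow-list this source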

RevDate: 2025-06-04
CmpDate: 2025-05-26

Marini S, Barquero A, Wadhwani AA, et al (2024)

OCTOPUS: Disk-based, Multiplatform, Mobile-friendly Metagenomics Classifier.

AMIA ... Annual Symposium proceedings. AMIA Symposium, 2024:798-807.

Portable genomic sequencers such as Oxford Nanopore's MinION enable real-time applications in clinical and environmental health. However, downstream analytics become a bottleneck when bioinformatics pipelines are unavailable, e.g., when cloud processing is unreachable due to the absence of an Internet connection or when only low-end computing devices can be carried on site. Here we present platform-friendly software for portable metagenomic analysis of Nanopore data, the Oligomer-based Classifier of Taxonomic Operational and Pan-genome Units via Singletons (OCTOPUS). OCTOPUS is written in Java and reimplements several features of the popular Kraken2 and KrakenUniq software, adding original components that improve metagenomics classification on incomplete/sampled reference databases and make it well suited to running on smartphones or tablets. OCTOPUS obtains sensitivity and precision comparable to Kraken2 while dramatically decreasing (4- to 16-fold) the false positive rate and yielding high correlation on real-world data. OCTOPUS is available along with customized databases at https://github.com/DataIntellSystLab/OCTOPUS and https://github.com/Ruiz-HCI-Lab/OctopusMobile.
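
OCTOPUS's singleton and pan-genome logic is not reproduced here; the toy sketch below only illustrates the underlying idea of assigning a read to the reference taxon with which it shares the most k-mers (the reference sets and k are invented):

K = 8
REFERENCE_KMERS = {   # tiny stand-in for a real reference database
    "taxon_A": {"ACGTACGT", "CGTACGTA"},
    "taxon_B": {"TTTTAAAA", "AAAATTTT"},
}

def kmers(seq: str, k: int = K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify(read: str) -> str:
    read_kmers = kmers(read)
    scores = {t: len(read_kmers & ref) for t, ref in REFERENCE_KMERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("ACGTACGTACGT"))   # -> taxon_A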

RevDate: 2025-06-13

Adams MCB, Hudson C, Chen W, et al (2025)

Automated multi-instance REDCap data synchronization for NIH clinical trial networks.

JAMIA open, 8(3):ooaf036.

OBJECTIVES: The main goal is to develop an automated process for connecting Research Electronic Data Capture (REDCap) instances in a clinical trial network to allow for deidentified transfer of research surveys to a cloud computing data commons for discovery.

MATERIALS AND METHODS: To automate the process of consolidating data from remote clinical trial sites into 1 dataset at the coordinating/storage site, we developed a Hypertext Preprocessor script that operates in tandem with a server-side scheduling system (eg, Cron) to set up practical data extraction schedules for each remote site.

RESULTS: The REDCap Application Programming Interface (API) Connection provides a novel implementation for automated synchronization between multiple REDCap instances across a distributed clinical trial network, enabling secure and efficient data transfer between study sites and coordination centers. Additionally, the protocol checker allows for automated reporting on conformance with planned data library protocols.

DISCUSSION: Working from a shared and accepted core library of REDCap surveys was critical to the success of this implementation. This model also facilitates Institutional Review Board (IRB) approvals because the coordinating center can designate which surveys and data elements are to be transferred. Hence, protected health information can be transformed or withheld depending on the permission given by the IRB at the coordinating center level. For the NIH HEAL clinical trial networks, this unified data collection works toward the goal of creating a deidentified dataset for transfer to a Gen3 data commons.

CONCLUSION: We established several simple and research-relevant tools, REDCap API Connection and REDCap Protocol Check, to support the emerging needs of clinical trial networks with increased data harmonization complexity.
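
The authors' synchronization tool is a PHP script scheduled with cron; purely as an illustration of the underlying REDCap API export it automates, a record pull from one remote site might look like this in Python (the URL and token are placeholders, and de-identification plus import into the coordinating instance are omitted):

import requests

REMOTE_API_URL = "https://remote-site.example.edu/redcap/api/"
REMOTE_API_TOKEN = "REMOTE_SITE_API_TOKEN"

def export_remote_records():
    payload = {
        "token": REMOTE_API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
    }
    response = requests.post(REMOTE_API_URL, data=payload, timeout=60)
    response.raise_for_status()
    return response.json()

records = export_remote_records()
print(len(records), "records pulled from the remote site")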




