QUERY RUN: 19 Feb 2025 at 01:41
HITS: 3901

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 19 Feb 2025 at 01:41

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2025-02-18

Pricope NG, EG Dalton (2025)

Mapping coastal resilience: Precision insights for green infrastructure suitability.

Journal of environmental management, 376:124511 pii:S0301-4797(25)00487-6 [Epub ahead of print].

Addressing the need for effective flood risk mitigation strategies and enhanced urban resilience to climate change, we introduce a cloud-computed Green Infrastructure Suitability Index (GISI) methodology. This approach combines remote sensing and geospatial modeling to create a cloud-computed blend that synthesizes land cover classifications, biophysical variables, and flood exposure data to map suitability for green infrastructure (GI) implementation at both street and landscape levels. The GISI methodology provides a flexible and robust tool for urban planning, capable of accommodating diverse data inputs and adjustments, making it suitable for various geographic contexts. Applied within the Wilmington Urban Area Metropolitan Planning Organization (WMPO) in North Carolina, USA, our findings show that residential parcels, constituting approximately 91% of the total identified suitable areas, are optimally positioned for GI integration. This underscores the potential for embedding GI within developed residential urban landscapes to bolster ecosystem and community resilience. Our analysis indicates that 7.19% of the WMPO area is highly suitable for street-level GI applications, while 1.88% is ideal for landscape GI interventions, offering opportunities to enhance stormwater management and biodiversity at larger and more connected spatial scales. By identifying specific parcels with high suitability for GI, this research provides a comprehensive and transferable, data-driven foundation for local and regional planning efforts. The scalability and adaptability of the proposed modeling approach make it a powerful tool for informing sustainable urban development practices. Future work will focus on more spatially-resolved models of these areas and the exploration of GI's multifaceted benefits at the local level, aiming to guide the deployment of GI projects that align with broader environmental and social objectives.
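
As a rough illustration of the weighted-overlay logic behind a suitability index like the GISI (not the authors' code; the layer names, weights, and the top-decile cut-off below are hypothetical assumptions), a minimal sketch in Python:

import numpy as np

rng = np.random.default_rng(42)

# Stand-in normalized (0-1) raster layers; in practice these would come from
# land cover classifications, biophysical variables, and flood exposure data.
land_cover_score = rng.random((100, 100))
biophysical_score = rng.random((100, 100))
flood_exposure = rng.random((100, 100))

# Hypothetical weights; the paper's actual weighting scheme is not reproduced here.
gisi = 0.4 * land_cover_score + 0.2 * biophysical_score + 0.4 * flood_exposure

# Flag the top decile as "highly suitable" for green infrastructure (illustrative cut-off).
highly_suitable = gisi >= np.quantile(gisi, 0.9)
print(f"Highly suitable share: {highly_suitable.mean():.1%}")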

RevDate: 2025-02-18

Zan T, Jia X, Guo X, et al (2025)

Research on variable-length control chart pattern recognition based on sliding window method and SECNN-BiLSTM.

Scientific reports, 15(1):5921.

Control charts, as essential tools in Statistical Process Control (SPC), are frequently used to analyze whether production processes are under control. Most existing control chart recognition methods target fixed-length data, failing to meet the needs of recognizing variable-length control charts in production. This paper proposes a variable-length control chart recognition method based on the Sliding Window Method and SE-attention CNN and Bi-LSTM (SECNN-BiLSTM). A cloud-edge integrated recognition system was developed using wireless digital calipers, embedded devices, and cloud computing. Control chart data of different lengths are transformed from one-dimensional sequences into two-dimensional matrices using a sliding window approach and then fed into a deep learning network combining SE-attention CNN and Bi-LSTM. This network, inspired by residual structures, extracts multiple features to build a control chart recognition model. Simulations, the cloud-edge recognition system, and engineering applications demonstrate that this method efficiently and accurately recognizes variable-length control charts, establishing a foundation for more efficient pattern recognition.
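
A minimal sketch of the sliding-window idea the paper uses to turn variable-length series into fixed-width network inputs; the window and step sizes are illustrative assumptions, not the paper's settings:

import numpy as np

def sliding_window_matrix(series, window=32, step=8):
    # Slice a variable-length 1-D series into overlapping fixed-length windows,
    # stacked as rows of a 2-D matrix; the width is fixed regardless of series
    # length, while the row count tracks the length.
    n_windows = (len(series) - window) // step + 1
    return np.stack([series[i * step : i * step + window] for i in range(n_windows)])

chart = np.random.randn(173)          # a variable-length control chart sequence
X = sliding_window_matrix(chart)
print(X.shape)                        # (18, 32): 2-D input for the network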

RevDate: 2025-02-18
CmpDate: 2025-02-18

Bathelt F, Lorenz S, Weidner J, et al (2025)

Application of Modular Architectures in the Medical Domain - a Scoping Review.

Journal of medical systems, 49(1):27.

The healthcare sector is notable for its reliance on discrete, self-contained information systems, which are often characterised by the presence of disparate data silos. The growing demands for documentation, quality assurance, and secondary use of medical data for research purposes have underscored the necessity for solutions that are more flexible, straightforward to maintain, and interoperable. In this context, modular systems have the potential to act as a catalyst for change, offering the capacity to encapsulate and combine functionalities in an adaptable manner. The objective of this scoping review is to determine the extent to which modular systems are employed in the medical field. The review will provide a detailed overview of the effectiveness of service-oriented or microservice architectures, the challenges that should be addressed during implementation, and the lessons that can be learned from countries with productive use of such modular architectures. The review shows a rise in the use of microservices, indicating a shift towards encapsulated autonomous functions. The implementation should use HL7 FHIR as the communication standard, deploy RESTful interfaces and standard protocols for technical data exchange, and apply the HIPAA Security Rule for security purposes. User involvement is essential, as is integrating services into existing workflows. Modular architectures can facilitate flexibility and scalability. However, there are well-documented performance issues associated with microservice architectures, namely a high communication demand. One potential solution to this problem may be to integrate modular architectures into a cloud computing environment, which would require further investigation.
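
For flavor, a minimal sketch of the kind of HL7 FHIR RESTful exchange the review recommends, using Python's requests library; the server is HAPI's public R4 test endpoint and the resource ID is hypothetical and may not exist:

import requests

BASE = "https://hapi.fhir.org/baseR4"   # public FHIR test server

resp = requests.get(
    f"{BASE}/Patient/example",
    headers={"Accept": "application/fhir+json"},  # request the FHIR JSON representation
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()
print(patient.get("resourceType"), patient.get("id"))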

RevDate: 2025-02-18

Kelliher JM, Xu Y, Flynn MC, et al (2024)

Standardized and accessible multi-omics bioinformatics workflows through the NMDC EDGE resource.

Computational and structural biotechnology journal, 23:3575-3583.

Accessible and easy-to-use standardized bioinformatics workflows are necessary to advance microbiome research from observational studies to large-scale, data-driven approaches. Standardized multi-omics data enables comparative studies, data reuse, and applications of machine learning to model biological processes. To advance broad accessibility of standardized multi-omics bioinformatics workflows, the National Microbiome Data Collaborative (NMDC) has developed the Empowering the Development of Genomics Expertise (NMDC EDGE) resource, a user-friendly, open-source web application (https://nmdc-edge.org). Here, we describe the design and main functionality of the NMDC EDGE resource for processing metagenome, metatranscriptome, natural organic matter, and metaproteome data. The architecture relies on three main layers (web application, orchestration, and execution) to ensure flexibility and expansion to future workflows. The orchestration and execution layers leverage best practices in software containers and accommodate high-performance computing and cloud computing services. Further, we have adopted a robust user research process to collect feedback for continuous improvement of the resource. NMDC EDGE provides an accessible interface for researchers to process multi-omics microbiome data using production-quality workflows to facilitate improved data standardization and interoperability.

RevDate: 2025-02-17

Dinpajooh M, Hightower GL, Overstreet RE, et al (2025)

On the stability constants of metal-nitrate complexes in aqueous solutions.

Physical chemistry chemical physics : PCCP [Epub ahead of print].

Stability constants of simple reactions involving addition of the NO3[-] ion to hydrated metal complexes, [M(H2O)x][n+], are calculated with a computational workflow developed using cloud computing resources. The computational workflow performs conformational searches for metal complexes at both low and high levels of theory in conjunction with a continuum solvation model (CSM). The low-level theory is mainly used for the initial conformational searches, which are complemented with high-level density functional theory conformational searches in the CSM framework to determine the coordination chemistry relevant for stability constant calculations. In this regard, the lowest energy conformations are found to obtain the reaction free energies for the addition of one NO3[-] to [M(H2O)x][n+] complexes, where M represents Fe(II), Fe(III), Sr(II), Ce(III), Ce(IV), and U(VI). Structural analysis of hundreds of optimized geometries at high-level theory reveals that NO3[-] coordinates with Fe(II) and Fe(III) in either a monodentate or bidentate manner. Interestingly, the lowest-energy conformations of Fe(II) metal-nitrate complexes exhibit monodentate or bidentate coordination with a coordination number of 6 while the bidentate seven-coordinated Fe(II) metal-nitrate complexes are approximately 2 kcal mol[-1] higher in energy. Notably, for Fe(III) metal-nitrate complexes, the bidentate seven-coordinated configuration is more stable than the six-coordinated Fe(II) complexes (monodentate or bidentate) by a few thermal energy units. In contrast, Sr(II), Ce(III), Ce(IV), and U(VI) metal ions predominantly coordinate with NO3[-] in a bidentate manner, exhibiting typical coordination numbers of 7, 9, 9, and 5, respectively. Stability constants are accordingly calculated using linear free energy approaches to account for systematic errors, and good agreement is obtained between the calculated stability constants and the available experimental data.
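
The link between computed reaction free energies and stability constants is the standard thermodynamic relation ΔG = -RT ln K. A minimal sketch of that conversion (the ΔG value is purely illustrative, not from the paper):

import math

R = 1.987204e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15        # standard temperature, K

def log10_K(delta_g_kcal_per_mol):
    # ΔG = -RT ln K  =>  log10 K = -ΔG / (ln(10) * R * T)
    return -delta_g_kcal_per_mol / (math.log(10) * R * T)

# Hypothetical free energy for [M(H2O)x]n+ + NO3- -> complex (illustration only).
print(f"log10 K = {log10_K(-2.0):.2f}")   # ~1.47 for ΔG = -2 kcal/mol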

RevDate: 2025-02-17

Thilakarathne NN, Abu Bakar MS, Abas PE, et al (2025)

Internet of things enabled smart agriculture: Current status, latest advancements, challenges and countermeasures.

Heliyon, 11(3):e42136.

It is no wonder that agriculture plays a vital role in the development of some countries when their economies rely on agricultural activities and the production of food for human survival. Owing to the ever-increasing world population, estimated at 7.9 billion in 2022, feeding this number of people has become a concern, as the current rate of agricultural food production is constrained for various reasons. The advent of Internet of Things (IoT)-based technologies in the 21st century has led to the reshaping of every industry, including agriculture, and has paved the way for smart agriculture, with the technology used towards automating and controlling most aspects of traditional agriculture. Smart agriculture, interchangeably known as smart farming, utilizes IoT and related enabling technologies such as cloud computing, artificial intelligence, and big data in agriculture and offers the potential to enhance agricultural operations by automating and making intelligent decisions, resulting in increased efficiency and a better yield with minimum waste. Consequently, most governments are spending more money and offering incentives to switch from traditional to smart agriculture. Nonetheless, the COVID-19 global pandemic served as a catalyst for change in the agriculture industry, driving a shift toward greater reliance on technology over traditional labor for agricultural tasks. In this regard, this research aims to synthesize the current knowledge of smart agriculture, highlighting its current status, main components, latest application areas, advanced agricultural practices, hardware and software used, success stories, potential challenges and countermeasures to them, and future trends, for the growth of the industry as well as to serve as a reference for future research.

RevDate: 2025-02-14

Wyman A, Z Zhang (2025)

A Tutorial on the Use of Artificial Intelligence Tools for Facial Emotion Recognition in R.

Multivariate behavioral research [Epub ahead of print].

Automated detection of facial emotions has been an interesting topic in social and behavioral research for multiple decades but has become possible only recently. In this tutorial, we review three popular artificial intelligence based emotion detection programs that are accessible to R programmers: Google Cloud Vision, Amazon Rekognition, and Py-Feat. We present their advantages and disadvantages, and provide sample code so that researchers can immediately begin designing, collecting, and analyzing emotion data. Furthermore, we provide an introductory-level explanation of the machine learning, deep learning, and computer vision algorithms that underlie most emotion detection programs in order to improve literacy in explainable artificial intelligence in the social and behavioral science literature.
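
Of the three tools reviewed, Py-Feat is directly scriptable from Python. A minimal sketch, assuming a local image file; Py-Feat's API names may differ across versions, so treat this as an approximation of the documented usage rather than guaranteed code:

from feat import Detector

detector = Detector()                              # default face/emotion models
result = detector.detect_image("face_photo.jpg")   # hypothetical image path; returns a Fex object

# Emotion probability columns (anger, disgust, fear, happiness, sadness, surprise, neutral).
print(result.emotions)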

RevDate: 2025-02-13

Guturu H, Nichols A, Cantrell LS, et al (2025)

Cloud-Enabled Scalable Analysis of Large Proteomics Cohorts.

Journal of proteome research [Epub ahead of print].

Rapid advances in depth and throughput of untargeted mass-spectrometry-based proteomic technologies enable large-scale cohort proteomic and proteogenomic analyses. As such, the data infrastructure and search engines required to process data must also scale. This challenge is amplified in search engines that rely on library-free match between runs (MBR) search, which enables enhanced depth-per-sample and data completeness. However, to date, no MBR-based search could scale to process cohorts of thousands or more individuals. Here, we present a strategy to deploy search engines in a distributed cloud environment without source code modification, thereby enhancing resource scalability and throughput. Additionally, we present an algorithm, Scalable MBR, that replicates the MBR procedure of the popular DIA-NN software for scalability to thousands of samples. We demonstrate that Scalable MBR can search thousands of MS raw files in a few hours compared to the days required for the original DIA-NN MBR procedure, and that the results are almost indistinguishable from those of DIA-NN native MBR. We additionally show that empirical spectra generated by Scalable MBR better approximate DIA-NN native MBR compared to semiempirical alternatives such as ID-RT-IM MBR, preserving user choice to use empirical libraries in large cohort analysis. The method has been tested to scale to over 15,000 injections and is available for use in the Proteograph Analysis Suite.

RevDate: 2025-02-13

Li H, H Chung (2025)

Prediction of Member Forces of Steel Tubes on the Basis of a Sensor System with the Use of AI.

Sensors (Basel, Switzerland), 25(3): pii:s25030919.

The rapid development of AI (artificial intelligence), sensor technology, high-speed Internet, and cloud computing has demonstrated the potential of data-driven approaches in structural health monitoring (SHM) within the field of structural engineering. Algorithms based on machine learning (ML) models are capable of discerning intricate structural behavioral patterns from real-time data gathered by sensors, thereby offering solutions to engineering quandaries in structural mechanics and SHM. This study presents an innovative approach based on AI and a fiber-reinforced polymer (FRP) double-helix sensor system for the prediction of forces acting on steel tube members in offshore wind turbine support systems; this enables structural health monitoring of the support system. The steel tube, as the transitional member, and the FRP double-helix sensor system were initially modeled in three dimensions using ABAQUS finite element software. Subsequently, the data obtained from the finite element analysis (FEA) were input into a fully connected neural network (FCNN) model, with the objective of establishing a nonlinear mapping relationship between the inputs (strain) and the outputs (reaction force). In the FCNN model, the impact of the number of input variables on the model's predictive performance is examined through cross-comparison of different combinations and positions of the six sets of input variables. Based on an evaluation of engineering costs and the number of strain sensors, a series of potential combinations of variables is identified for further optimization. Furthermore, the potential variable combinations were optimized using a convolutional neural network (CNN) model, resulting in optimal input variable combinations that achieve, with fewer sensors, the accuracy of combinations with more input variables. This not only improves the prediction performance of the model but also effectively controls the engineering cost. The model performance was evaluated using several metrics, including R[2], MSE, MAE, and SMAPE. The results demonstrated that the CNN model exhibited notable advantages in terms of fitting accuracy and computational efficiency when confronted with a limited data set. To provide further support for practical applications, an interactive graphical user interface (GUI)-based sensor-coupled mechanical prediction system for steel tubes was developed. This system enables engineers to predict the member forces of steel tubes in real time, thereby enhancing the efficiency and accuracy of SHM for offshore wind turbine support systems.
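
A minimal sketch of the strain-to-force regression described above, using a small fully connected network in PyTorch; the layer sizes, the six strain inputs, and the random stand-in data are illustrative assumptions, not the paper's configuration:

import torch
import torch.nn as nn

# Stand-ins for FEA-generated training data: 6 strain inputs -> 1 reaction force.
strain = torch.randn(256, 6)
force = torch.randn(256, 1)

model = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                 # predicted reaction force
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):              # short illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(strain), force)
    loss.backward()
    optimizer.step()
print(f"final MSE: {loss.item():.4f}")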

RevDate: 2025-02-13

Alboqmi R, RF Gamble (2025)

Enhancing Microservice Security Through Vulnerability-Driven Trust in the Service Mesh Architecture.

Sensors (Basel, Switzerland), 25(3): pii:s25030914.

Cloud-native computing enhances the deployment of microservice architecture (MSA) applications by improving scalability and resilience, particularly in Beyond 5G (B5G) environments such as Sixth-Generation (6G) networks. This is achieved through the ability to replace traditional hardware dependencies with software-defined solutions. While service meshes enable secure communication for deployed MSAs, they struggle to identify vulnerabilities inherent to microservices. The reliance on third-party libraries and modules, essential for MSAs, introduces significant supply chain security risks. Implementing a zero-trust approach for MSAs requires robust mechanisms to continuously verify and monitor the software supply chain of deployed microservices. However, existing service mesh solutions lack runtime trust evaluation capabilities for continuous vulnerability assessment of third-party libraries and modules. This paper introduces a mechanism for continuous runtime trust evaluation of microservices, integrating vulnerability assessments within a service mesh to enhance the deployed MSA application. The proposed approach dynamically assigns trust scores to deployed microservices, rewarding secure practices such as timely vulnerability patching. It also enables the sharing of assessment results, enhancing mitigation strategies across the deployed MSA application. The mechanism is evaluated using the Train Ticket MSA, a complex open-source benchmark MSA application deployed with Docker containers, orchestrated using Kubernetes, and integrated with the Istio service mesh. Results demonstrate that the enhanced service mesh effectively supports dynamic trust evaluation based on the vulnerability posture of deployed microservices, significantly improving MSA security and paving the way for future self-adaptive solutions.
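
One plausible reading of the vulnerability-driven trust idea (not the paper's actual scoring function): decay the trust score toward a target that penalizes open CVSS findings and rewards timely patching. All constants below are illustrative assumptions:

def update_trust(score, open_cvss, patched_promptly, alpha=0.1):
    # Move the trust score toward a target that penalizes open vulnerabilities
    # (CVSS 0-10 severities) and rewards timely patching.
    penalty = sum(open_cvss) / (10.0 * max(len(open_cvss), 1))
    reward = 0.2 if patched_promptly else 0.0
    target = max(0.0, min(1.0, 1.0 - penalty + reward))
    return (1 - alpha) * score + alpha * target

score = 0.80
score = update_trust(score, open_cvss=[7.5, 4.3], patched_promptly=True)
print(round(score, 3))   # 0.781: slightly degraded by the open findings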

RevDate: 2025-02-13
CmpDate: 2025-02-13

Abushark YB, Hassan S, AI Khan (2025)

Optimized Adaboost Support Vector Machine-Based Encryption for Securing IoT-Cloud Healthcare Data.

Sensors (Basel, Switzerland), 25(3): pii:s25030731.

The Internet of Things (IoT) connects various medical devices that enable remote monitoring, which can improve patient outcomes and help healthcare providers deliver precise diagnoses and better service to patients. However, IoT-based healthcare management systems face significant challenges in data security, such as maintaining a triad of confidentiality, integrity, and availability (CIA) and securing data transmission. This paper proposes a novel AdaBoost support vector machine (ASVM) based on the grey wolf optimization and international data encryption algorithm (ASVM-based GWO-IDEA) to secure medical data in an IoT-enabled healthcare system. The primary objective of this work was to prevent possible cyberattacks, unauthorized access, and tampering with the security of such healthcare systems. The proposed scheme encodes the healthcare data before transmitting them, protecting them from unauthorized access and other network vulnerabilities. The scheme was implemented in Python, and its efficiency was evaluated using a Kaggle-based public healthcare dataset. The performance of the model/scheme was evaluated with existing strategies in the context of effective security parameters, such as the confidentiality rate and throughput. When using the suggested methodology, the data transmission process was improved and achieved a high throughput of 97.86%, an improved resource utilization degree of 98.45%, and a high efficiency of 93.45% during data transmission.

RevDate: 2025-02-13

Mahedero Biot F, Fornes-Leal A, Vaño R, et al (2025)

A Novel Orchestrator Architecture for Deploying Virtualized Services in Next-Generation IoT Computing Ecosystems.

Sensors (Basel, Switzerland), 25(3): pii:s25030718.

The Next-Generation IoT integrates diverse technological enablers, allowing the creation of advanced systems with increasingly complex requirements and maximizing the use of available IoT-edge-cloud resources. This paper introduces an orchestrator architecture for dynamic IoT scenarios, inspired by ETSI NFV MANO and Cloud Native principles, where distributed computing nodes often have unfixed and changing networking configurations. Unlike traditional approaches, this architecture also focuses on managing services across massively distributed mobile nodes, as demonstrated in the automotive use case presented. Apart from working as a MANO framework, the proposed solution efficiently handles service lifecycle management in large fleets of vehicles without relying on public or static IP addresses for connectivity. Its modular, microservices-based approach ensures adaptability to emerging trends like Edge Native, WebAssembly and RISC-V, positioning it as a forward-looking innovation for IoT ecosystems.

RevDate: 2025-02-13
CmpDate: 2025-02-13

Khan FU, Shah IA, Jan S, et al (2025)

Machine Learning-Based Resource Management in Fog Computing: A Systematic Literature Review.

Sensors (Basel, Switzerland), 25(3): pii:s25030687.

This systematic literature review analyzes machine learning (ML)-based techniques for resource management in fog computing. Utilizing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, this paper focuses on ML and deep learning (DL) solutions. Resource management in the fog computing domain was thoroughly analyzed by identifying the key factors and constraints. A total of 68 research papers, including extended versions, were selected and included in this study. The findings highlight a strong preference for DL in addressing resource management challenges within a fog computing paradigm, i.e., 66% of the reviewed articles leveraged DL techniques, while 34% utilized ML. Key factors such as latency, energy consumption, task scheduling, and QoS are interconnected and critical for resource management optimization. The analysis reveals that latency, energy consumption, and QoS are the prime factors addressed in the literature on ML-based fog computing resource management. Latency is the most frequently addressed parameter, investigated in 77% of the articles, followed by energy consumption and task scheduling at 44% and 33%, respectively. Furthermore, according to our evaluation, an extensive range of challenges, i.e., computational resource and latency, scalability and management, data availability and quality, and model complexity and interpretability, are addressed by employing 73, 53, 45, and 46 ML/DL techniques, respectively.

RevDate: 2025-02-13

Ogwara NO, Petrova K, Yang MLB, et al (2025)

MINDPRES: A Hybrid Prototype System for Comprehensive Data Protection in the User Layer of the Mobile Cloud.

Sensors (Basel, Switzerland), 25(3): pii:s25030670.

Mobile cloud computing (MCC) is a technological paradigm for providing services to mobile device (MD) users. A compromised MD may cause harm to both its user and other MCC customers. This study explores the use of machine learning (ML) models and stochastic methods for the protection of Android MDs connected to the mobile cloud. To test the validity and feasibility of the proposed models and methods, the study adopted a proof-of-concept approach and developed a prototype system named MINDPRES. The static component of MINDPRES assesses the risk of the apps installed on the MD. It uses a device-based ML model for static feature analysis and a cloud-based stochastic risk evaluator. The device-based hybrid component of MINDPRES monitors app behavior in real time. It deploys two ML models and functions as an intrusion detection and prevention system (IDPS). The performance evaluation results of the prototype showed that the accuracy achieved by the methods for static and hybrid risk evaluation compared well with results reported in recent work. Power consumption data indicated that MINDPRES did not create an overload. This study contributes a feasible and scalable framework for building distributed systems for the protection of the data and devices of MCC customers.

RevDate: 2025-02-13

Cabrera VE, Bewley J, Breunig M, et al (2025)

Data Integration and Analytics in the Dairy Industry: Challenges and Pathways Forward.

Animals : an open access journal from MDPI, 15(3): pii:ani15030329.

The dairy industry faces significant challenges in data integration and analysis, which are critical for informed decision-making, operational optimization, and sustainability. Data integration-combining data from diverse sources, such as herd management systems, sensors, and diagnostics-remains difficult due to the lack of standardization, infrastructure barriers, and proprietary concerns. This commentary explores these issues based on insights from a multidisciplinary group of stakeholders, including industry experts, researchers, and practitioners. Key challenges discussed include the absence of a national animal identification system in the US, high IT resource costs, reluctance to share data due to competitive disadvantages, and differences in global data handling practices. Proposed pathways forward include developing comprehensive data integration guidelines, enhancing farmer awareness through training programs, and fostering collaboration across industry, academia, and technology providers. Additional recommendations involve improving data exchange standards, addressing interoperability issues, and leveraging advanced technologies, such as artificial intelligence and cloud computing. Emphasis is placed on localized data integration solutions for farm-level benefits and broader research applications to advance sustainability, traceability, and profitability within the dairy supply chain. These outcomes provide a foundation for achieving streamlined data systems, enabling actionable insights, and fostering innovation in the dairy industry.

RevDate: 2025-02-10

Bhat SN, Jindal GD, GD Nagare (2024)

Development and Validation of Cloud-based Heart Rate Variability Monitor.

Journal of medical physics, 49(4):654-660.

CONTEXT: This article introduces a new cloud-based point-of-care system to monitor heart rate variability (HRV).

AIMS: Medical investigations carried out at dispensaries or hospitals impose substantial physiological and psychological stress (the white coat effect), disrupting cardiovascular homeostasis; this can be addressed by a point-of-care cloud computing system that facilitates secure patient monitoring.

SETTINGS AND DESIGN: The device employs a MAX30102 sensor to collect the peripheral pulse signal using the photoplethysmography technique. The non-invasive design ensures patient compliance while delivering critical insights into Autonomic Nervous System activity. Preliminary validations indicate the system's potential to enhance clinical outcomes by supporting timely, data-driven therapeutic adjustments based on HRV metrics.

SUBJECTS AND METHODS: This article explores the system's development, functionality, and reliability. The designed system is validated against a peripheral pulse analyzer (PPA), a research product of the Electronics Division, Bhabha Atomic Research Centre.

STATISTICAL ANALYSIS USED: The output of the developed HRV monitor (HRVM) is compared with the output of the PPA using Pearson's correlation and the Mann-Whitney U-test. Peak positions and spectrum values are validated using Pearson's correlation, mean error, standard deviation (SD) of error, and range of error. HRV parameters such as total power, mean, peak amplitude, and power in the very low frequency, low frequency, and high frequency bands are validated using the Mann-Whitney U-test.

RESULTS: Pearson's correlation for spectrum values has been found to be more than 0.97 in all the subjects. Mean error, SD of error, and range of error are found to be in acceptable range.

CONCLUSIONS: Statistical results validate the new HRVM system against PPA for use in cloud computing and point-of-care testing.
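
A minimal sketch of the frequency-domain HRV metrics validated above (band powers from a resampled RR-interval series via Welch's method); the 4 Hz resampling rate and band edges follow common HRV convention rather than the paper's exact processing:

import numpy as np
from scipy.signal import welch

fs = 4.0                                        # evenly resampled RR series rate, Hz
rr = np.random.normal(0.8, 0.05, 1024)          # stand-in RR intervals, seconds

f, psd = welch(rr - rr.mean(), fs=fs, nperseg=256)
bands = {"VLF": (0.003, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.40)}
power = {name: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
         for name, (lo, hi) in bands.items()}
print(power, "LF/HF ratio:", power["LF"] / power["HF"])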

RevDate: 2025-02-08

He C, Zhao Z, Zhang X, et al (2025)

RotInv-PCT: Rotation-Invariant Point Cloud Transformer via feature separation and aggregation.

Neural networks : the official journal of the International Neural Network Society, 185:107223 pii:S0893-6080(25)00102-9 [Epub ahead of print].

The widespread use of point clouds has spurred the rapid development of neural networks for point cloud processing. A crucial property of these networks is maintaining consistent output results under random rotations of the input point cloud, namely, rotation invariance. The dominant approach to achieving rotation invariance is to construct local coordinate systems for computing invariant local point cloud coordinates. However, this method neglects the relative pose relationships between local point cloud structures, leading to a decline in network performance. To address this limitation, we propose a novel Rotation-Invariant Point Cloud Transformer (RotInv-PCT). This method extracts the local abstract shape features of the point cloud using Local Reference Frames (LRFs) and explicitly computes the spatial relative pose features between local point clouds, both of which are proven to be rotation-invariant. Furthermore, to capture the long-range pose dependencies between points, we introduce an innovative Feature Aggregation Transformer (FAT) model, which seamlessly fuses the pose features with the shape features to obtain a globally rotation-invariant representation. Moreover, to manage large-scale point clouds, we utilize hierarchical random downsampling to gradually decrease the scale of point clouds, followed by feature aggregation through FAT. To demonstrate the effectiveness of RotInv-PCT, we conducted comparative experiments across various tasks and datasets, including point cloud classification on ScanObjectNN and ModelNet40, part segmentation on ShapeNet, and semantic segmentation on S3DIS and KITTI. Thanks to our provable rotation-invariant features and FAT, our method generally outperforms state-of-the-art networks. In particular, we highlight that RotInv-PCT achieved a 2% improvement in real-world point cloud classification tasks compared to the strongest baseline. Furthermore, in the semantic segmentation task, we improved the performance on the S3DIS dataset by 10% and, for the first time, realized rotation-invariant point cloud semantic segmentation on the KITTI dataset.
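
A minimal sketch of the local-reference-frame idea underlying this family of methods: expressing a neighborhood in its own PCA-derived frame yields coordinates that are unchanged by global rotations (up to axis-sign ambiguity). This illustrates the principle only, not RotInv-PCT's actual LRF construction:

import numpy as np

def lrf_coordinates(points):
    # Express a neighborhood in its PCA-derived local frame; global rotations
    # of the input leave these coordinates unchanged up to axis-sign flips.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

pts = np.random.rand(64, 3)
Q, _ = np.linalg.qr(np.random.randn(3, 3))
Q *= np.sign(np.linalg.det(Q))                  # force a proper rotation (det = +1)

a = lrf_coordinates(pts)
b = lrf_coordinates(pts @ Q.T)
print(np.allclose(np.abs(a), np.abs(b)))        # True: invariant up to sign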

RevDate: 2025-02-07

Nantakeeratipat T, Apisaksirikul N, Boonrojsaree B, et al (2024)

Automated machine learning for image-based detection of dental plaque on permanent teeth.

Frontiers in dental medicine, 5:1507705.

INTRODUCTION: To detect dental plaque, manual assessment and plaque-disclosing dyes are commonly used. However, they are time-consuming and prone to human error. This study aims to investigate the feasibility of using Google Cloud's Vertex artificial intelligence (AI) automated machine learning (AutoML) to develop a model for detecting dental plaque levels on permanent teeth using undyed photographic images.

METHODS: Photographic images of both undyed and corresponding erythrosine solution-dyed upper anterior permanent teeth from 100 dental students were captured using a smartphone camera. All photos were cropped to individual tooth images. Dyed images were analyzed to classify plaque levels based on the percentage of dyed surface area: mild (<30%), moderate (30%-60%), and heavy (>60%) categories. These true labels were used as the ground truth for undyed images. Two AutoML models, a three-class model (mild, moderate, heavy plaque) and a two-class model (acceptable vs. unacceptable plaque), were developed using undyed images in Vertex AI environment. Both models were evaluated based on precision, recall, and F1-score.

RESULTS: The three-class model achieved an average precision of 0.907, with the highest precision (0.983) in the heavy plaque category. Misclassifications were more common in the mild and moderate categories. The two-class acceptable-unacceptable model demonstrated improved performance with an average precision of 0.964 and an F1-score of 0.931.

CONCLUSION: This study demonstrated the potential of Vertex AI AutoML for non-invasive detection of dental plaque. While the two-class model showed promise for clinical use, further studies with larger datasets are recommended to enhance model generalization and real-world applicability.
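
The ground-truth labeling in the METHODS section reduces to simple thresholds on the dyed surface area. A minimal sketch; the 30% cut-off for the two-class acceptable/unacceptable variant is an assumed value, since the abstract does not state it:

def plaque_level(dyed_area_pct):
    # Three-class ground truth from the percentage of dyed surface area.
    if dyed_area_pct < 30:
        return "mild"
    if dyed_area_pct <= 60:
        return "moderate"
    return "heavy"

def plaque_acceptability(dyed_area_pct, cutoff=30):
    # Two-class variant; the cut-off here is an illustrative assumption.
    return "acceptable" if dyed_area_pct < cutoff else "unacceptable"

print(plaque_level(42), plaque_acceptability(42))   # moderate unacceptable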

RevDate: 2025-02-05

Saadati S, Sepahvand A, M Razzazi (2025)

Cloud and IoT based smart agent-driven simulation of human gait for detecting muscles disorder.

Heliyon, 11(2):e42119.

Motion disorders affect a significant portion of the global population. While some symptoms can be managed with medications, these treatments often impact all muscles uniformly, not just the affected ones, leading to potential side effects including involuntary movements, confusion, and decreased short-term memory. Currently, there is no dedicated application for differentiating healthy muscles from abnormal ones. Existing analysis applications, designed for other purposes, often lack essential software engineering features such as a user-friendly interface, infrastructure independence, usability and learning ability, cloud computing capabilities, and AI-based assistance. This research proposes a computer-based methodology to analyze human motion and differentiate between healthy and unhealthy muscles. First, an IoT-based approach is proposed to digitize human motion using smartphones instead of hard-to-access wearable sensors and markers. The motion data is then simulated to analyze the neuromusculoskeletal system. An agent-driven modeling method ensures the naturalness, accuracy, and interpretability of the simulation, incorporating neuromuscular details such as Henneman's size principle, action potentials, motor units, and biomechanical principles. The results are then provided to medical and clinical experts to aid in differentiating between healthy and unhealthy muscles and for further investigation. Additionally, a deep learning-based ensemble framework is proposed to assist in the analysis of the simulation results, offering both accuracy and interpretability. A user-friendly graphical interface enhances the application's usability. Being fully cloud-based, the application is infrastructure-independent and can be accessed on smartphones, PCs, and other devices without installation. This strategy not only addresses the current challenges in treating motion disorders but also paves the way for other clinical simulations by considering both scientific and computational requirements.

RevDate: 2025-02-03

Papudeshi B, Roach MJ, Mallawaarachchi V, et al (2025)

Sphae: an automated toolkit for predicting phage therapy candidates from sequencing data.

Bioinformatics advances, 5(1):vbaf004.

MOTIVATION: Phage therapy offers a viable alternative for bacterial infections amid rising antimicrobial resistance. Its success relies on selecting safe and effective phage candidates that require comprehensive genomic screening to identify potential risks. However, this process is often labor intensive and time-consuming, hindering rapid clinical deployment.

RESULTS: We developed Sphae, an automated bioinformatics pipeline designed to streamline assessment of a phage's therapeutic potential in under 10 minutes. Using the Snakemake workflow manager, Sphae integrates tools for quality control, assembly, genome assessment, and annotation tailored specifically for phage biology. Sphae automates the detection of key genomic markers, including virulence factors, antimicrobial resistance genes, and lysogeny indicators such as integrase, recombinase, and transposase, which could preclude therapeutic use. Among the 65 phage sequences analyzed, 28 showed therapeutic potential, 8 failed due to low sequencing depth, 22 contained prophage or virulent markers, and 23 had multiple phage genomes. This workflow produces a report to assess phage safety and therapy suitability quickly. Sphae is scalable and portable, facilitating efficient deployment across most high-performance computing and cloud platforms, accelerating the genomic evaluation process.

Sphae source code is freely available at https://github.com/linsalrob/sphae, with installation supported via Conda, PyPI, and Docker containers.
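
Since Sphae is built on Snakemake, its structure can be pictured as chained rules. A minimal sketch in that spirit; the rule names, file layout, and the assemble_phage/annotate_phage commands are hypothetical placeholders, not Sphae's actual tools:

SAMPLES = ["phage1", "phage2"]

rule all:
    input:
        expand("annotation/{sample}.report.txt", sample=SAMPLES)

# Assemble reads into contigs (placeholder assembler CLI).
rule assemble:
    input:
        r1="reads/{sample}_R1.fastq.gz",
        r2="reads/{sample}_R2.fastq.gz",
    output:
        "assembly/{sample}.contigs.fasta",
    shell:
        "assemble_phage {input.r1} {input.r2} > {output}"

# Screen the assembly and write a therapy-suitability report (placeholder CLI).
rule annotate:
    input:
        "assembly/{sample}.contigs.fasta",
    output:
        "annotation/{sample}.report.txt",
    shell:
        "annotate_phage {input} > {output}"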

RevDate: 2025-02-03

Bensaid R, Labraoui N, Abba Ari AA, et al (2024)

SA-FLIDS: secure and authenticated federated learning-based intelligent network intrusion detection system for smart healthcare.

PeerJ. Computer science, 10:e2414.

Smart healthcare systems are gaining increased practicality and utility, driven by continuous advancements in artificial intelligence technologies, cloud and fog computing, and the Internet of Things (IoT). However, despite these transformative developments, challenges persist within IoT devices, encompassing computational constraints, storage limitations, and attack vulnerability. These attacks target sensitive health information, compromise data integrity, and pose obstacles to the overall resilience of the healthcare sector. To address these vulnerabilities, Network-based Intrusion Detection Systems (NIDSs) are crucial in fortifying smart healthcare networks and ensuring secure use of IoMT-based applications by mitigating security risks. Thus, this article proposes a novel Secure and Authenticated Federated Learning-based NIDS framework using Blockchain (SA-FLIDS) for fog-IoMT-enabled smart healthcare systems. Our research aims to improve data privacy and reduce communication costs. Furthermore, we also address weaknesses in decentralized learning systems, like Sybil and Model Poisoning attacks. We leverage the blockchain-based Self-Sovereign Identity (SSI) model to handle client authentication and secure communication. Additionally, we use the Trimmed Mean method to aggregate data. This helps reduce the effect of unusual or malicious inputs when creating the overall model. Our approach is evaluated on real IoT traffic datasets such as CICIoT2023 and EdgeIIoTset. It demonstrates exceptional robustness against adversarial attacks. These findings underscore the potential of our technique to improve the security of IoMT-based healthcare applications.
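
A minimal sketch of the Trimmed Mean aggregation mentioned above, which blunts Sybil and model-poisoning style outliers by discarding coordinate-wise extremes before averaging; the trim ratio and toy updates are illustrative:

import numpy as np

def trimmed_mean(updates, trim_ratio=0.1):
    # Coordinate-wise trimmed mean: sort each parameter across clients, drop
    # the extremes at both ends, and average the rest.
    stacked = np.stack(updates)                  # (n_clients, n_params)
    k = int(trim_ratio * stacked.shape[0])
    trimmed = np.sort(stacked, axis=0)[k : stacked.shape[0] - k]
    return trimmed.mean(axis=0)

clients = [np.random.normal(0, 1, 5) for _ in range(9)]
clients.append(np.full(5, 100.0))                # one poisoned update
print(trimmed_mean(clients))                     # extremes discarded per coordinate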

RevDate: 2025-02-03

Hoang TH, Fuhrman J, Klarqvist M, et al (2025)

Enabling end-to-end secure federated learning in biomedical research on heterogeneous computing environments with APPFLx.

Computational and structural biotechnology journal, 28:29-39.

Facilitating large-scale, cross-institutional collaboration in biomedical machine learning (ML) projects requires a trustworthy and resilient federated learning (FL) environment to ensure that sensitive information such as protected health information is kept confidential. Specifically designed for this purpose, this work introduces APPFLx - a low-code, easy-to-use FL framework that enables easy setup, configuration, and running of FL experiments. APPFLx removes administrative boundaries of research organizations and healthcare systems while providing secure end-to-end communication, privacy-preserving functionality, and identity management. Furthermore, it is completely agnostic to the underlying computational infrastructure of participating clients, allowing an instantaneous deployment of this framework into existing computing infrastructures. Experimentally, the utility of APPFLx is demonstrated in two case studies: (1) predicting participant age from electrocardiogram (ECG) waveforms, and (2) detecting COVID-19 disease from chest radiographs. Here, ML models were securely trained across heterogeneous computing resources, including a combination of on-premise high-performance computing and cloud computing facilities. By securely unlocking data from multiple sources for training without directly sharing it, these FL models enhance generalizability and performance compared to centralized training models while ensuring data remains protected. In conclusion, APPFLx demonstrated itself as an easy-to-use framework for accelerating biomedical studies across organizations and healthcare systems on large datasets while maintaining the protection of private medical data.

RevDate: 2025-02-03

Zheng X, Z Weng (2025)

Design of an enhanced feature point matching algorithm utilizing 3D laser scanning technology for sculpture design.

PeerJ. Computer science, 11:e2628.

As the aesthetic appreciation for art continues to grow, there is an increased demand for precision and detailed control in sculptural works. The advent of 3D laser scanning technology introduces transformative new tools and methodologies for refining correction systems in sculpture design. This article proposes a feature point matching algorithm based on fragment measurement and the iterative closest point (ICP) methodology, leveraging 3D laser scanning technology, namely Fragment Measurement Iterative Closest Point Feature Point Matching (FM-ICP-FPM). The FM-ICP-FPM approach uses the overlapping area of the two sculpture perspectives as a reference for attaching feature points. It employs the 3D measurement system to capture physical point cloud data from the two surfaces to enable the initial alignment of feature points. Feature vectors are generated by segmenting the region around the feature points and computing the intra-block gradient histogram. Subsequently, distance threshold conditions are set based on the constructed feature vectors and the preliminary feature point matches established during the coarse alignment to achieve precise feature point matching. Experimental results demonstrate the exceptional performance of the FM-ICP-FPM algorithm at a sampling interval of 200: the correct matching rate reaches 100%, while the mean translation error (MTE) is a mere 154 mm and the mean rotation angle error (MRAE) is 0.065 degrees. These indicators represent the degree of deviation in translation and rotation of the registered model, respectively. These low error values demonstrate that the FM-ICP-FPM algorithm excels in registration accuracy and can generate highly consistent three-dimensional models.
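
At the core of FM-ICP-FPM is the classic ICP loop: match points to nearest neighbors, then solve the optimal rigid transform in closed form. A minimal, generic ICP sketch on a toy point pair (not the authors' fragment-measurement variant):

import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    # Match each source point to its nearest destination point, then solve
    # the optimal rigid transform in closed form (Kabsch/SVD).
    matched = dst[cKDTree(dst).query(src)[1]]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:            # guard against reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

theta = 0.1                                      # small toy rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = np.random.rand(200, 3)
dst = src @ Rz.T + np.array([0.05, 0.0, 0.0])

aligned = src
for _ in range(20):                              # iterate matching + transform
    aligned = icp_step(aligned, dst)
print(np.abs(aligned - dst).max())               # near zero on this toy pair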

RevDate: 2025-02-03

Alrowais F, Arasi MA, Alotaibi SS, et al (2025)

Deep gradient reinforcement learning for music improvisation in cloud computing framework.

PeerJ. Computer science, 11:e2265.

Artificial intelligence (AI) in music improvisation offers promising new avenues for developing human creativity. The difficulty of writing dynamic, flexible musical compositions in real time is discussed in this article. We explore using reinforcement learning (RL) techniques to create more interactive and responsive music creation systems. Here, the musical structures train an RL agent to navigate the complex space of musical possibilities to provide improvisations. The melodic framework in the input musical data is initially identified using bi-directional gated recurrent units. Musical concepts such as notes, chords, and rhythms from the recognised framework are transformed into a format suitable for RL input. The deep gradient-based reinforcement learning technique used in this research formulates a reward system that directs the agent to compose aesthetically intriguing and harmonically cohesive musical improvisations. The improvised music is further rendered in the MIDI format. The Bach Chorales dataset with six different attributes relevant to musical compositions is employed in implementing the present research. The model was set up in a containerised cloud environment and controlled for smooth load distribution. Five different parameters, pitch frequency (PF), standard pitch delay (SPD), average distance between peaks (ADP), note duration gradient (NDG) and pitch class gradient (PCG), are leveraged to assess the quality of the improvised music. The proposed model obtains +0.15 for PF, -0.43 for SPD, -0.07 for ADP and 0.0041 for NDG, which are better values than those of other improvisation methods.

RevDate: 2025-01-31

Gadde RSK, Devaguptam S, Ren F, et al (2025)

Chatbot-assisted quantum chemistry for explicitly solvated molecules.

Chemical science [Epub ahead of print].

Advanced computational chemistry software packages have transformed chemical research by leveraging quantum chemistry and molecular simulations. Despite their capabilities, the complicated design and the requirement for specialized computing hardware hinder their applications in the broad chemistry community. Here, we introduce AutoSolvateWeb, a chatbot-assisted computational platform that addresses both challenges simultaneously. This platform employs a user-friendly chatbot interface to guide non-experts through a multistep procedure involving various computational packages, enabling them to configure and execute complex quantum mechanical/molecular mechanical (QM/MM) simulations of explicitly solvated molecules. Moreover, this platform operates on cloud infrastructure, allowing researchers to run simulations without hardware configuration challenges. As a proof of concept, AutoSolvateWeb demonstrates that combining virtual agents with cloud computing can democratize access to sophisticated computational research tools.

RevDate: 2025-01-28

Rateb R, Hadi AA, Tamanampudi VM, et al (2025)

An optimal workflow scheduling in IoT-fog-cloud system for minimizing time and energy.

Scientific reports, 15(1):3607.

Today, with the increasing use of the Internet of Things (IoT) around the world, a growing variety of workflows must be stored and processed on computing platforms. This increases costs for computing resource providers and drives up system Energy Consumption (EC). Therefore, this paper examines the workflow scheduling problem of IoT devices in the fog-cloud environment, with reducing the EC of the computing system and reducing the MakeSpan Time (MST) of workflows as the main objectives, under the constraints of priority, deadline and reliability. To achieve these objectives, a combination of the Aquila and Salp Swarm Algorithms (ASSA) is used to select the best Virtual Machines (VMs) for the execution of workflows. In each iteration of ASSA execution, a number of VMs are selected by the ASSA. Then, by using the Reducing MakeSpan Time (RMST) technique, the MST of the workflow on the selected VMs is reduced, while maintaining reliability and the deadline. Then, using VM merging and the Dynamic Voltage Frequency Scaling (DVFS) technique on the output from RMST, the static and dynamic EC are reduced, respectively. Experimental results show the effectiveness of the proposed method compared to previous methods.
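
The DVFS step relies on the standard dynamic-power model P ≈ C·V²·f: running slack tasks at lower voltage and frequency takes longer but consumes less dynamic energy. A minimal sketch with illustrative constants (not the paper's parameters):

def dynamic_energy(cycles, voltage, freq, capacitance=1e-9):
    # Dynamic power ~ C * V^2 * f; energy = power * execution time (cycles / f).
    power = capacitance * voltage**2 * freq
    return power * (cycles / freq)

task = 2e9                                              # cycles of work for one task
print(dynamic_energy(task, voltage=1.2, freq=2.0e9))    # ~2.88 J at full speed
print(dynamic_energy(task, voltage=0.9, freq=1.2e9))    # ~1.62 J scaled down: slower, cheaper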

RevDate: 2025-01-28

Bai Y, Zhao H, Shi X, et al (2025)

Towards practical and privacy-preserving CNN inference service for cloud-based medical imaging analysis: A homomorphic encryption-based approach.

Computer methods and programs in biomedicine, 261:108599 pii:S0169-2607(25)00016-1 [Epub ahead of print].

BACKGROUND AND OBJECTIVE: Cloud-based Deep Learning as a Service (DLaaS) has transformed biomedicine by enabling healthcare systems to harness the power of deep learning for biomedical data analysis. However, privacy concerns emerge when sensitive user data must be transmitted to untrusted cloud servers. Existing privacy-preserving solutions are hindered by significant latency issues, stemming from the computational complexity of inner product operations in convolutional layers and the high communication costs of evaluating nonlinear activation functions. These limitations make current solutions impractical for real-world applications.

METHODS: In this paper, we address the challenges in mobile cloud-based medical imaging analysis, where users aim to classify private body-related radiological images using a Convolutional Neural Network (CNN) model hosted on a cloud server while ensuring data privacy for both parties. We propose PPCNN, a practical and privacy-preserving framework for CNN Inference. It introduces a novel mixed protocol that combines a low-expansion homomorphic encryption scheme with the noise-based masking method. Our framework is designed based on three key ideas: (1) optimizing computation costs by shifting unnecessary and expensive homomorphic multiplication operations to the offline phase, (2) introducing a coefficient-aware packing method to enable efficient homomorphic operations during the linear layer of the CNN, and (3) employing data masking techniques for nonlinear operations of the CNN to reduce communication costs.

RESULTS: We implemented PPCNN and evaluated its performance on three real-world radiological image datasets. Experimental results show that PPCNN outperforms state-of-the-art methods in mobile cloud scenarios, achieving superior response times and lower usage costs.

CONCLUSIONS: This study introduces an efficient and privacy-preserving framework for cloud-based medical imaging analysis, marking a significant step towards practical, secure, and trustworthy AI-driven healthcare solutions.
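
A toy illustration of the noise-based masking idea used for the nonlinear layers (didactic only; PPCNN's actual protocol combines this with homomorphic encryption and secure interaction between the parties):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                  # private intermediate activations
r = rng.normal(size=8)                  # random mask known only to the masking party

masked = x + r                          # the other party sees only x + r, not x
recovered = masked - r                  # unmask ...
activated = np.maximum(recovered, 0.0)  # ... then apply the nonlinearity (ReLU)
print(np.allclose(activated, np.maximum(x, 0.0)))   # True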

RevDate: 2025-01-28
CmpDate: 2025-01-28

Oh S, S Lee (2025)

Rehabilomics Strategies Enabled by Cloud-Based Rehabilitation: Scoping Review.

Journal of medical Internet research, 27:e54790 pii:v27i1e54790.

BACKGROUND: Rehabilomics, or the integration of rehabilitation with genomics, proteomics, metabolomics, and other "-omics" fields, aims to promote personalized approaches to rehabilitation care. Cloud-based rehabilitation offers streamlined patient data management and sharing and could potentially play a significant role in advancing rehabilomics research. This study explored the current status and potential benefits of implementing rehabilomics strategies through cloud-based rehabilitation.

OBJECTIVE: This scoping review aimed to investigate the implementation of rehabilomics strategies through cloud-based rehabilitation and summarize the current state of knowledge within the research domain. This analysis aims to understand the impact of cloud platforms on the field of rehabilomics and provide insights into future research directions.

METHODS: In this scoping review, we systematically searched major academic databases, including CINAHL, Embase, Google Scholar, PubMed, MEDLINE, ScienceDirect, Scopus, and Web of Science to identify relevant studies and apply predefined inclusion criteria to select appropriate studies. Subsequently, we analyzed 28 selected papers to identify trends and insights regarding cloud-based rehabilitation and rehabilomics within this study's landscape.

RESULTS: This study reports the various applications and outcomes of implementing rehabilomics strategies through cloud-based rehabilitation. In particular, a comprehensive analysis was conducted on 28 studies, including 16 (57%) focused on personalized rehabilitation and 12 (43%) on data security and privacy. The distribution of articles among the 28 studies based on specific keywords included 3 (11%) on the cloud, 4 (14%) on platforms, 4 (14%) on hospitals and rehabilitation centers, 5 (18%) on telehealth, 5 (18%) on home and community, and 7 (25%) on disease and disability. Cloud platforms offer new possibilities for data sharing and collaboration in rehabilomics research, underpinning a patient-centered approach and enhancing the development of personalized therapeutic strategies.

CONCLUSIONS: This scoping review highlights the potential significance of cloud-based rehabilomics strategies in the field of rehabilitation. The use of cloud platforms is expected to strengthen patient-centered data management and collaboration, contributing to the advancement of innovative strategies and therapeutic developments in rehabilomics.

RevDate: 2025-01-27

Roth I, O Cohen (2025)

The use of an automatic remote weight management system to track treatment response, identified drugs supply shortage and its consequences: A pilot study.

Digital health, 11:20552076251314090.

OBJECTIVE: The objective of this pilot study is to evaluate the feasibility of using an automatic weight management system to follow patients' response to weight reduction medications and to identify early deviations from weight trajectories.

METHODS: The pilot study involved 11 participants using Semaglutide for weight management, monitored over a 12-month period. A cloud-based, Wi-Fi-enabled remote weight management system collected and analyzed daily weight data from smart scales. The system's performance was evaluated during a period marked by a Semaglutide supply shortage.

RESULTS: Participants achieved a cumulative weight loss of 85 kg until a supply shortage-induced trough in October 2022. This was followed by a 6-8 week plateau and a subsequent 13 kg cumulative weight gain. The study demonstrated the feasibility of digitally monitoring weight without attrition over 12 months and highlighted the impact of anti-obesity drug (AOD) supply constraints on weight trajectories.

CONCLUSIONS: The remote weight management system proved important for improving clinic efficacy and identifying trends impacting obesity outcomes through electronic data monitoring. The system's potential in increasing medication compliance and enhancing overall clinical outcomes warrants further research, particularly in light of the challenges posed by AOD supply fluctuations.

RevDate: 2025-01-26

Fang C, Song K, Yan Z, et al (2025)

Monitoring phycocyanin in global inland waters by remote sensing: Progress and future developments.

Water research, 275:123176 pii:S0043-1354(25)00090-9 [Epub ahead of print].

Cyanobacterial blooms are increasingly becoming major threats to global inland aquatic ecosystems. Phycocyanin (PC), a pigment unique to cyanobacteria, can provide an important reference for the study of cyanobacterial bloom warning. New satellite technology and cloud computing platforms have greatly improved research on PC, with the average number of studies examining it having increased from 5 per year before 2018 to 17 per year thereafter. Many empirical, semi-empirical, semi-analytical, quasi-analytical algorithm (QAA) and machine learning (ML) algorithms have been developed based on unique absorption characteristics of PC at approximately 620 nm. However, most models have been developed for individual lakes or clusters of them in specific regions, and their applicability at greater spatial scales requires evaluation. A review of optical mechanisms, principles and advantages and disadvantages of different model types, performance advantages and disadvantages of mainstream sensors in PC remote sensing inversion, and an evaluation of global lacustrine PC datasets is needed. We examine 230 articles from the Web of Science citation database between 1900 and 2024, summarize 57 of them that deal with construction of PC inversion models, and compile a list of 6526 PC sampling sites worldwide. This review proposes that the key to achieving global lacustrine PC remote sensing inversion and spatiotemporal evolution analysis is to make full use of existing multi-source remote sensing big data platforms and a deep combination of ML and optical mechanisms, classifying the object lakes in advance based on lake optical characteristics, eutrophication level, water depth, climate type, altitude, and population density within the watershed. Additionally, integrating data from multi-source satellite sensors, ground-based observations, and unmanned aerial vehicles will enable future development of global lacustrine PC remote estimation and contribute to achieving the United Nations Sustainable Development Goals for inland waters.
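
A minimal sketch of the semi-empirical band-ratio family of PC algorithms built on the ~620 nm absorption feature (for example, pairing a reference band near 709 nm with 620 nm); the coefficients are hypothetical and would in practice be fit to in-situ PC measurements for a given lake set:

import numpy as np

def pc_band_ratio(r709, r620, a=30.0, b=-10.0):
    # Semi-empirical retrieval from a 709/620 nm reflectance ratio; a and b
    # are illustrative regression coefficients, not from any published model.
    return a * (r709 / r620) + b        # PC concentration, illustrative mg m^-3

r620 = np.array([0.020, 0.015, 0.030])  # stand-in remote sensing reflectances
r709 = np.array([0.025, 0.014, 0.045])
print(pc_band_ratio(r709, r620))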

RevDate: 2025-01-25

Mennilli R, Mazza L, A Mura (2025)

Integrating Machine Learning for Predictive Maintenance on Resource-Constrained PLCs: A Feasibility Study.

Sensors (Basel, Switzerland), 25(2): pii:s25020537.

This study investigates the potential of deploying a neural network model on an advanced programmable logic controller (PLC), specifically the Finder Opta™, for real-time inference within the predictive maintenance framework. In the context of Industry 4.0, edge computing aims to process data directly on local devices rather than relying on a cloud infrastructure. This approach minimizes latency, enhances data security, and reduces the bandwidth required for data transmission, making it ideal for industrial applications that demand immediate response times. Despite the limited memory and processing power inherent to many edge devices, this proof-of-concept demonstrates the suitability of the Finder Opta™ for such applications. Using acoustic data, a convolutional neural network (CNN) is deployed to infer the rotational speed of a mechanical test bench. The findings underscore the potential of the Finder Opta™ to support scalable and efficient predictive maintenance solutions, laying the groundwork for future research in real-time anomaly detection. By enabling machine learning capabilities on compact, resource-constrained hardware, this approach promises a cost-effective, adaptable solution for diverse industrial environments.
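
The abstract does not disclose the network architecture, so the following Keras sketch is only a plausible example of a small 1D CNN mapping fixed-length acoustic windows to a rotational-speed estimate; the 16 kHz input format and all layer sizes are assumptions.

```python
from tensorflow.keras import layers, models

# Hypothetical input: 1 s of audio sampled at 16 kHz, one channel.
model = models.Sequential([
    layers.Input(shape=(16000, 1)),
    layers.Conv1D(8, kernel_size=64, strides=4, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(16, kernel_size=16, strides=2, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # regression head: rotational speed in RPM
])
model.compile(optimizer="adam", loss="mse")
model.summary()  # parameter count matters on a resource-constrained PLC
```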

RevDate: 2025-01-25

Gu X, Duan Z, Ye G, et al (2025)

Virtual Node-Driven Cloud-Edge Collaborative Resource Scheduling for Surveillance with Visual Sensors.

Sensors (Basel, Switzerland), 25(2): pii:s25020535.

For public security purposes, distributed surveillance systems are widely deployed in key areas. These systems comprise visual sensors, edge computing boxes, and cloud servers. Resource scheduling algorithms are critical to ensure such systems' robustness and efficiency. They balance workloads and need to meet real-time monitoring and emergency response requirements. Existing works have primarily focused on optimizing Quality of Service (QoS), latency, and energy consumption in edge computing under resource constraints. However, the issue of task congestion due to insufficient physical resources has rarely been investigated. In this paper, we tackle the challenges posed by large workloads and limited resources in the context of surveillance with visual sensors. First, we introduce the concept of virtual nodes for managing resource shortages, referred to as virtual node-driven resource scheduling. Then, we propose a convex-objective integer linear programming (ILP) model based on this concept and demonstrate its efficiency. Additionally, we propose three alternative virtual node-driven scheduling algorithms, based respectively on a random algorithm, a genetic algorithm, and a heuristic algorithm. These algorithms serve as benchmarks for comparison with the proposed ILP model. Experimental results show that all the scheduling algorithms can effectively address the challenge of offloading multiple priority tasks under resource constraints. Furthermore, the ILP model shows the best scheduling performance among them.
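
A minimal sketch of the virtual node idea, under assumed toy data: tasks are assigned to physical edge nodes by an ILP, with an artificial high-capacity "virtual" node absorbing overflow at a penalty cost. This uses the PuLP library and is not the authors' exact formulation.

```python
import pulp

tasks = {"t1": 3, "t2": 2, "t3": 4, "t4": 5}      # resource demand per task
nodes = {"edge1": 6, "edge2": 5, "virtual": 100}  # virtual node absorbs overflow
cost  = {"edge1": 1, "edge2": 1, "virtual": 10}   # placements on virtual are penalised

prob = pulp.LpProblem("virtual_node_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, nodes), cat="Binary")

# Objective: minimise total placement cost.
prob += pulp.lpSum(cost[n] * x[t][n] for t in tasks for n in nodes)

# Each task is placed exactly once.
for t in tasks:
    prob += pulp.lpSum(x[t][n] for n in nodes) == 1

# Node capacities must not be exceeded.
for n in nodes:
    prob += pulp.lpSum(tasks[t] * x[t][n] for t in tasks) <= nodes[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    for n in nodes:
        if x[t][n].value() == 1:
            print(t, "->", n)
```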

RevDate: 2025-01-24
CmpDate: 2025-01-24

Alsahfi T, Badshah A, Aboulola OI, et al (2025)

Optimizing healthcare big data performance through regional computing.

Scientific reports, 15(1):3129.

The healthcare sector is experiencing a digital transformation propelled by the Internet of Medical Things (IoMT), real-time patient monitoring, robotic surgery, Electronic Health Records (EHR), medical imaging, and wearable technologies. This proliferation of digital tools generates vast quantities of healthcare data. Efficient and timely analysis of this data is critical for enhancing patient outcomes and optimizing care delivery. Real-time processing of Healthcare Big Data (HBD) offers significant potential for improved diagnostics, continuous monitoring, and effective surgical interventions. However, conventional cloud-based processing systems face challenges due to the sheer volume and time-sensitive nature of this data. The migration of large datasets to centralized cloud infrastructures often results in latency, which impedes real-time applications. Furthermore, network congestion exacerbates these challenges, delaying access to vital insights necessary for informed decision-making. Such limitations hinder healthcare professionals from fully leveraging the capabilities of emerging technologies and big data analytics. To mitigate these issues, this paper proposes a Regional Computing (RC) paradigm for the management of HBD. The RC framework establishes strategically positioned regional servers capable of regionally collecting, processing, and storing medical data, thereby reducing dependence on centralized cloud resources, especially during peak usage periods. This innovative approach effectively addresses the constraints of traditional cloud processing, facilitating real-time data analysis at the regional level. Ultimately, it empowers healthcare providers with the timely information required to deliver data-driven, personalized care and optimize treatment strategies.

RevDate: 2025-01-24

Tang Y, Guo M, Li B, et al (2024)

Flexible Threshold Quantum Homomorphic Encryption on Quantum Networks.

Entropy (Basel, Switzerland), 27(1): pii:e27010007.

Currently, most quantum homomorphic encryption (QHE) schemes only allow a single evaluator (server) to accomplish computation tasks on encrypted data shared by the data owner (user). In addition, the quantum computing capability of the evaluator and the scope of quantum computation it can perform are usually somewhat limited, which significantly reduces the flexibility of the scheme in quantum network environments. In this paper, we propose a novel (t,n)-threshold QHE (TQHE) network scheme based on the Shamir secret sharing protocol, which allows k(t≤k≤n) evaluators to collaboratively perform evaluation computation operations on each qubit within the shared encrypted sequence. Moreover, each evaluator, while possessing the ability to perform all single-qubit unitary operations, is able to perform any single-qubit gate computation task assigned by the data owner. We give a specific (3, 5)-threshold example, illustrating the scheme's correctness and feasibility, and simulate it on the IBM quantum computing cloud platform. Finally, it is shown that the scheme is secure by analyzing the encryption/decryption private keys, the ciphertext quantum state sequences during transmission, the plaintext quantum state sequence, and the result after computations on the plaintext quantum state sequence.
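
The quantum operations cannot be reproduced classically here, but the classical backbone of the scheme, Shamir (t, n)-threshold secret sharing, can. The sketch below is a standard implementation over a prime field, not the authors' code.

```python
import random  # demo only; use the `secrets` module for real keys

P = 2**61 - 1  # prime field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares -> 123456789
```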

RevDate: 2025-01-24

Kwon K, Lee YJ, Chung S, et al (2025)

Full Body-Worn Textile-Integrated Nanomaterials and Soft Electronics for Real-Time Continuous Motion Recognition Using Cloud Computing.

ACS applied materials & interfaces [Epub ahead of print].

Recognizing human body motions opens possibilities for real-time observation of users' daily activities, revolutionizing continuous human healthcare and rehabilitation. While some wearable sensors have shown their capability to detect movements, no prior work could detect full-body motions with wireless devices. Here, we introduce a soft electronic textile-integrated system, including nanomaterials and flexible sensors, which enables real-time detection of various full-body movements using the combination of a wireless sensor suit and deep-learning-based cloud computing. This system includes an array of nanomembrane, laser-induced graphene strain sensors and flexible electronics integrated with textiles for wireless detection of different body motions and workouts. With multiple human subjects, we demonstrate the system's performance in real-time prediction of eight different activities (resting, walking, running, squatting, walking upstairs, walking downstairs, push-ups, and jump roping) with an accuracy of 95.3%. This class of technologies, integrated as full-body-worn textile electronics and paired interactively with smartwatches and portable devices, can be used in real-world applications such as ambulatory health monitoring and feedback-enabled customized rehabilitation workouts.

RevDate: 2025-01-23

Novais JJM, Melo BMD, Neves Junior AF, et al (2025)

Online analysis of Amazon's soils through reflectance spectroscopy and cloud computing can support policies and the sustainable development.

Journal of environmental management, 375:124155 pii:S0301-4797(25)00131-8 [Epub ahead of print].

Analyzing soil in large and remote areas such as the Amazon River Basin (ARB) is unviable when it is performed entirely by wet labs using traditional methods, owing to the scarcity of labs and the significant workforce requirements, which increase costs, time, and waste. Remote sensing, combined with cloud computing, enhances soil analysis by modeling soil from spectral data and overcoming the limitations of traditional methods. We verified the potential of soil spectroscopy in conjunction with cloud-based computing to predict soil organic carbon (SOC) and particle size (sand, silt, and clay) content in the Amazon region. To this end, we obtained physicochemical attribute values determined by wet laboratory analyses of 211 soil samples from the ARB. These samples were measured by Vis-NIR-SWIR spectroscopy in the laboratory. Two approaches were used to model the soil attributes: M-I, cloud-computing-based, using the Brazilian Soil Spectral Service (BraSpecS) platform; and M-II, computing-based in an offline environment using the R programming language. Both methods used the Cubist machine learning algorithm for modeling. The coefficient of determination (R²), mean absolute error (MAE), and root mean squared error (RMSE) served as performance criteria. The soil attribute predictions were highly consistent between the measured values and those predicted by both approaches. M-II outperformed M-I in predicting both particle size and SOC. For clay content, the offline model achieved an R² of 0.85, with an MAE of 86.16 g kg⁻¹ and an RMSE of 111.73 g kg⁻¹, while the online model had an R² of 0.70, an MAE of 111.73 g kg⁻¹, and an RMSE of 144.19 g kg⁻¹. For SOC, the offline model also showed better performance, with an R² of 0.81, an MAE of 3.42 g kg⁻¹, and an RMSE of 4.57 g kg⁻¹, compared with an R² of 0.72, an MAE of 3.66 g kg⁻¹, and an RMSE of 5.53 g kg⁻¹ for M-I. Both modeling methods demonstrated the power of reflectance spectroscopy and cloud computing for surveying soils in remote and large areas such as the ARB. The synergistic use of these techniques can support policies and sustainable development.
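
Cubist itself is R-centric, but the reported evaluation criteria are straightforward to reproduce; a minimal sketch with hypothetical measured/predicted clay values:

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# Hypothetical lab-measured vs. model-predicted clay content (g/kg)
measured  = np.array([310.0, 455.0, 120.0, 600.0, 280.0])
predicted = np.array([290.0, 430.0, 160.0, 555.0, 300.0])

r2   = r2_score(measured, predicted)
mae  = mean_absolute_error(measured, predicted)
rmse = np.sqrt(mean_squared_error(measured, predicted))
print(f"R2={r2:.2f}  MAE={mae:.1f} g/kg  RMSE={rmse:.1f} g/kg")
```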

RevDate: 2025-01-23
CmpDate: 2025-01-23

Seth M, Jalo H, Högstedt Å, et al (2025)

Technologies for Interoperable Internet of Medical Things Platforms to Manage Medical Emergencies in Home and Prehospital Care: Scoping Review.

Journal of medical Internet research, 27:e54470 pii:v27i1e54470.

BACKGROUND: The aging global population and the rising prevalence of chronic disease and multimorbidity have strained health care systems, driving the need for expanded health care resources. Transitioning to home-based care (HBC) may offer a sustainable solution, supported by technological innovations such as Internet of Medical Things (IoMT) platforms. However, the full potential of IoMT platforms to streamline health care delivery is often limited by interoperability challenges that hinder communication and pose risks to patient safety. Gaining more knowledge about addressing higher levels of interoperability issues is essential to unlock the full potential of IoMT platforms.

OBJECTIVE: This scoping review aims to summarize best practices and technologies to overcome interoperability issues in IoMT platform development for prehospital care and HBC.

METHODS: This review adheres to a protocol published in 2022. Our literature search followed a dual search strategy and was conducted up to August 2023 across 6 electronic databases: IEEE Xplore, PubMed, Scopus, ACM Digital Library, Sage Journals, and ScienceDirect. After the title, abstract, and full-text screening performed by 2 reviewers, 158 articles were selected for inclusion. To answer our 2 research questions, we used 2 models defined in the protocol: a 6-level interoperability model and a 5-level IoMT reference model. Data extraction and synthesis were conducted through thematic analysis using Dedoose. The findings, including commonly used technologies and standards, are presented through narrative descriptions and graphical representations.

RESULTS: The primary technologies and standards reported for interoperable IoMT platforms in prehospital care and HBC included cloud computing (19/30, 63%), representational state transfer application programming interfaces (REST APIs; 17/30, 57%), Wi-Fi (17/30, 57%), gateways (15/30, 50%), and JSON (14/30, 47%). Message queuing telemetry transport (MQTT; 7/30, 23%) and WebSocket (7/30, 23%) were commonly used for real-time emergency alerts, while fog and edge computing were often combined with cloud computing for enhanced processing power and reduced latencies. By contrast, technologies associated with higher interoperability levels, such as blockchain (2/30, 7%), Kubernetes (3/30, 10%), and openEHR (2/30, 7%), were less frequently reported, indicating a focus on lower level of interoperability in most of the included studies (17/30, 57%).

CONCLUSIONS: IoMT platforms that support higher levels of interoperability have the potential to deliver personalized patient care, enhance overall patient experience, enable early disease detection, and minimize time delays. However, our findings highlight a prevailing emphasis on lower levels of interoperability within the IoMT research community. While blockchain, microservices, Docker, and openEHR are described as suitable solutions in the literature, these technologies seem to be seldom used in IoMT platforms for prehospital care and HBC. Recognizing the evident benefit of cross-domain interoperability, we advocate a stronger focus on collaborative initiatives and technologies to achieve higher levels of interoperability.

RR2-10.2196/40243.
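
To make the dominant technology stack found by this review concrete, here is a minimal, hypothetical example of the REST-plus-JSON pattern: a sensor gateway posting an emergency alert. The endpoint URL and payload schema are illustrative assumptions only.

```python
import requests  # third-party: pip install requests

# Hypothetical gateway endpoint; URL and payload schema are illustrative.
GATEWAY_URL = "https://gateway.example.org/api/v1/alerts"

alert = {
    "patient_id": "P-0042",
    "sensor": "spo2",
    "value": 84,              # % oxygen saturation
    "severity": "critical",
    "timestamp": "2025-01-23T10:15:00Z",
}

# requests serialises the dict to JSON and sets the Content-Type header.
resp = requests.post(GATEWAY_URL, json=alert, timeout=5)
resp.raise_for_status()
print("alert accepted:", resp.status_code)
```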

RevDate: 2025-01-20

Ali A, Hussain B, Hissan RU, et al (2025)

Examining the landscape transformation and temperature dynamics in Pakistan.

Scientific reports, 15(1):2575.

This study examines landscape transformation and temperature dynamics using multiple spectral indices. Temporal fluctuations in land surface temperature are strongly related to the morphological features of the area in which the temperature is measured, and these features significantly affect the thermal properties of the surface. The research was conducted in Pakistan to identify vegetation cover, water bodies, impervious surfaces, and land surface temperature using decadal remote sensing data at four intervals during 1993-2023 in the Mardan division, Khyber Pakhtunkhwa. To analyze landscape transformation and temperature dynamics, the study used spectral indices including Land Surface Temperature, the Normalized Difference Vegetation Index, the Normalized Difference Water Index, the Normalized Difference Built-up Index, and the Normalized Difference Bareness Index, computed on the Google Earth Engine cloud computing platform. The results show that land surface temperature ranged from 15.58 °C to 43.71 °C during the study period. The largest fluctuations in land surface temperature were found in the cover and protective forests of the study area, especially in its northwestern and southeastern parts. These results highlight the complexity of the relationship between land surface temperature and the spectral indices, underscoring the need for multi-index approaches.
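
A minimal Google Earth Engine (Python API) sketch of the kind of index computation described, using Landsat 8 Collection 2 Level 2; the region, dates, and cloud threshold are illustrative assumptions, and the study's actual sensors and parameters may differ.

```python
import ee
ee.Initialize()

region = ee.Geometry.Point(72.05, 34.2).buffer(20000)  # approx. Mardan area

image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(region)
         .filterDate("2023-01-01", "2023-12-31")
         .filter(ee.Filter.lt("CLOUD_COVER", 10))
         .median())

ndvi = image.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
ndwi = image.normalizedDifference(["SR_B3", "SR_B5"]).rename("NDWI")
ndbi = image.normalizedDifference(["SR_B6", "SR_B5"]).rename("NDBI")

# Collection 2 Level 2 surface temperature, rescaled to degrees Celsius.
lst = (image.select("ST_B10").multiply(0.00341802).add(149.0)
       .subtract(273.15).rename("LST"))

stats = lst.reduceRegion(ee.Reducer.minMax(), region, 30)
print(stats.getInfo())
```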

RevDate: 2025-01-18

Soman VK, V Natarajan (2025)

Crayfish optimization based pixel selection using block scrambling based encryption for secure cloud computing environment.

Scientific reports, 15(1):2406.

Cloud Computing (CC) is a fast emerging field that enables consumers to access network resources on demand. However, ensuring a high level of security in CC environments remains a significant challenge. Traditional encryption algorithms are often inadequate for protecting confidential data, especially digital images, from complex cyberattacks. The increasing reliance on cloud storage and transmission of digital images has made it essential to develop strong security measures to prevent unauthorized access and guarantee the integrity of sensitive information. This paper presents a novel Crayfish Optimization based Pixel Selection using Block Scrambling Based Encryption Approach (CFOPS-BSBEA) technique that offers a unique solution to improve security in cloud environments. By integrating steganography and encryption, the CFOPS-BSBEA technique provides a robust approach to securing digital images. Our key contribution lies in the development of a three-stage process that optimally selects pixels for steganography, encodes secret images using Block Scrambling Based Encryption, and embeds them in cover images. The CFOPS-BSBEA technique leverages the strengths of both steganography and encryption to provide a secure and effective approach to digital image protection. The Crayfish Optimization algorithm is used to select the most suitable pixels for steganography, ensuring that the secret image is embedded in a way that minimizes detection. The Block Scrambling Based Encryption algorithm is then used to encode the secret image, providing an additional layer of security. Experimental results show that the CFOPS-BSBEA technique outperforms existing models in terms of security performance. The proposed approach has significant implications for the secure storage and transmission of digital images in cloud environments, and its originality and novelty make it an attractive contribution to the field. Furthermore, the CFOPS-BSBEA technique has the potential to inspire further research in secure cloud computing environments, paving the way for the development of more robust and efficient security measures.
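
The CFOPS-BSBEA internals are not public, but the block-scrambling idea can be illustrated: the sketch below permutes image tiles with a key-seeded pseudo-random permutation, a simplified stand-in for the paper's cipher, not its actual algorithm.

```python
import numpy as np

def _perm(key, n):
    return np.random.default_rng(key).permutation(n)

def scramble(img, key, block=8):
    """Keyed block scrambling: permute block x block tiles of a grayscale
    image with a key-seeded pseudo-random permutation (invertible)."""
    h, w = img.shape
    cols = w // block
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, h, block) for c in range(0, w, block)]
    out = np.empty_like(img)
    for dst, src in enumerate(_perm(key, len(tiles))):
        r, c = divmod(dst, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[src]
    return out

def unscramble(img, key, block=8):
    """Invert scramble() by applying the inverse permutation."""
    h, w = img.shape
    cols = w // block
    inv = np.argsort(_perm(key, (h // block) * (w // block)))
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, h, block) for c in range(0, w, block)]
    out = np.empty_like(img)
    for dst, src in enumerate(inv):
        r, c = divmod(dst, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[src]
    return out

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
assert np.array_equal(unscramble(scramble(img, key=42), key=42), img)
```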

RevDate: 2025-01-17

Kari Balakrishnan A, Chellaperumal A, Lakshmanan S, et al (2025)

A novel efficient data storage and data auditing in cloud environment using enhanced child drawing development optimization strategy.

Network (Bristol, England) [Epub ahead of print].

Optimization of the cloud-based data structures is carried out using the Adaptive Level and Skill Rate-based Child Drawing Development Optimization algorithm (ALSR-CDDO). The overall computing and communication cost is reduced by optimally selecting these data structures with the ALSR-CDDO algorithm. Data are stored in the cloud platform using the Divide and Conquer Table (D&CT). The location table and the information table are generated using the D&CT method. Details such as the file information, file ID, version number, and user ID are all present in the information table. Every time data is deleted or updated, its version number is modified. Whenever an update takes place using D&CT, the location table is also updated. Information about the location of a file in the Cloud Service Provider (CSP) is given in the location table. Once the data is stored in the CSP, auditing is performed on the stored data. Both dynamic and batch auditing are carried out on the stored data, even if it is updated dynamically in the CSP. The security offered by the proposed scheme is verified by contrasting it with other existing auditing schemes.
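
A toy sketch of the two D&CT tables described above, with a version bump on every update; the field names are illustrative assumptions, not the paper's schema.

```python
# Minimal sketch of Divide-and-Conquer Table (D&CT) bookkeeping.
information_table = {}  # file_id -> metadata (version bumped on each update)
location_table = {}     # file_id -> location inside the Cloud Service Provider

def store(file_id, user_id, file_info, csp_location):
    information_table[file_id] = {
        "file_info": file_info,
        "user_id": user_id,
        "version": 1,
    }
    location_table[file_id] = csp_location

def update(file_id, new_info, new_location=None):
    entry = information_table[file_id]
    entry["file_info"] = new_info
    entry["version"] += 1                       # every update bumps the version
    if new_location is not None:
        location_table[file_id] = new_location  # location table updated too

store("f1", "u7", "report.pdf v1", "csp://bucket-3/block-17")
update("f1", "report.pdf v2", "csp://bucket-3/block-42")
print(information_table["f1"]["version"], location_table["f1"])
```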

RevDate: 2025-01-15

Yan K, Yu X, Liu J, et al (2025)

HiQ-FPAR: A High-Quality and Value-added MODIS Global FPAR Product from 2000 to 2023.

Scientific data, 12(1):72.

The Fraction of Absorbed Photosynthetically Active Radiation (FPAR) is essential for assessing vegetation's photosynthetic efficiency and ecosystem energy balance. While the MODIS FPAR product provides valuable global data, its reliability is compromised by noise, particularly under poor observation conditions like cloud cover. To solve this problem, we developed the Spatio-Temporal Information Composition Algorithm (STICA), which enhances MODIS FPAR by integrating quality control, spatio-temporal correlations, and original FPAR values, resulting in the High-Quality FPAR (HiQ-FPAR) product. HiQ-FPAR shows superior accuracy compared to MODIS FPAR and Sensor-Independent FPAR (SI-FPAR), with RMSE values of 0.130, 0.154, and 0.146, respectively, and R² values of 0.722, 0.630, and 0.717. Additionally, HiQ-FPAR exhibits smoother time series in 52.1% of global areas, compared to 44.2% for MODIS. Available on Google Earth Engine and Zenodo, the HiQ-FPAR dataset offers 500 m and 5 km resolution at an 8-day interval from 2000 to 2023, supporting a wide range of FPAR applications.

RevDate: 2025-01-13

Rushton CE, Tate JE, Å Sjödin (2025)

A modern, flexible cloud-based database and computing service for real-time analysis of vehicle emissions data.

Urban informatics, 4(1):1.

In response to the demand for advanced tools in environmental monitoring and policy formulation, this work leverages modern software and big data technologies to enhance novel road transport emissions research. This is achieved by making data and analysis tools more widely available and customisable so users can tailor outputs to their requirements. Through the novel combination of vehicle emissions remote sensing and cloud computing methodologies, these developments aim to reduce the barriers to understanding real-driving emissions (RDE) across urban environments. The platform demonstrates the practical application of modern cloud-computing resources in overcoming the complex demands of air quality management and policy monitoring. This paper shows the potential of modern technological solutions to improve the accessibility of environmental data for policy-making and the broader pursuit of sustainable urban development. The web-application is publicly and freely available at https://cares-public-app.azurewebsites.net.

RevDate: 2025-01-11

Ahmed AA, Farhan K, Ninggal MIH, et al (2024)

Retrieving and Identifying Remnants of Artefacts on Local Devices Using Sync.com Cloud.

Sensors (Basel, Switzerland), 25(1): pii:s25010106.

Most current research in cloud forensics is focused on tackling the challenges encountered by forensic investigators in identifying and recovering artefacts from cloud devices. These challenges arise from the diverse array of cloud service providers, as each has its own distinct rules, guidelines, and requirements. This research proposes an investigation technique for identifying and locating data remnants in two main stages: artefact collection and evidence identification. In the artefact collection stage, the proposed technique determines the location of the artefacts in cloud storage and collects them for further investigation in the next stage. In the evidence identification stage, the collected artefacts are investigated to identify the evidence relevant to the cybercrime under investigation. These two stages form an integrated process that mitigates the difficulty of locating the artefacts and reduces the time needed to identify the relevant evidence. The proposed technique is implemented and tested by applying a forensic investigation algorithm on Sync.com cloud storage using the Microsoft Windows 10 operating system.

RevDate: 2025-01-10

Hoyer I, Utz A, Hoog Antink C, et al (2025)

tinyHLS: a novel open source high level synthesis tool targeting hardware accelerators for artificial neural network inference.

Physiological measurement [Epub ahead of print].

OBJECTIVE: In recent years, wearable devices such as smartwatches and smart patches have revolutionized biosignal acquisition and analysis, particularly for monitoring electrocardiography (ECG). However, the limited power supply of these devices often precludes real-time data analysis on the patch itself.

APPROACH: This paper introduces a novel Python package, tinyHLS (High Level Synthesis), designed to address these challenges by converting Python-based AI models into platform-independent hardware description language (HDL) code accelerators. Specifically designed for convolutional neural networks (CNNs), tinyHLS integrates seamlessly into the AI developer's workflow in Python TensorFlow Keras. Our methodology leverages a template-based hardware compiler that ensures flexibility, efficiency, and ease of use. In this work, tinyHLS is published for the first time, featuring templates for several neural network layers, such as dense, convolution, max pooling, and global average pooling. In the first version, the rectified linear unit (ReLU) is supported as the activation function. The tool targets one-dimensional data, with a particular focus on time series.

MAIN RESULTS: The generated accelerators are validated in detecting atrial fibrillation (AF) on electrocardiogram (ECG) data, demonstrating significant improvements in processing speed (62-fold) and energy efficiency (4.5-fold). Quality of code and synthesizability are ensured by validating the outputs with commercial ASIC design tools.

SIGNIFICANCE: Importantly, tinyHLS is open-source and does not rely on commercial tools, making it a versatile solution for both academic and commercial applications. The paper also discusses the integration with an open-source RISC-V processor and the potential for future enhancements of tinyHLS, including its application in edge servers and cloud computing. The source code is available on GitHub: https://github.com/Fraunhofer-IMS/tinyHLS.
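
tinyHLS's own API is documented in the linked repository and is not reproduced here; the sketch below only shows a Keras model restricted to the layer set the abstract says version 1 supports (convolution, max pooling, global average pooling, dense, ReLU) on one-dimensional input, as a plausible candidate for conversion. The window length and layer sizes are assumptions.

```python
from tensorflow.keras import layers, models

# A 1-D CNN limited to the layer types reportedly supported by tinyHLS v1.
model = models.Sequential([
    layers.Input(shape=(512, 1)),            # e.g. a short ECG window
    layers.Conv1D(8, 7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(16, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(2, activation="softmax"),   # e.g. AF vs. normal rhythm
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```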

RevDate: 2025-01-10

Scales C, Bai J, Murakami D, et al (2025)

Internal validation of a convolutional neural network pipeline for assessing meibomian gland structure from meibography.

Optometry and vision science : official publication of the American Academy of Optometry pii:00006324-990000000-00246 [Epub ahead of print].

SIGNIFICANCE: Optimal meibography utilization and interpretation are hindered by poor lid presentation, blurry images, image artifacts, and the challenges of applying clinical grading scales. These results, based on the largest image dataset analyzed to date, demonstrate the development of algorithms that provide standardized, real-time inference addressing all of these limitations.

PURPOSE: This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.

METHODS: A total of 143,476 images were collected from sites across North America. Ophthalmologist and optometrist experts established ground-truth image quality and quantification (i.e., degree of gland absence). Annotated images were allocated into training, validation, and test sets. Convolutional neural networks within Google Cloud VertexAI trained three locally deployable or edge-based predictive models: image quality detection, over-flip detection, and gland absence detection. The algorithms were combined into an algorithmic pipeline onboard a LipiScan Dynamic Meibomian Imager to provide real-time clinical inference for new images. Performance metrics were generated for each algorithm in the pipeline onboard the LipiScan from naive image test sets.

RESULTS: Individual model performance metrics included the following: weighted average precision (image quality detection: 0.81, over-flip detection: 0.88, gland absence detection: 0.84), weighted average recall (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), weighted average F1 score (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.81), overall accuracy (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), Cohen κ (image quality detection: 0.60, over-flip detection: 0.62, gland absence detection: 0.71), Kendall τb (image quality detection: 0.61, p < 0.001; over-flip detection: 0.63, p < 0.001; gland absence detection: 0.67, p < 0.001), and Matthews correlation coefficient (image quality detection: 0.61, over-flip detection: 0.63, gland absence detection: 0.62). Area under the precision-recall curve (image quality detection: 0.87, over-flip detection: 0.92, gland absence detection: 0.89) and area under the receiver operating characteristic curve (image quality detection: 0.88, over-flip detection: 0.91, gland absence detection: 0.93) were calculated across a common set of thresholds ranging from 0 to 1.

CONCLUSIONS: Comparison of predictions from each model to expert panel ground-truth demonstrated strong association and moderate to substantial agreement. The findings and performance metrics show that the pipeline of algorithms provides standardized, real-time inference/prediction of meibomian gland absence.
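
The reported metric suite can be reproduced with scikit-learn and SciPy; a sketch on hypothetical grade labels:

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             matthews_corrcoef, precision_recall_fscore_support)

# Hypothetical ground-truth vs. predicted gland-absence grades (0-3)
y_true = np.array([0, 1, 2, 3, 2, 1, 0, 3, 2, 1])
y_pred = np.array([0, 1, 2, 2, 2, 1, 1, 3, 2, 0])

prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"weighted precision={prec:.2f} recall={rec:.2f} F1={f1:.2f}")
print(f"accuracy={accuracy_score(y_true, y_pred):.2f}")
print(f"Cohen kappa={cohen_kappa_score(y_true, y_pred):.2f}")
print(f"Matthews={matthews_corrcoef(y_true, y_pred):.2f}")
tau, p = kendalltau(y_true, y_pred)  # SciPy's default is the tau-b variant
print(f"Kendall tau-b={tau:.2f} (p={p:.3f})")
```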

RevDate: 2025-01-10
CmpDate: 2025-01-10

Lu C, Zhou J, Q Zou (2025)

An optimized approach for container deployment driven by a two-stage load balancing mechanism.

PloS one, 20(1):e0317039 pii:PONE-D-24-28787.

Lightweight container technology has emerged as a fundamental component of cloud-native computing, with the deployment of containers and the balancing of loads on virtual machines representing significant challenges. This paper presents an optimization strategy for container deployment that consists of two stages: coarse-grained and fine-grained load balancing. In the initial stage, a greedy algorithm is employed for coarse-grained deployment, facilitating the distribution of container services across virtual machines in a balanced manner based on resource requests. The subsequent stage utilizes a genetic algorithm for fine-grained resource allocation, ensuring an equitable distribution of resources to each container service on a single virtual machine. This two-stage optimization enhances load balancing and resource utilization throughout the system. Empirical results indicate that this approach is more efficient and adaptable in comparison to the Grey Wolf Optimization (GWO) Algorithm, the Simulated Annealing (SA) Algorithm, and the GWO-SA Algorithm, significantly improving both resource utilization and load balancing performance on virtual machines.
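
A minimal sketch of the coarse-grained first stage, assuming a greedy least-loaded placement with largest requests first; the authors' exact greedy rule may differ, and the genetic fine-grained stage is omitted.

```python
import heapq

def greedy_place(svc_requests, n_vms):
    """Coarse-grained stage: assign each container service (largest
    resource request first) to the currently least-loaded VM."""
    heap = [(0.0, vm) for vm in range(n_vms)]  # (load, vm id)
    heapq.heapify(heap)
    placement = {}
    for svc, req in sorted(svc_requests.items(), key=lambda kv: -kv[1]):
        load, vm = heapq.heappop(heap)
        placement[svc] = vm
        heapq.heappush(heap, (load + req, vm))
    return placement

services = {"web": 2.0, "db": 4.0, "cache": 1.0, "queue": 3.0, "auth": 1.5}
print(greedy_place(services, n_vms=2))  # e.g. {'db': 0, 'queue': 1, ...}
```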

RevDate: 2025-01-09
CmpDate: 2025-01-09

Kuang Y, Cao D, Jiang D, et al (2024)

CPhaMAS: The first pharmacokinetic analysis cloud platform developed by China.

Zhong nan da xue xue bao. Yi xue ban = Journal of Central South University. Medical sciences, 49(8):1290-1300.

OBJECTIVES: Software for pharmacological modeling and statistical analysis is essential for drug development and individualized treatment modeling. This study aims to develop a pharmacokinetic analysis cloud platform that leverages cloud-based benefits, offering a user-friendly interface with a smoother learning curve.

METHODS: The platform was built using Rails as the framework, developed in the Julia language, and employs a PostgreSQL 14 database, Redis caching, and Sidekiq for asynchronous task management. Four commonly used modules in clinical pharmacology research were developed: non-compartmental analysis, bioequivalence/bioavailability analysis, compartment model analysis, and population pharmacokinetics modeling. The platform ensured comprehensive data security and traceability through multiple safeguards, including data encryption, access control, transmission encryption, redundant backups, and log management. The platform underwent basic function, performance, reliability, usability, and scalability testing, along with practical case studies.

RESULTS: The CPhaMAS cloud platform successfully implemented the four module functionalities. The platform provides list-based navigation for users, featuring checkbox-style interactions. Through cloud computing, it allows direct online data analysis, saving computer storage and minimizing performance requirements. Modeling and visualization do not require programming knowledge. Basic functionality achieved 100% completion, with an average annual uptime of over 99%. Server response time was between 200 and 500 ms, and average CPU usage was maintained below 30%. In a practical case study, cefotaxime sodium/tazobactam sodium injection (6:1 ratio) displayed near-linear pharmacokinetics within a dose range of 1.0 to 4.0 g, with no significant effect of tazobactam on the pharmacokinetic parameters of cefotaxime, validating the platform's usability and reliability.

CONCLUSIONS: CPhaMAS provides an integrated modeling and statistical tool for educators, researchers, and industrial professionals, enabling non-compartmental analysis, bioequivalence/bioavailability analysis, compartmental model building, and population pharmacokinetic modeling and simulation.
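
As a flavor of what a non-compartmental analysis module computes, here is a sketch of AUC, Cmax, and Tmax from a hypothetical concentration-time profile using the linear trapezoidal rule; this is an illustration of the general technique, not CPhaMAS code.

```python
import numpy as np

# Hypothetical plasma concentration-time profile after a single dose
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])    # time (h)
c = np.array([0.0, 12.1, 18.4, 15.2, 9.8, 3.1, 1.0])  # concentration (mg/L)

# AUC(0-t) by the linear trapezoidal rule
auc_0_t = ((c[1:] + c[:-1]) / 2 * np.diff(t)).sum()
cmax, tmax = c.max(), t[c.argmax()]
print(f"AUC(0-12h)={auc_0_t:.1f} mg*h/L, Cmax={cmax} mg/L at Tmax={tmax} h")
```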

RevDate: 2025-01-09

Peng W, Hong Y, Chen Y, et al (2025)

AIScholar: An OpenFaaS-enhanced cloud platform for intelligent medical data analytics.

Computers in biology and medicine, 186:109648 pii:S0010-4825(24)01733-5 [Epub ahead of print].

This paper presents AIScholar, an intelligent research cloud platform developed based on artificial intelligence analysis methods and the OpenFaaS serverless framework, designed for highly scalable, intelligent analysis of clinical medical data. AIScholar simplifies the complex analysis process by encapsulating a wide range of medical data analytics methods into a series of customizable cloud tools that emphasize ease of use and expandability within OpenFaaS's serverless computing framework. As a multifaceted auxiliary tool in medical scientific exploration, AIScholar accelerates the deployment of computational resources, enabling clinicians and scientific personnel to derive new insights from clinical medical data with unprecedented efficiency. A case study focusing on breast cancer clinical data underscores the practicality that AIScholar offers to clinicians for diagnosis and decision-making. Insights generated by the platform have a direct impact on physicians' ability to identify and address clinical issues, underscoring its significance for real-world clinical practice. Consequently, AIScholar makes a meaningful impact on medical research and clinical practice by providing powerful analytical tools to clinicians and scientific personnel, thereby promoting significant advancements in the analysis of clinical medical data.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Nolasco M, M Balzarini (2025)

Assessment of temporal aggregation of Sentinel-2 images on seasonal land cover mapping and its impact on landscape metrics.

Environmental monitoring and assessment, 197(2):142.

Landscape metrics (LM) play a crucial role in fields such as urban planning, ecology, and environmental research, providing insights into the ecological and functional dynamics of ecosystems. However, in dynamic systems, generating thematic maps for LM analysis poses challenges due to the substantial data volume required and issues such as cloud cover interruptions. The aim of this study was to compare the accuracy of land cover maps produced by three temporal aggregation methods: median reflectance, maximum normalised difference vegetation index (NDVI), and a two-date image stack using Sentinel-2 (S2) and then to analyse their implications for LM calculation. The Google Earth Engine platform facilitated data filtering, image selection, and aggregation. A random forest algorithm was employed to classify five land cover classes across ten sites, with classification accuracy assessed using global measurements and the Kappa index. LM were then quantified. The analysis revealed that S2 data provided a high-quality, cloud-free dataset suitable for analysis, ensuring a minimum of 25 cloud-free pixels over the study period. The two-date and median methods exhibited superior land cover classification accuracy compared to the max NDVI method. In particular, the two-date method resulted in lower fragmentation-heterogeneity and complexity metrics in the resulting maps compared to the median and max NDVI methods. Nevertheless, the median method holds promise for integration into operational land cover mapping programmes, particularly for larger study areas exceeding the width of S2 swath coverage. We find patch density combined with conditional entropy to be particularly useful metrics for assessing fragmentation and configuration complexity.
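
A Google Earth Engine (Python API) sketch contrasting two of the three aggregation methods compared above, median reflectance and maximum NDVI; the collection, dates, site, and cloud threshold are illustrative assumptions.

```python
import ee
ee.Initialize()

region = ee.Geometry.Point(-63.9, -31.4).buffer(10000)  # illustrative site

def add_ndvi(img):
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(region)
      .filterDate("2022-09-01", "2023-03-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .map(add_ndvi))

median_composite = s2.median()                  # per-band median reflectance
max_ndvi_composite = s2.qualityMosaic("NDVI")   # per pixel, the date of max NDVI
# Either composite can then feed a random forest classifier for land cover.
```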

RevDate: 2025-01-08
CmpDate: 2025-01-08

Saeed A, A Khan M, Akram U, et al (2025)

Deep learning based approaches for intelligent industrial machinery health management and fault diagnosis in resource-constrained environments.

Scientific reports, 15(1):1114.

Industry 4.0 represents the fourth industrial revolution, which is characterized by the incorporation of digital technologies, the Internet of Things (IoT), artificial intelligence, big data, and other advanced technologies into industrial processes. Industrial Machinery Health Management (IMHM) is a crucial element, based on the Industrial Internet of Things (IIoT), which focuses on monitoring the health and condition of industrial machinery. The academic community has focused on various aspects of IMHM, such as prognostic maintenance, condition monitoring, estimation of remaining useful life (RUL), intelligent fault diagnosis (IFD), and architectures based on edge computing. Each of these categories holds its own significance in the context of industrial processes. In this survey, we specifically examine the research on RUL prediction, edge-based architectures, and intelligent fault diagnosis, with a primary focus on the domain of intelligent fault diagnosis. The importance of IFD methods in ensuring the smooth execution of industrial processes has become increasingly evident. However, most methods are formulated under the assumption of complete, balanced, and abundant data, which often does not align with real-world engineering scenarios. The difficulties linked to these classifications of IMHM have received noteworthy attention from the research community, leading to a substantial number of published papers on the topic. While there are existing comprehensive reviews that address major challenges and limitations in this field, there is still a gap in thoroughly investigating research perspectives across RUL prediction, edge-based architectures, and complete intelligent fault diagnosis processes. To fill this gap, we undertake a comprehensive survey that reviews and discusses research achievements in this domain, specifically focusing on IFD. Initially, we classify the existing IFD methods into three distinct perspectives: the method of processing data, which aims to optimize inputs for the intelligent fault diagnosis model and mitigate limitations in the training sample set; the method of constructing the model, which involves designing the structure and features of the model to enhance its resilience to challenges; and the method of optimizing training, which focuses on refining the training process for intelligent fault diagnosis models and emphasizes the importance of ideal data in the training process. Subsequently, the survey covers techniques related to RUL prediction and edge-cloud architectures for resource-constrained environments. Finally, this survey consolidates the outlook on relevant issues in IMHM, explores potential solutions, and offers practical recommendations for further consideration.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Ibrahem UM, Alblaihed MA, Altamimi AB, et al (2024)

Cloud computing practice activities and mental capacity on developing reproductive health and cognitive absorption.

African journal of reproductive health, 28(12):186-200.

The current study aims to determine how the interactions between practice style (distributed/focused) and mental capacity (high/low) in a cloud-computing environment (CCE) affect the development of reproductive health skills and cognitive absorption. The study employed an experimental design, with a categorical variable for mental capacity (low/high) and an independent variable with two types of activities (distributed/focused). The research sample consisted of 240 students from the College of Science and the College of Applied Medical Sciences at the University of Ha'il, divided into four experimental groups. The most significant finding was that the CCE clearly favored the group that studied with a focused practice style and high mental capacity on the reproductive health skills test, as opposed to a distributed practice style and low mental capacity on cognitive absorption. The findings will add to the ongoing debate over which of the two distributed/focused practice activity models is more effective in achieving desired educational results.

RevDate: 2025-01-08

Nur A, Demise A, Y Muanenda (2024)

Design and Evaluation of a Cloud Computing System for Real-Time Measurements in Polarization-Independent Long-Range DAS Based on Coherent Detection.

Sensors (Basel, Switzerland), 24(24):.

CloudSim is a versatile simulation framework for modeling cloud infrastructure components that supports customizable and extensible application provisioning strategies, allowing for the simulation of cloud services. On the other hand, Distributed Acoustic Sensing (DAS) is a ubiquitous technique used for measuring vibrations over an extended region. Data handling in DAS remains an open issue, as many applications need continuous monitoring of a volume of samples whose storage and processing in real time require high-capacity memory and computing resources. We employ the CloudSim tool to design and evaluate a cloud computing scheme for long-range, polarization-independent DAS using coherent detection of Rayleigh backscattering signals and uncover valuable insights on the evolution of the processing times for a diverse range of Virtual Machine (VM) capacities as well as sizes of blocks of processed data. Our analysis demonstrates that the choice of VM significantly impacts computational times in real-time measurements in long-range DAS and that achieving polarization independence introduces minimal processing overheads in the system. Additionally, the increase in the block size of processed samples per cycle results in diminishing increments in overall processing times per batch of new samples added, demonstrating the scalability of cloud computing schemes in long-range DAS and its capability to manage larger datasets efficiently.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Khabti J, AlAhmadi S, A Soudani (2024)

Enhancing Deep-Learning Classification for Remote Motor Imagery Rehabilitation Using Multi-Subject Transfer Learning in IoT Environment.

Sensors (Basel, Switzerland), 24(24):.

One of the most promising applications for electroencephalogram (EEG)-based brain-computer interfaces (BCIs) is motor rehabilitation through motor imagery (MI) tasks. However, current MI training requires physical attendance, while remote MI training can be applied anywhere, facilitating flexible rehabilitation. Providing remote MI training raises challenges to ensuring an accurate recognition of MI tasks by healthcare providers, in addition to managing computation and communication costs. The MI tasks are recognized through EEG signal processing and classification, which can drain sensor energy due to the complexity of the data and the presence of redundant information, often influenced by subject-dependent factors. To address these challenges, we propose in this paper a multi-subject transfer-learning approach for an efficient MI training framework in remote rehabilitation within an IoT environment. For efficient implementation, we propose an IoT architecture that includes cloud/edge computing as a solution to enhance the system's efficiency and reduce the use of network resources. Furthermore, deep-learning classification with and without channel selection is applied in the cloud, while multi-subject transfer-learning classification is utilized at the edge node. Various transfer-learning strategies, including different epochs, freezing layers, and data divisions, were employed to improve accuracy and efficiency. To validate this framework, we used the BCI IV 2a dataset, focusing on subjects 7, 8, and 9 as targets. The results demonstrated that our approach significantly enhanced the average accuracy in both multi-subject and single-subject transfer-learning classification. In three-subject transfer-learning classification, the FCNNA model achieved up to 79.77% accuracy without channel selection and 76.90% with channel selection. For two-subject and single-subject transfer learning, the application of transfer learning improved the average accuracy by up to 6.55% and 12.19%, respectively, compared to classification without transfer learning. This framework offers a promising solution for remote MI rehabilitation, providing both accurate task recognition and efficient resource usage.
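
A hedged Keras sketch of the freezing-layers strategy mentioned above: pre-train on source subjects, freeze the feature extractor, and fine-tune only the head on the target subject. The architecture is hypothetical; only the input/output dimensions follow BCI IV 2a (22 EEG channels, 4 motor imagery classes).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical source model, assumed pre-trained on other subjects' EEG windows
base = models.Sequential([
    layers.Input(shape=(1000, 22)),          # 1000 samples x 22 EEG channels
    layers.Conv1D(16, 25, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 11, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),   # 4 MI classes (BCI IV 2a)
])
# base.load_weights("source_subjects.weights.h5")  # assumed pre-trained weights

# Freeze the convolutional feature extractor; fine-tune only the dense head.
for layer in base.layers[:-2]:
    layer.trainable = False

base.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# base.fit(target_x, target_y, epochs=20)  # brief fine-tuning on target subject
```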

RevDate: 2025-01-08
CmpDate: 2025-01-08

Barthelemy J, Iqbal U, Qian Y, et al (2024)

Safety After Dark: A Privacy Compliant and Real-Time Edge Computing Intelligent Video Analytics for Safer Public Transportation.

Sensors (Basel, Switzerland), 24(24):.

Public transportation systems play a vital role in modern cities, but they face growing security challenges, particularly related to incidents of violence. Detecting and responding to violence in real time is crucial for ensuring passenger safety and the smooth operation of these transport networks. To address this issue, we propose an advanced artificial intelligence (AI) solution for identifying unsafe behaviours in public transport. The proposed approach employs deep learning action recognition models and utilises technologies like the NVIDIA DeepStream SDK, Amazon Web Services (AWS) DirectConnect, a local edge computing server, ONNXRuntime, and MQTT to accelerate the end-to-end pipeline. The solution captures video streams from remote train stations' closed-circuit television (CCTV) networks, processes the data in the cloud, applies the action recognition model, and transmits the results to a live web application. A temporal pyramid network (TPN) action recognition model was trained on a newly curated video dataset mixing open-source resources and live simulated trials to identify the unsafe behaviours. The base model achieved a validation accuracy of 93% when trained on open-source dataset samples, which improved to 97% when the live simulated dataset was included during training. The developed AI system was deployed at Wollongong Train Station (NSW, Australia) and showed impressive accuracy in detecting violence incidents during an 8-week test period, achieving a false-positive (FP) rate of 23%. While the AI correctly identified 30 true-positive incidents, there were 6 false negatives (FNs) in which violence incidents were missed during rainy weather, suggesting that the training dataset needs more bad-weather data. The AI model's continuous retraining capability ensures its adaptability to various real-world scenarios, making it a valuable tool for enhancing safety and the overall passenger experience in public transport settings.

RevDate: 2025-01-08

Li L, Zhu L, W Li (2024)

Cloud-Edge-End Collaborative Federated Learning: Enhancing Model Accuracy and Privacy in Non-IID Environments.

Sensors (Basel, Switzerland), 24(24):.

Cloud-edge-end computing architecture is crucial for large-scale edge data processing and analysis. However, the diversity of terminal nodes and task complexity in this architecture often result in non-independent and identically distributed (non-IID) data, making it challenging to balance data heterogeneity and privacy protection. To address this, we propose a privacy-preserving federated learning method based on cloud-edge-end collaboration. Our method fully considers the three-tier architecture of cloud-edge-end systems and the non-IID nature of terminal node data. It enhances model accuracy while protecting the privacy of terminal node data. The proposed method groups terminal nodes based on the similarity of their data distributions and constructs edge subnetworks for training in collaboration with edge nodes, thereby mitigating the negative impact of non-IID data. Furthermore, we enhance WGAN-GP with an attention mechanism to generate balanced synthetic data while preserving key patterns from the original datasets, reducing the adverse effects of non-IID data on global model accuracy while preserving data privacy. In addition, we introduce data resampling and loss function weighting strategies to mitigate model bias caused by imbalanced data distribution. Experimental results on real-world datasets demonstrate that our proposed method significantly outperforms existing approaches in terms of model accuracy, F1-score, and other metrics.
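
The grouping and WGAN-GP components are specific to this paper, but the federated aggregation step underneath can be sketched. Below is a standard FedAvg-style weighted average in NumPy, assumed here as the baseline aggregator rather than the authors' exact procedure.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model weights by a data-size-weighted average
    (FedAvg). Each element of client_weights is a list of numpy arrays,
    one array per model layer."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w
    return agg

# Three edge subnetworks report the layer weights of a tiny 2-layer model
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[120, 300, 80])
print(global_model[0].shape, global_model[1].shape)  # (4, 3) (3,)
```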

RevDate: 2025-01-08
CmpDate: 2025-01-08

Cruz Castañeda WA, P Bertemes Filho (2024)

Improvement of an Edge-IoT Architecture Driven by Artificial Intelligence for Smart-Health Chronic Disease Management.

Sensors (Basel, Switzerland), 24(24):.

One of the health challenges in the 21st century is to rethink approaches to non-communicable disease prevention. A solution is a smart city that implements technology to make health smarter, enables healthcare access, and contributes to all residents' overall well-being. Thus, this paper proposes an architecture to deliver smart health. The architecture is anchored in the Internet of Things and edge computing, and it is driven by artificial intelligence to establish three foundational layers in smart care. Experimental results in a case study on non-invasive glucose prediction show that the architecture senses and acquires data that capture relevant characteristics. The study also establishes a baseline of twelve regression algorithms, assessing non-invasive glucose prediction performance in terms of mean squared error (MSE), root mean squared error (RMSE), and the R² score; the CatBoost regressor outperforms the other models with MSEs of 218.91 and 782.30, RMSEs of 14.80 and 27.97, and R² scores of 0.81 and 0.31 on the training and test sets, respectively. Future research will involve extending the performance of the algorithms with new datasets, creating and optimizing embedded AI models, deploying edge-IoT with embedded AI for wearable devices, implementing an autonomous AI cloud engine, and implementing federated learning to deliver scalable smart health in a smart-city context.

RevDate: 2025-01-08

Podgorelec D, Strnad D, Kolingerová I, et al (2024)

State-of-the-Art Trends in Data Compression: COMPROMISE Case Study.

Entropy (Basel, Switzerland), 26(12): pii:e26121032.

After a boom that coincided with the advent of the internet, digital cameras, digital video and audio storage and playback devices, the research on data compression has rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of the time, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight. In this article, we try to critically evaluate the most prominent of these trends and to explore their parallels, complementarities, and differences. Digital data restoration mimics the human ability to omit memorising information that is satisfactorily retrievable from the context. Feature-based data compression introduces a two-level data representation with higher-level semantic features and with residuals that correct the feature-restored (predicted) data. The integration of the advantages of individual domain-specific data compression methods into a general approach is also challenging. To the best of our knowledge, a method that addresses all these trends does not exist yet. Our methodology, COMPROMISE, has been developed exactly to make as many solutions to these challenges as possible inter-operable. It incorporates features and digital restoration. Furthermore, it is largely domain-independent (general), asymmetric, and universal. The latter refers to the ability to compress data in a common framework in a lossy, lossless, and near-lossless mode. COMPROMISE may also be considered an umbrella that links many existing domain-dependent and independent methods, supports hybrid lossless-lossy techniques, and encourages the development of new data compression algorithms.
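
A toy illustration of the feature/residual idea discussed above: predict each sample from the previously reconstructed one and store the residuals, giving lossless coding when residuals are kept exact and near-lossless coding with a bounded per-sample error when they are quantized. This is a generic sketch of the technique, not the COMPROMISE method itself.

```python
def encode(signal, q=0):
    """Predictive residual coder: predict each sample by the previous
    reconstructed one and store (optionally quantized) residuals.
    q = 0 gives lossless coding; q > 0 gives near-lossless coding with
    a maximum absolute error of q / 2 per sample (closed-loop prediction
    prevents error accumulation)."""
    residuals, prev = [], 0
    for x in signal:
        r = x - prev
        if q > 0:
            r = int(round(r / q)) * q  # quantized residual
        residuals.append(r)
        prev = prev + r                # decoder-side reconstruction
    return residuals

def decode(residuals):
    out, prev = [], 0
    for r in residuals:
        prev += r
        out.append(prev)
    return out

sig = [10, 12, 13, 13, 15, 20, 19]
assert decode(encode(sig)) == sig                       # lossless (q = 0)
near = decode(encode(sig, q=4))
assert max(abs(a - b) for a, b in zip(near, sig)) <= 2  # bounded error (q / 2)
```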

RevDate: 2025-01-06
CmpDate: 2025-01-07

Yang M, Zhu X, Yan F, et al (2025)

Digital-based emergency prevention and control system: enhancing infection control in psychiatric hospitals.

BMC medical informatics and decision making, 25(1):7.

BACKGROUND: The practical application of infectious disease emergency plans in mental health institutions during the ongoing pandemic has revealed significant shortcomings. These manifest as chaotic management of mental health care, a lack of hospital infection prevention and control (IPC) knowledge among medical staff, and unskilled practical operation. These factors result in suboptimal decision-making and emergency response execution. Consequently, we have developed a digital-based emergency prevention and control system to reinforce IPC management in psychiatric hospitals and enhance the hospital IPC capabilities of medical staff.

METHODS: The system incorporates modern technologies such as cloud computing, big data, streaming media, and knowledge graphs. A cloud service platform was established at the PaaS layer using Docker container technology to manage infectious disease emergency-related services. The system provides application services to various users through a Browser/Server Architecture. The system was implemented in a class A tertiary mental health center from March 1st, 2022, to February 28th, 2023. Twelve months of emergency IPC training and education were conducted based on the system. The system's functions and the users' IPC capabilities were evaluated.

RESULTS: A total of 116 employees participated in using the system. The system performance evaluation indicated that functionality (3.78 ± 0.68), practicality (4.02 ± 0.74), reliability (3.45 ± 0.50), efficiency (4.14 ± 0.69), accuracy (3.36 ± 0.58), and assessability (3.05 ± 0.47) met basic levels (> 3), with efficiency and practicality reaching a good level (> 4). After 12 months of training and study based on the system, the participants demonstrated improved emergency knowledge (χ² = 37.69, p < 0.001) and skills (p < 0.001).

CONCLUSION: The findings of this study indicate that the digital-based emergency IPC system has the potential to enhance the emergency IPC knowledge base and operational skills of medical personnel in psychiatric hospitals. Furthermore, the medical personnel appear to be better adapted to the system. Consequently, the system has the capacity to facilitate the emergency IPC response of psychiatric institutions to infectious diseases, while simultaneously optimising the training and educational methodologies employed in emergency prevention and control. The promotion and application of this system in psychiatric institutions has the potential to accelerate the digitalisation and intelligence construction of psychiatric hospitals.

RevDate: 2025-01-07

Vandewinckele L, Benazzouz C, Delombaerde L, et al (2024)

Pro-active risk analysis of an in-house developed deep learning based autoplanning tool for breast Volumetric Modulated Arc Therapy.

Physics and imaging in radiation oncology, 32:100677.

BACKGROUND AND PURPOSE: With the increasing amount of in-house created deep learning models in radiotherapy, it is important to know how to minimise the risks associated with the local clinical implementation prior to clinical use. The goal of this study is to give an example of how to identify the risks and find mitigation strategies to reduce these risks in an implemented workflow containing a deep learning based planning tool for breast Volumetric Modulated Arc Therapy.

MATERIALS AND METHODS: The deep learning model ran on a private Google Cloud environment for adequate computational capacity and was integrated into a workflow that could be initiated within the clinical Treatment Planning System (TPS). A proactive Failure Mode and Effect Analysis (FMEA) was conducted by a multidisciplinary team, including physicians, physicists, dosimetrists, technologists, quality managers, and the research and development team. Failure modes categorised as 'Not acceptable' and 'Tolerable' on the risk matrix were further examined to find mitigation strategies.

RESULTS: In total, 39 failure modes were defined for the total workflow, divided over four steps. Of these, 33 were deemed 'Acceptable', five 'Tolerable', and one 'Not acceptable'. Mitigation strategies, such as a case-specific Quality Assurance report, additional scripted checks and properties, a pop-up window, and time stamp analysis, reduced the failure modes to two 'Tolerable' and none in the 'Not acceptable' region.

CONCLUSIONS: The pro-active risk analysis revealed possible risks in the implemented workflow and led to the implementation of mitigation strategies that decreased the risk scores for safer clinical use.
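For readers unfamiliar with FMEA scoring, the sketch below illustrates the common Risk Priority Number formulation (RPN = severity × occurrence × detectability) and how failure modes might be binned into the 'Acceptable'/'Tolerable'/'Not acceptable' categories mentioned above; the paper's actual risk matrix, scales, and thresholds are not given in the abstract, so all numbers here are assumptions.

```python
# Minimal FMEA sketch with hypothetical scales and cut-offs.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the classic FMEA product score.
        return self.severity * self.occurrence * self.detectability

def categorise(fm: FailureMode) -> str:
    # Hypothetical thresholds for illustration only.
    if fm.rpn >= 200:
        return "Not acceptable"
    if fm.rpn >= 100:
        return "Tolerable"
    return "Acceptable"

modes = [
    FailureMode("Wrong CT series selected as model input", 8, 3, 9),
    FailureMode("Dose prediction silently outdated", 7, 3, 5),
]
for fm in modes:
    print(f"{fm.name}: RPN={fm.rpn} -> {categorise(fm)}")
```

Mitigation strategies such as the scripted checks and pop-up windows described in the RESULTS typically work by lowering occurrence or detectability scores, which moves a failure mode into a lower-risk category.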

RevDate: 2025-01-06

Li S, Wan H, Yu Q, et al (2025)

Downscaling of ERA5 reanalysis land surface temperature based on attention mechanism and Google Earth Engine.

Scientific reports, 15(1):675.

Land Surface Temperature (LST) is widely recognized as a sensitive indicator of climate change, and it plays a significant role in ecological research. The ERA5-Land LST dataset, developed and managed by the European Centre for Medium-Range Weather Forecasts (ECMWF), is extensively used for global and regional LST studies. However, its fine-scale application is limited by its low spatial resolution. Therefore, to improve the spatial resolution of ERA5-Land LST data, this study proposes an Attention Mechanism U-Net (AMUN) method, which combines data acquisition and preprocessing on the Google Earth Engine (GEE) cloud computing platform, to downscale the hourly monthly mean reanalysis LST data of ERA5-Land across China's territory from 0.1° to 0.01°. This method comprehensively considers the relationship between LST and surface features, organically combining multiple deep learning modules, including the Global Multi-Factor Cross-Attention (GMFCA) module, the Feature Fusion Residual Dense Block (FFRDB) connection module, and the U-Net module. In addition, a Bayesian global optimization algorithm is used to select the optimal hyperparameters of the network in order to enhance predictive performance. Finally, the downscaling accuracy of the network was evaluated through simulated-data and real-data experiments and compared with the Random Forest (RF) method. The results show that the network proposed in this study outperforms the RF method, with RMSE reduced by approximately 32-51%. The downscaling method proposed in this study can effectively improve the accuracy of ERA5-Land LST downscaling, providing new insights for LST downscaling research.
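As a rough illustration of the GEE-based data acquisition step described above, the sketch below uses the Earth Engine Python API to assemble monthly ERA5-Land LST over China. The dataset and band identifiers (ECMWF/ERA5_LAND/MONTHLY_AGGR, skin_temperature) reflect the public GEE catalogue; the date range, boundary source, and aggregation are illustrative assumptions rather than the paper's actual preprocessing.

```python
# Minimal sketch: assembling an ERA5-Land monthly LST stack in Google Earth Engine.
# Assumes prior authentication (ee.Authenticate()) and an initialized session.
import ee

ee.Initialize()

# Study-area boundary from the public FAO GAUL dataset (an assumption here).
china = ee.FeatureCollection("FAO/GAUL/2015/level0") \
          .filter(ee.Filter.eq("ADM0_NAME", "China"))

monthly_lst = (
    ee.ImageCollection("ECMWF/ERA5_LAND/MONTHLY_AGGR")
      .filterDate("2015-01-01", "2021-01-01")
      .select("skin_temperature")        # native ~0.1 degree resolution
)

# Mean LST image clipped to the study area, ready for export as a
# downscaling target alongside finer-resolution covariates.
mean_lst = monthly_lst.mean().clip(china.geometry())
print(mean_lst.bandNames().getInfo())
```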

RevDate: 2025-01-04

Belbase P, Bhusal R, Ghimire SS, et al (2024)

Assuring assistance to healthcare and medicine: Internet of Things, Artificial Intelligence, and Artificial Intelligence of Things.

Frontiers in artificial intelligence, 7:1442254.

INTRODUCTION: The convergence of healthcare with the Internet of Things (IoT) and Artificial Intelligence (AI) is reshaping medical practice with promising enhanced data-driven insights, automated decision-making, and remote patient monitoring. These technologies have the transformative potential to revolutionize diagnosis, treatment, and patient care.

PURPOSE: This study aims to explore the integration of IoT and AI in healthcare, outlining their applications, benefits, challenges, and potential risks. By synthesizing existing literature, this study aims to provide insights into the current landscape of AI, IoT, and AIoT in healthcare, identify areas for future research and development, and establish a framework for the effective use of AI in health.

METHOD: A comprehensive literature review included indexed databases such as PubMed/Medline, Scopus, and Google Scholar. Key search terms related to IoT, AI, healthcare, and medicine were employed to identify relevant studies. Papers were screened based on their relevance to the specified themes, and eventually, a selected number of papers were methodically chosen for this review.

RESULTS: The integration of IoT and AI in healthcare offers significant advancements, including remote patient monitoring, personalized medicine, and operational efficiency. Wearable sensors, cloud-based data storage, and AI-driven algorithms enable real-time data collection, disease diagnosis, and treatment planning. However, challenges such as data privacy, algorithmic bias, and regulatory compliance must be addressed to ensure responsible deployment of these technologies.

CONCLUSION: Integrating IoT and AI in healthcare holds immense promise for improving patient outcomes and optimizing healthcare delivery. Despite challenges such as data privacy concerns and algorithmic biases, the transformative potential of these technologies cannot be overstated. Clear governance frameworks, transparent AI decision-making processes, and ethical considerations are essential to mitigate risks and harness the full benefits of IoT and AI in healthcare.

RevDate: 2025-01-01

Dommer J, Van Doorslaer K, Afrasiabi C, et al (2024)

PaVE 2.0: Behind the Scenes of the Papillomavirus Episteme.

Journal of molecular biology pii:S0022-2836(24)00555-2 [Epub ahead of print].

The Papillomavirus Episteme (PaVE) https://pave.niaid.nih.gov/ was initiated by NIAID in 2008 to provide a highly curated bioinformatic and knowledge resource for the papillomavirus scientific community. It rapidly became the fundamental and core resource for papillomavirus researchers and clinicians worldwide. Over time, the software infrastructure became severely outdated. In PaVE 2.0, the underlying libraries and hosting platform have been completely upgraded and rebuilt using Amazon Web Services (AWS) tools and automated CI/CD (continuous integration and deployment) pipelines for deployment of the application and data (now in AWS S3 cloud storage). PaVE 2.0 is hosted on three AWS ECS (Elastic Container Service) services using the NIAID Operations & Engineering Branch's Monarch tech stack and Terraform. A new Celery queue supports longer-running tasks. The framework is Python Flask with a JavaScript/JINJA template front end, and the database switched from MySQL to Neo4j. A Swagger API (Application Programming Interface) performs database queries, executes jobs for BLAST, MAFFT, and the L1 typing tool, and will allow future programmatic data access. All major tools, such as BLAST, the L1 typing tool, the genome locus viewer, the phylogenetic tree generator, multiple sequence alignment, and the protein structure viewer, were modernized and enhanced to support more users. Multiple sequence alignment uses MAFFT instead of COBALT. The protein structure viewer was changed from Jmol to Mol*, the new embeddable viewer used by RCSB (Research Collaboratory for Structural Bioinformatics). In summary, PaVE 2.0 allows us to continue to provide this essential resource with an open-source framework that could be used as a template for molecular biology databases of other viruses.
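The Flask-plus-Celery pattern mentioned above, where a web route enqueues long-running jobs such as BLAST searches instead of blocking the request, can be sketched as follows; the broker URL, route name, and task body are illustrative assumptions, not PaVE 2.0's actual code.

```python
# Minimal sketch of a Flask route that offloads a long-running job to a Celery
# worker via a Redis broker (both hypothetical deployment choices here).
from celery import Celery
from flask import Flask, jsonify, request

app = Flask(__name__)
celery = Celery(app.name,
                broker="redis://localhost:6379/0",
                backend="redis://localhost:6379/1")

@celery.task
def run_blast(sequence: str) -> str:
    # Placeholder for invoking the real BLAST binary on the worker node.
    return f"report for {len(sequence)}-nt query"

@app.route("/blast", methods=["POST"])
def submit_blast():
    seq = request.json["sequence"]
    job = run_blast.delay(seq)   # enqueue instead of blocking the web request
    return jsonify({"job_id": job.id}), 202
```

The 202 response with a job identifier lets the front end poll for results, which is the usual reason a queue such as Celery is introduced for tools with unpredictable runtimes.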

RevDate: 2025-01-04

Dugyala R, Chithaluru P, Ramchander M, et al (2024)

Secure cloud computing: leveraging GNN and leader K-means for intrusion detection optimization.

Scientific reports, 14(1):30906 pii:10.1038/s41598-024-81442-7.

Over the past two decades, cloud computing has experienced exponential growth, becoming a critical resource for organizations and individuals alike. However, this rapid adoption has introduced significant security challenges, particularly in intrusion detection, where traditional systems often struggle with low detection accuracy and high processing times. To address these limitations, this research proposes an optimized Intrusion Detection System (IDS) that leverages Graph Neural Networks and the Leader K-means clustering algorithm. The primary aim of the study is to enhance both the accuracy and efficiency of intrusion detection within cloud environments. Key contributions of this work include the integration of the Leader K-means algorithm for effective data clustering, improving the IDS's ability to differentiate between normal and malicious activities. Additionally, the study introduces an optimized Grasshopper Optimization algorithm, which enhances the performance of the Optimal Neural Network, further refining detection accuracy. For added data security, the system incorporates Advanced Encryption Standard encryption and steganography, ensuring robust protection of sensitive information. The proposed solution has been implemented on the Java platform with CloudSim support, and the findings demonstrate a significant improvement in both detection accuracy and processing efficiency compared to existing methods. This research presents a comprehensive solution to the ongoing security challenges in cloud computing, offering a valuable contribution to the field.
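As background for the clustering component named above, the sketch below implements the classic "leader" clustering heuristic that Leader K-means builds on: each point joins the nearest existing leader within a distance threshold, otherwise it founds a new cluster. The threshold and data are illustrative, and the paper's exact Leader K-means variant may differ.

```python
# Minimal sketch of leader clustering on synthetic 2-D data.
import numpy as np

def leader_cluster(X: np.ndarray, threshold: float):
    leaders, labels = [], []
    for x in X:
        dists = [np.linalg.norm(x - l) for l in leaders]
        if dists and min(dists) <= threshold:
            labels.append(int(np.argmin(dists)))   # join nearest leader
        else:
            leaders.append(x)                      # x founds a new cluster
            labels.append(len(leaders) - 1)
    return np.array(leaders), np.array(labels)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
leaders, labels = leader_cluster(X, threshold=1.0)
print(len(leaders), "leaders found")   # expected: 2 for well-separated blobs
```

Because it makes a single pass over the data, the leader step is often used to seed or accelerate K-means on large traffic logs, which is plausibly why it appears in an IDS pipeline.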

RevDate: 2025-01-04

Ahmad SZ, F Qamar (2024)

A hybrid AI based framework for enhancing security in satellite based IoT networks using high performance computing architecture.

Scientific reports, 14(1):30695.

IoT device security has become a major concern as a result of the rapid expansion of the Internet of Things (IoT) and the growing adoption of cloud computing for central monitoring and management. To receive centrally managed services, each IoT device has to connect to its respective High-Performance Computing (HPC) cloud. The ever-increasing deployment of IoT devices linked to HPC clouds uses various media, both wired and wireless, and the security challenges increase further when these devices communicate over satellite links. This Satellite-Based IoT-HPC Cloud architecture poses new security concerns that exacerbate the problem. An intrusion detection technology integrated into the central cloud is suggested as a potential remedy to monitor and detect aberrant activity within the network. However, the enormous amounts of data generated by IoT devices and their constrained computing power do not allow intrusion detection to be implemented at the source, rendering typical central Intrusion Detection Systems (IDS) ineffective. Moreover, powerful intrusion detection techniques are required to protect these systems, given the inherent vulnerabilities of IoT devices and the possible hazards during data transmission. A literature survey revealed that prior work detects only a few types of attacks using conventional IDS designs, and that computational expense in terms of processing time is an important parameter to consider. This work introduces a novel Embedded Hybrid Deep learning-based Intrusion Detection technique (EHID) created specifically for IoT devices linked to HPC clouds via satellite connectivity. Two Deep Learning (DL) algorithms are integrated in the proposed method to improve detection abilities with good accuracy while considering the processing time and number of trainable parameters, detecting 14 types of threats and segregating normal from attack traffic. We also modify the conventional IDS approach and propose an architectural change to harness the processing power of the cloud's central server. This hybrid approach effectively detects threats by combining the computing power available in the HPC cloud with the power of AI. Additionally, the proposed system enables real-time monitoring and detection of intrusions while providing monitoring and management services through HPC using IoT-generated data. Experiments on the Edge-IIoTset Cyber Security Dataset of IoT & IIoT indicate improved detection accuracy, reduced false positives, and efficient computational performance.

RevDate: 2024-12-27

Salcedo E (2024)

Computer Vision-Based Gait Recognition on the Edge: A Survey on Feature Representations, Models, and Architectures.

Journal of imaging, 10(12): pii:jimaging10120326.

Computer vision-based gait recognition (CVGR) is a technology that has gained considerable attention in recent years due to its non-invasive, unobtrusive, and difficult-to-conceal nature. Beyond its applications in biometrics, CVGR holds significant potential for healthcare and human-computer interaction. Current CVGR systems often transmit collected data to a cloud server for machine learning-based gait pattern recognition. While effective, this cloud-centric approach can result in increased system response times. Alternatively, the emerging paradigm of edge computing, which involves moving computational processes to local devices, offers the potential to reduce latency, enable real-time surveillance, and eliminate reliance on internet connectivity. Furthermore, recent advancements in low-cost, compact microcomputers capable of handling complex inference tasks (e.g., Jetson Nano Orin, Jetson Xavier NX, and Khadas VIM4) have created exciting opportunities for deploying CVGR systems at the edge. This paper reports the state of the art in gait data acquisition modalities, feature representations, models, and architectures for CVGR systems suitable for edge computing. Additionally, this paper addresses the general limitations and highlights new avenues for future research in the promising intersection of CVGR and edge computing.

RevDate: 2025-01-04

Chen J, Hoops S, Mortveit HS, et al (2025)

Epihiper: A high-performance computational modeling framework to support epidemic science.

PNAS nexus, 4(1):pgae557.

This paper describes Epihiper, a state-of-the-art, high performance computational modeling framework for epidemic science. The Epihiper modeling framework supports custom disease models, and can simulate epidemics over dynamic, large-scale networks while supporting modulation of the epidemic evolution through a set of user-programmable interventions. The nodes and edges of the social-contact network have customizable sets of static and dynamic attributes which allow the user to specify intervention target sets at a very fine-grained level; these also permit the network to be updated in response to nonpharmaceutical interventions, such as school closures. The execution of interventions is governed by trigger conditions, which are Boolean expressions formed using any of Epihiper's primitives (e.g. the current time, transmissibility) and user-defined sets (e.g. people with work activities). Rich expressiveness, extensibility, and high-performance computing responsiveness were central design goals to ensure that the framework could effectively target realistic scenarios at the scale and detail required to support the large computational designs needed by state and federal public health policymakers in their efforts to plan and respond in the event of epidemics. The modeling framework has been used to support the CDC Scenario Modeling Hub for COVID-19 response, and was a part of a hybrid high-performance cloud system that was nominated as a finalist for the 2021 ACM Gordon Bell Special Prize for high performance computing-based COVID-19 Research.
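To make the trigger-condition idea concrete, the sketch below shows Boolean trigger expressions over simulation primitives (current time, infected counts) and user-defined sets, in the style the abstract describes; the names and structure here are illustrative Python, not Epihiper's actual configuration syntax.

```python
# Minimal sketch of user-programmable intervention triggers.
from dataclasses import dataclass, field
from typing import Callable, Set

@dataclass
class SimState:
    time: int
    transmissibility: float
    infected: Set[int] = field(default_factory=set)
    workers: Set[int] = field(default_factory=set)

# Trigger: after day 30, if more than 5% of workers are infected,
# fire a workplace-closure intervention.
def workplace_trigger(s: SimState) -> bool:
    infected_workers = s.infected & s.workers
    return s.time > 30 and len(infected_workers) > 0.05 * max(len(s.workers), 1)

interventions = [
    (workplace_trigger, "close_workplaces"),
]

state = SimState(time=42, transmissibility=0.3,
                 infected={1, 2, 3}, workers={1, 2, 3, 4, 5, 6})
for trigger, action in interventions:
    if trigger(state):
        print("firing intervention:", action)
```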

RevDate: 2025-01-04
CmpDate: 2024-12-19

Blindenbach J, Kang J, Hong S, et al (2024)

SQUiD: ultra-secure storage and analysis of genetic data for the advancement of precision medicine.

Genome biology, 25(1):314.

Cloud computing allows storing the ever-growing genotype-phenotype datasets crucial for precision medicine. Due to the sensitive nature of this data and varied laws and regulations, additional security measures are needed to ensure data privacy. We develop SQUiD, a secure queryable database for storing and analyzing genotype-phenotype data. SQUiD allows storage and secure querying of data in a low-security, low-cost public cloud using homomorphic encryption in a multi-client setting. We demonstrate SQUiD's practical usability and scalability using synthetic and UK Biobank data.

RevDate: 2025-01-04
CmpDate: 2024-12-17

Ma'moun S, Farag R, Abutaleb K, et al (2024)

Habitat Suitability Modelling for the Red Dwarf Honeybee (Apis florea (Linnaeus)) and Its Distribution Prediction Using Machine Learning and Cloud Computing.

Neotropical entomology, 54(1):18.

Apis florea bees were recently identified in Egypt, marking the second occurrence of this species on the African continent. The objective of this study was to track the distribution of A. florea in Egypt and evaluate its potential for invasive behaviour. Field surveys were conducted over a 2-year period, resulting in the collection of data on the spatial distribution of the red dwarf honeybees. A comprehensive analysis was performed utilizing long-term monthly temperature and rainfall data to generate spatially interpolated climate surfaces with a 1-km resolution. Vegetation variables derived from Terra MODIS were also incorporated. Furthermore, elevation data obtained from the Shuttle Radar Topography Mission were utilized to derive slope, aspect, and hillshade based on the digital elevation model. The collected data were resampled for optimal data smoothing. Subsequently, a random forest model was applied, followed by an accuracy assessment to evaluate the classification output. The results identified the mean temperature of the coldest quarter (bio11), annual mean temperature (bio01), and minimum temperature of the coldest month (bio06) as the most important temperature-derived parameters; annual precipitation (bio12) and precipitation of the wettest quarter (bio16) as the key precipitation parameters; and the non-tree vegetation parameter and elevation as additional important predictors. The calculation of the Habitat Suitability Index revealed that the most suitable areas, covering a total of 200131.9 km², were predominantly situated in the eastern and northern regions of Egypt, including the Nile Delta, characterized by its fertile agricultural lands and the presence of the river Nile. In contrast, the western and southern parts exhibited low habitat suitability due to the absence of significant green vegetation and low relative humidity.
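The random-forest suitability step can be sketched as follows, assuming presence/background points paired with the bioclimatic covariates named above (bio01, bio06, bio11, bio12, bio16, elevation); the data here are synthetic and the hyperparameters are illustrative, not the study's settings.

```python
# Minimal sketch: random-forest habitat suitability on synthetic covariates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(22, 3, n),    # bio01: annual mean temperature
    rng.normal(8, 2, n),     # bio06: min temperature of coldest month
    rng.normal(14, 2, n),    # bio11: mean temperature of coldest quarter
    rng.gamma(2, 15, n),     # bio12: annual precipitation
    rng.gamma(2, 8, n),      # bio16: precipitation of wettest quarter
    rng.uniform(0, 800, n),  # elevation
])
y = rng.integers(0, 2, n)    # 1 = presence, 0 = background (synthetic labels)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

# Per-pixel suitability would come from rf.predict_proba over a covariate raster;
# feature importances play the role of the variable ranking reported above.
for name, imp in zip(["bio01", "bio06", "bio11", "bio12", "bio16", "elev"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```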

RevDate: 2025-01-04

Zhou J, Chen S, Kuang H, et al (2024)

Optimal robust configuration in cloud environment based on heuristic optimization algorithm.

PeerJ. Computer science, 10:e2350.

To analyze performance in cloud computing, some unpredictable perturbations that may lead to performance degradation are essential factors that should not be neglected. To prevent performance degradation in cloud computing systems, it is reasonable to measure the impact of the perturbations and propose a robust configuration strategy to maintain the performance of the system at an acceptable level. In this article, unlike previous research focusing on profit maximization and waiting time minimization, our study starts with the bottom line of expected performance degradation due to perturbation. The bottom line is quantified as the minimum acceptable profit and the maximum acceptable waiting time, and the corresponding feasible region is then defined. By comparing the system's actual working performance with the bottom line, the concept of robustness is invoked as a guiding basis for configuring server size and speed within feasible regions, so that the performance of the cloud computing system can be maintained at an acceptable level when perturbed. Subsequently, to improve the robustness of the system as much as possible, we discuss a robustness measurement method. A heuristic optimization algorithm is proposed and compared with other heuristic optimization algorithms to verify its performance. Experimental results show that the magnitude error of our algorithm's solution compared with the most advanced benchmark scheme is on the order of 10⁻⁶, indicating the accuracy of our solution.

RevDate: 2024-12-13

Mou T, Y Liu (2024)

Utilizing the cloud-based satellite platform to explore the dynamics of coastal aquaculture ponds from 1986 to 2020 in Shandong Province, China.

Marine pollution bulletin, 211:117414 pii:S0025-326X(24)01391-2 [Epub ahead of print].

Coastal pond aqua farming is critical in aquaculture and significantly contributes to the seafood supply. Meanwhile, the development of aquaculture ponds also threatens vulnerable wetland resources and coastal ecosystems. Accurate statistics regarding the distribution and variability of coastal pond aquaculture are crucial for balancing the sustainable development of coastal aquaculture with preserving the coastal environment and ecosystems. Satellite imagery offers a valuable tool for detecting spatial-temporal information related to these coastal ponds. However, integrating multiple remote sensing images to acquire comprehensive spatial information about the coastal ponds remains challenging. This study utilized a decision-tree classifier applied to Landsat data to detect the spatial distribution of coastal ponds in Shandong Province from 1986 to 2020, with data analyzed at five-year intervals, primarily based on the Google Earth Engine cloud platform. A pond map for 2020, extracted from Sentinel-2 imagery, was used as a reference map and combined with the results from Landsat data to explore the landscape changes of coastal ponds. The results indicated that Shandong Province's coastal pond area underwent significant expansion before 1990, followed by slower growth from 1990 to 2010 and eventual shrinkage after 2010. Specifically, the pond area expanded from 428.38 km² in 1986 to a peak of 2149.51 km² in 2010 before contracting to 2012.39 km² in 2020. The region near Bohai Bay emerged as the epicenter of Shandong's coastal aquaculture, encompassing 62% of the total pond area in 2020. Government policies previously promoted the expansion of coastal pond farming but later shifted to curbing the uncontrolled development of aquaculture ponds.

RevDate: 2024-12-13
CmpDate: 2024-12-13

Alipio K, García-Colón J, Boscarino N, et al (2025)

Indigenous Data Sovereignty, Circular Systems, and Solarpunk Solutions for a Sustainable Future.

Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 30:717-733.

Recent advancements in Artificial Intelligence (AI) and data center infrastructure have brought the global cloud computing market to the forefront of conversations about sustainability and energy use. Current policy and infrastructure for data centers prioritize economic gain and resource extraction, inherently unsustainable models that generate massive amounts of wasted energy and heat. Our team proposes the formation of policy around earth-friendly computation practices rooted in Indigenous models of circular systems of sustainability. By looking to alternative systems of sustainability rooted in Indigenous values of aloha 'āina, or love for the land, we find examples of traditional ecological knowledge (TEK) that can be imagined alongside Solarpunk visions for a more sustainable future: one in which technology works with the environment, reusing electronic waste (e-waste) and improving data life cycles.

RevDate: 2024-12-13
CmpDate: 2024-12-13

Ramwala OA, Lowry KP, Hippe DS, et al (2025)

ClinValAI: A framework for developing Cloud-based infrastructures for the External Clinical Validation of AI in Medical Imaging.

Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 30:215-228.

Artificial Intelligence (AI) algorithms showcase the potential to steer a paradigm shift in clinical medicine, especially medical imaging. Concerns associated with model generalizability and biases necessitate rigorous external validation of AI algorithms prior to their adoption into clinical workflows. To address the barriers associated with patient privacy, intellectual property, and diverse model requirements, we introduce ClinValAI, a framework for establishing robust cloud-based infrastructures to clinically validate AI algorithms in medical imaging. By featuring dedicated workflows for data ingestion, algorithm scoring, and output processing, we propose an easily customizable method to assess AI models and investigate biases. Our novel orchestration mechanism facilitates utilizing the complete potential of the cloud computing environment. ClinValAI's input auditing and standardization mechanisms ensure that inputs consistent with model prerequisites are provided to the algorithm for a streamlined validation. The scoring workflow comprises multiple steps to facilitate consistent inferencing and systematic troubleshooting. The output processing workflow helps identify and analyze samples with missing results and aggregates final outputs for downstream analysis. We demonstrate the usability of our work by evaluating a state-of-the-art breast cancer risk prediction algorithm on a large and diverse dataset of 2D screening mammograms. We perform comprehensive statistical analysis to study model calibration and evaluate performance on important factors, including breast density, age, and race, to identify latent biases. ClinValAI provides a holistic framework to validate medical imaging models and has the potential to advance the development of generalizable AI models in clinical medicine and promote health equity.

RevDate: 2024-12-15
CmpDate: 2024-12-13

Anderson W, Bhatnagar R, Scollick K, et al (2024)

Real-world evidence in the cloud: Tutorial on developing an end-to-end data and analytics pipeline using Amazon Web Services resources.

Clinical and translational science, 17(12):e70078.

In the rapidly evolving landscape of healthcare and drug development, the ability to efficiently collect, process, and analyze large volumes of real-world data (RWD) is critical for advancing drug development. This article provides a blueprint for establishing an end-to-end data and analytics pipeline in a cloud-based environment. The pipeline presented here includes four major components, including data ingestion, transformation, visualization, and analytics, each supported by a suite of Amazon Web Services (AWS) tools. The pipeline is exemplified through the CURE ID platform, a collaborative tool designed to capture and analyze real-world, off-label treatment administrations. By using services such as AWS Lambda, Amazon Relational Database Service (RDS), Amazon QuickSight, and Amazon SageMaker, the pipeline facilitates the ingestion of diverse data sources, the transformation of raw data into structured formats, the creation of interactive dashboards for data visualization, and the application of advanced machine learning models for data analytics. The described architecture not only supports the needs of the CURE ID platform, but also offers a scalable and adaptable framework that can be applied across various domains to enhance data-driven decision making beyond drug repurposing.
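The ingestion stage of such a pipeline can be sketched as an AWS Lambda handler that lands each incoming record in S3 for downstream transformation; the bucket name, event shape, and key scheme are illustrative assumptions, not details of the CURE ID platform.

```python
# Minimal sketch: a Lambda handler writing a case record to S3 via boto3.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "cureid-raw-ingest"   # hypothetical bucket name

def lambda_handler(event, context):
    # Assumes an API Gateway proxy event whose body is a JSON case record.
    record = json.loads(event["body"])
    key = f"raw/{record['case_id']}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(record).encode("utf-8"),
        ContentType="application/json",
    )
    return {"statusCode": 201, "body": json.dumps({"stored": key})}
```

From there, a transformation job would normalize the raw JSON into relational tables in RDS, with QuickSight dashboards and SageMaker models reading from the curated layer, mirroring the four pipeline components the tutorial describes.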

RevDate: 2024-12-14

Bao H, Yuan M, Deng H, et al (2024)

Secure multiparty computation protocol based on homomorphic encryption and its application in blockchain.

Heliyon, 10(14):e34458.

Blockchain technology is a key technology in the current information field and has been widely used in various industries. Blockchain technology faces significant challenges in privacy protection while ensuring data immutability and transparency, so it is crucial to implement private computing in blockchain. To address the privacy issues in blockchain, we design a secure multi-party computation (SMPC) protocol, DHSMPC, based on homomorphic encryption. On the one hand, homomorphic encryption technology can operate directly on ciphertext, solving the privacy problem in the blockchain. On the other hand, this paper designs the directed decryption function of DHSMPC to resist malicious opponents in the CRS model, so that authorized users who do not participate in the calculation can also access the decryption results of secure multi-party computation. Analytical and experimental results show that DHSMPC has a smaller ciphertext size and stronger performance than existing SMPC protocols. The protocol makes it possible to implement complex calculations in multi-party scenarios and is proven to be resistant to various semi-malicious attacks, ensuring data security and privacy. Finally, this article combines the designed DHSMPC protocol with blockchain and cloud computing, showing how this solution can achieve trusted data management in specific scenarios.

RevDate: 2025-01-04
CmpDate: 2024-12-13

Oh S, Gravel-Pucillo K, Ramos M, et al (2024)

AnVILWorkflow: A runnable workflow package for Cloud-implemented bioinformatics analysis pipelines.

F1000Research, 13:1257.

Advancements in sequencing technologies and the development of new data collection methods produce large volumes of biological data. The Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL) provides a cloud-based platform for democratizing access to large-scale genomics data and analysis tools. However, utilizing the full capabilities of AnVIL can be challenging for researchers without extensive bioinformatics expertise, especially for executing complex workflows. We present the AnVILWorkflow R package, which enables the convenient execution of bioinformatics workflows hosted on AnVIL directly from an R environment. AnVILWorkflow simplifies the setup of the cloud computing environment, input data formatting, workflow submission, and retrieval of results through intuitive functions. We demonstrate the utility of AnVILWorkflow for three use cases: bulk RNA-seq analysis with Salmon, metagenomics analysis with bioBakery, and digital pathology image processing with PathML. The key features of AnVILWorkflow include user-friendly browsing of available data and workflows, seamless integration of R and non-R tools within a reproducible analysis pipeline, and accessibility to scalable computing resources without direct management overhead. AnVILWorkflow lowers the barrier to utilizing AnVIL's resources, especially for exploratory analyses or bulk processing with established workflows. This empowers a broader community of researchers to leverage the latest genomics tools and datasets using familiar R syntax. This package is distributed through the Bioconductor project (https://bioconductor.org/packages/AnVILWorkflow), and the source code is available through GitHub (https://github.com/shbrief/AnVILWorkflow).

RevDate: 2024-12-14
CmpDate: 2024-12-12

Bano S, Abbas G, Bilal M, et al (2024)

PHyPO: Priority-based Hybrid task Partitioning and Offloading in mobile computing using automated machine learning.

PloS one, 19(12):e0314198.

With the increasing demand for mobile computing, the requirement for intelligent resource management has also increased. Cloud computing lessens the energy consumption of user equipment but increases the latency of the system. While edge computing reduces both latency and energy consumption, it has limited resources and cannot process larger tasks. To resolve these issues, a Priority-based Hybrid task Partitioning and Offloading (PHyPO) scheme is introduced in this paper, which prioritizes tasks with high time sensitivity and offloads them intelligently. It also calculates the optimal number of partitions a task can be divided into. Resource utility is maximized and the processing capability of the model increased by using a hybrid architecture consisting of mobile devices, edge servers, and cloud servers. Automated machine learning is used to identify the optimal classification models and tune their hyper-parameters, resulting in adaptive-boosting ensemble learning-based models that reduce the time complexity of the system to O(1). The results of the proposed algorithm show a significant improvement over benchmark techniques, achieving an accuracy of 96.1% for the optimal partitioning model and 94.3% for the optimal offloading model, with both results achieved in significantly less or equal time compared with the benchmark techniques.

RevDate: 2024-12-13

Katapally TR (2024)

It's late, but not too late to transform health systems: a global digital citizen science observatory for local solutions to global problems.

Frontiers in digital health, 6:1399992.

A key challenge in monitoring, managing, and mitigating global health crises is the need to coordinate clinical decision-making with systems outside of healthcare. In the 21st century, human engagement with Internet-connected ubiquitous devices generates an enormous amount of big data, which can be used to address complex, intersectoral problems via participatory epidemiology and mHealth approaches that can be operationalized with digital citizen science. These big data - which traditionally exist outside of health systems - are underutilized even though their usage can have significant implications for prediction and prevention of communicable and non-communicable diseases. To address critical challenges and gaps in big data utilization across sectors, a Digital Citizen Science Observatory (DiScO) is being developed by the Digital Epidemiology and Population Health Laboratory by scaling up existing digital health infrastructure. DiScO's development is informed by the Smart Framework, which leverages ubiquitous devices for ethical surveillance. The Observatory will be operationalized by implementing a rapidly adaptable, replicable, and scalable progressive web application that repurposes jurisdiction-specific cloud infrastructure to address crises across jurisdictions. The Observatory is designed to be highly adaptable for both rapid data collection as well as rapid responses to emerging and existing crises. Data sovereignty and decentralization of technology are core aspects of the observatory, where citizens can own the data they generate, and researchers and decision-makers can re-purpose digital health infrastructure. The ultimate aim of DiScO is to transform health systems by breaking existing jurisdictional silos in addressing global health crises.

RevDate: 2024-12-14

Parente L, Sloat L, Mesquita V, et al (2024)

Annual 30-m maps of global grassland class and extent (2000-2022) based on spatiotemporal Machine Learning.

Scientific data, 11(1):1303.

The paper describes the production and evaluation of global grassland extent mapped annually for 2000-2022 at 30 m spatial resolution. The dataset showing the spatiotemporal distribution of cultivated and natural/semi-natural grassland classes was produced using the GLAD Landsat ARD-2 image archive, accompanied by climatic, landform, and proximity covariates, spatiotemporal machine learning (per-class Random Forest), and over 2.3 M reference samples (visually interpreted in Very High Resolution imagery). Custom probability thresholds (based on five-fold spatial cross-validation) were used to derive dominant class maps with balanced user's and producer's accuracy, resulting in F1 scores of 0.64 and 0.75 for cultivated and natural/semi-natural grassland, respectively. The produced maps (about 4 TB in size) are available under an open data license as Cloud-Optimized GeoTIFFs and as Google Earth Engine assets. The suggested uses of the data include (1) integration with other compatible land cover products and (2) tracking the intensity and drivers of conversion of land to cultivated grasslands and from natural/semi-natural grasslands into other land use systems.

RevDate: 2025-01-04
CmpDate: 2024-12-17

Truong V, Moore JE, Ricoy UM, et al (2024)

Low-Cost Approaches in Neuroscience to Teach Machine Learning Using a Cockroach Model.

eNeuro, 11(12):.

In an effort to increase access to neuroscience education in underserved communities, we created an educational program that utilizes a simple task to measure place preference of the cockroach (Gromphadorhina portentosa) and the open-source free software, SLEAP Estimates Animal Poses (SLEAP) to quantify behavior. Cockroaches (n = 18) were trained to explore a linear track for 2 min while exposed to either air, vapor, or vapor with nicotine from a port on one side of the linear track over 14 d. The time the animal took to reach the port was measured, along with distance traveled, time spent in each zone, and velocity. As characterizing behavior is challenging and inaccessible for nonexperts new to behavioral research, we created an educational program using the machine learning algorithm, SLEAP, and cloud-based (i.e., Google Colab) low-cost platforms for data analysis. We found that SLEAP was within a 0.5% margin of error when compared with manually scoring the data. Cockroaches were found to have an increased aversive response to vapor alone compared with those that only received air. Using SLEAP, we demonstrate that the x-y coordinate data can be further classified into behavior using dimensionality-reducing clustering methods. This suggests that the linear track can be used to examine nicotine preference for the cockroach, and SLEAP can provide a fast, efficient way to analyze animal behavior. Moreover, this educational program is available for free for students to learn a complex machine learning algorithm without expensive hardware to study animal behavior.

RevDate: 2024-12-11
CmpDate: 2024-12-09

Consoli D, Parente L, Simoes R, et al (2024)

A computational framework for processing time-series of earth observation data based on discrete convolution: global-scale historical Landsat cloud-free aggregates at 30 m spatial resolution.

PeerJ, 12:e18585.

Processing large collections of earth observation (EO) time-series, often petabyte-sized, such as NASA's Landsat and ESA's Sentinel missions, can be computationally prohibitive and costly. Despite their name, even the Analysis Ready Data (ARD) versions of such collections can rarely be used as direct input for modeling because of cloud presence and/or prohibitive storage size. Existing solutions for readily using these data are not openly available, are poor in performance, or lack flexibility. Addressing this issue, we developed TSIRF (Time-Series Iteration-free Reconstruction Framework), a computational framework that can be used to apply diverse time-series processing tasks, such as temporal aggregation and time-series reconstruction, by simply adjusting the convolution kernel. As the first large-scale application, TSIRF was employed to process the entire Global Land Analysis and Discovery (GLAD) ARD Landsat archive, producing a cloud-free bi-monthly aggregated product. This process, covering seven Landsat bands globally from 1997 to 2022, with more than two trillion pixels and for each one a time-series of 156 samples in the aggregated product, required approximately 28 hours of computation using 1248 Intel® Xeon® Gold 6248R CPUs. The quality of the result was assessed using a benchmark dataset derived from the aggregated product and comparing different imputation strategies. The resulting reconstructed images can be used as input for machine learning models or to map biophysical indices. To further limit the storage size, the produced data was saved as 8-bit Cloud-Optimized GeoTIFFs (COG). With the hosting of about 20 TB per band/index for an entire 30 m resolution bi-monthly historical time-series distributed as open data, the product enables seamless, fast, and affordable access to the Landsat archive for environmental monitoring and analysis applications.
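The kernel-based aggregation idea can be illustrated with a gap-aware weighted moving average over a per-pixel time-series, where cloudy observations appear as NaNs and changing the kernel changes the aggregation behaviour; the kernel shape and data below are illustrative, not TSIRF's actual implementation.

```python
# Minimal sketch: NaN-aware temporal aggregation by discrete convolution.
import numpy as np

def kernel_aggregate(series: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve, ignoring NaNs (cloud-masked samples) by renormalising weights."""
    valid = ~np.isnan(series)
    filled = np.where(valid, series, 0.0)
    num = np.convolve(filled, kernel, mode="same")
    den = np.convolve(valid.astype(float), kernel, mode="same")
    return np.where(den > 0, num / den, np.nan)

t = np.arange(156)   # e.g., bi-monthly samples spanning 1997-2022
series = np.sin(2 * np.pi * t / 6) \
         + 0.1 * np.random.default_rng(1).normal(size=t.size)
series[::7] = np.nan   # simulate cloud-masked gaps

box = np.ones(5) / 5   # simple aggregation kernel; swap for reconstruction kernels
print(kernel_aggregate(series, box)[:10])
```

Because both the data and the validity mask are convolved with the same kernel, the renormalisation in the last step is what makes the operation "iteration-free" over gaps: no separate infilling pass is needed before aggregation.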

RevDate: 2024-12-11

Chen H, F Al-Turjman (2024)

Cloud-based configurable data stream processing architecture in rural economic development.

PeerJ. Computer science, 10:e2547.

PURPOSE: This study aims to address the limitations of traditional data processing methods in predicting agricultural product prices, which is essential for advancing rural informatization to enhance agricultural efficiency and support rural economic growth.

METHODOLOGY: The RL-CNN-GRU framework combines reinforcement learning (RL), convolutional neural network (CNN), and gated recurrent unit (GRU) to improve agricultural price predictions using multidimensional time series data, including historical prices, weather, soil conditions, and other influencing factors. Initially, the model employs a 1D-CNN for feature extraction, followed by GRUs to capture temporal patterns in the data. Reinforcement learning further optimizes the model, enhancing the analysis and accuracy of multidimensional data inputs for more reliable price predictions.

RESULTS: Testing on public and proprietary datasets shows that the RL-CNN-GRU framework significantly outperforms traditional models in predicting prices, with lower mean squared error (MSE) and mean absolute error (MAE) metrics.

CONCLUSION: The RL-CNN-GRU framework contributes to rural informatization by offering a more accurate prediction tool, thereby supporting improved decision-making in agricultural processes and fostering rural economic development.
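The CNN-plus-GRU backbone described in the METHODOLOGY can be sketched as below (the reinforcement-learning optimisation loop is omitted); the input shape, layer sizes, and training data are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch: a 1D-CNN + GRU regressor for multidimensional price series.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

T, F = 30, 6   # 30 time steps, 6 features (price, weather, soil, ...): assumed
model = tf.keras.Sequential([
    layers.Input(shape=(T, F)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local feature extraction
    layers.GRU(64),                                       # temporal dynamics
    layers.Dense(1),                                      # next-period price
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Synthetic stand-in for the multidimensional agricultural time-series.
X = np.random.rand(256, T, F).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [MSE, MAE] on the toy data
```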

RevDate: 2024-12-11

Ur Rehman A, Lu S, Ashraf MA, et al (2024)

The role of Internet of Things (IoT) technology in modern cultivation for the implementation of greenhouses.

PeerJ. Computer science, 10:e2309.

In recent years, the Internet of Things (IoT) has become one of the most familiar technologies, creating benchmarks and scaling new heights. IoT is indeed the future of communication, having transformed the objects (things) of the real world into smarter devices. With the advent of IoT technology, this decade is witnessing a transformation from traditional agriculture approaches to the most advanced ones. Limited research has been carried out in this direction. Thus, herein we present various technological aspects involved in IoT-based cultivation. The role and the key components of smart farming using IoT were examined, with a focus on network technologies, including layers, protocols, topologies, network architecture, etc. We also delve into the integration of relevant technologies such as cloud computing and big data analytics with IoT-based cultivation. We explored various security issues in modern IoT cultivation and emphasized the importance of safeguarding sensitive agricultural data. Additionally, a comprehensive list of applications based on sensors and mobile devices is provided, offering refined solutions for greenhouse management. The principles and regulations established by different countries for IoT-based cultivation systems are presented, demonstrating the global recognition of these technologies. Furthermore, a selection of successful use cases, real-world scenarios, and applications is presented. Finally, the open research challenges and solutions in modern IoT-based cultivation are discussed.

RevDate: 2024-12-11

Akram A, Anjum F, Latif S, et al (2024)

Honey bee inspired resource allocation scheme for IoT-driven smart healthcare applications in fog-cloud paradigm.

PeerJ. Computer science, 10:e2484.

The Internet of Things (IoT) paradigm is a foundational and integral factor in the development of smart applications in different sectors. These applications comprise a set of interconnected modules that exchange data and realize the distributed data flow (DDF) model. The execution of these modules on distant cloud data centers is prone to quality of service (QoS) degradation. This is where the fog computing philosophy comes in, bridging this gap by bringing computation closer to the IoT devices. However, resource management in fog computing and the optimal allocation of fog devices to application modules are critical for better resource utilization and QoS. A significant challenge in this regard is managing the fog network dynamically to determine cost-effective placement of application modules on resources. In this study, we propose an optimal placement strategy for smart healthcare application modules on fog resources. The objective of this strategy is to ensure optimal execution in terms of latency, bandwidth, and earliest completion time compared with several baseline techniques. A honey-bee-inspired strategy is proposed for allocating and utilizing resources for application module processing. To model the application and measure the effectiveness of our strategy, the Java-based iFogSim simulation classes were extended to conduct experiments, which demonstrate satisfactory results.

RevDate: 2024-12-11

Balaji P, Cengiz K, Babu S, et al (2024)

Metaheuristic optimized complex-valued dilated recurrent neural network for attack detection in internet of vehicular communications.

PeerJ. Computer science, 10:e2366.

The Internet of Vehicles (IoV) is a specialized iteration of the Internet of Things (IoT) tailored to facilitate communication and connectivity among vehicles and their environment. It harnesses the power of advanced technologies such as cloud computing, wireless communication, and data analytics to seamlessly exchange real-time data among vehicles, road-side infrastructure, traffic management systems, and other entities. The primary objectives of this real-time data exchange include enhancing road safety, reducing traffic congestion, boosting traffic flow efficiency, and enriching the driving experience. Through the IoV, vehicles can share information about traffic conditions, weather forecasts, road hazards, and other relevant data, fostering smarter, safer, and more efficient transportation networks. Developing, implementing and maintaining sophisticated techniques for detecting attacks present significant challenges and costs, which might limit their deployment, especially in smaller settings or those with constrained resources. To overcome these drawbacks, this article outlines developing an innovative attack detection model for the IoV using advanced deep learning techniques. The model aims to enhance security in vehicular networks by efficiently identifying attacks. Initially, data is collected from online databases and subjected to an optimal feature extraction process. During this phase, the Enhanced Exploitation in Hybrid Leader-based Optimization (EEHLO) method is employed to select the optimal features. These features are utilized by a Complex-Valued Dilated Recurrent Neural Network (CV-DRNN) to detect attacks within vehicle networks accurately. The performance of this novel attack detection model is rigorously evaluated and compared with that of traditional models using a variety of metrics.

RevDate: 2024-12-07

Ojha S, Paygude P, Dhumane A, et al (2024)

A method to enhance privacy preservation in cloud storage through a three-layer scheme for computational intelligence in fog computing.

MethodsX, 13:103053.

Recent advancements in cloud computing have heightened concerns about data control and privacy due to vulnerabilities in traditional encryption methods, which may not withstand internal attacks from cloud servers. To overcome these issues concerning data privacy and control of transfer on the cloud, a novel three-tier storage model incorporating fog computing has been proposed. This framework leverages the advantages of cloud storage while enhancing data privacy. The approach uses the Hash-Solomon code algorithm to partition data into distinct segments, distributing a portion of it across local machines and fog servers, in addition to cloud storage. This distribution not only increases data privacy but also optimises storage efficiency. Computational intelligence plays a crucial role by calculating the optimal data distribution across cloud, fog, and local servers, ensuring balanced and secure data storage.
• Experimental analysis of this mathematical model has demonstrated a significant improvement in storage efficiency, with increases ranging from 30% to 40% as the volume of data blocks grows.
• This innovative framework based on the Hash-Solomon code method effectively addresses privacy concerns while maintaining the benefits of cloud computing, offering a robust solution for secure and efficient data management.
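To illustrate only the distribution aspect of the three-tier idea, the sketch below splits each data block across cloud, fog, and local tiers; the 70/25/5 ratios are assumptions, and plain byte-slicing is used purely for illustration, since unlike the Hash-Solomon coding described above it is not itself privacy-preserving.

```python
# Minimal sketch: distributing fragments of a data block across three tiers.
def split_block(block: bytes, ratios=(0.70, 0.25, 0.05)):
    n = len(block)
    cut1 = int(n * ratios[0])
    cut2 = cut1 + int(n * ratios[1])
    return {
        "cloud": block[:cut1],      # bulk fragment on cheapest storage
        "fog":   block[cut1:cut2],  # intermediate fragment
        "local": block[cut2:],      # small fragment kept on-premises
    }

parts = split_block(b"0123456789" * 10)
print({tier: len(chunk) for tier, chunk in parts.items()})
# Reassembly recovers the original block only when all three tiers cooperate,
# which is the property the three-layer scheme relies on for privacy.
assert b"".join([parts["cloud"], parts["fog"], parts["local"]]) == b"0123456789" * 10
```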

RevDate: 2024-12-08
CmpDate: 2024-12-05

Valderrama-Landeros L, Troche-Souza C, Alcántara-Maya JA, et al (2024)

An assessment of mangrove forest in northwestern Mexico using the Google Earth Engine cloud computing platform.

PloS one, 19(12):e0315181.

Mangrove forests are commonly mapped using spaceborne remote sensing data due to the challenges of field endeavors in such harsh environments. However, these methods usually require a substantial level of manual processing for each image. Hence, conservation practitioners prioritize using cloud computing platforms to obtain accurate canopy classifications of large extents of mangrove forests. The objective of this study was to analyze the spatial distribution and rate of change (area gain and loss) of the red mangrove (Rhizophora mangle) and other dominant mangrove species, mainly Avicennia germinans and Laguncularia racemosa, between 2015 and 2020 throughout the northwestern coast of Mexico. Bimonthly data of the Combined Mangrove Recognition Index (CMRI) from all available Sentinel-2 data were processed with the Google Earth Engine cloud computing platform. The results indicated an extent of 42865 ha of red mangrove and 139602 ha of other dominant mangrove species in the Gulf of California and the Pacific northwestern coast of Mexico for 2020. The mangrove extent experienced a notable decline of 1817 ha from 2015 to 2020, largely attributed to the expansion of aquaculture ponds and the destructive effects of hurricanes. Considering the two mangrove classes, the overall classification accuracies were 90% and 92% for the 2015 and 2020 maps, respectively. The advantages of the method compared to supervised classifications and traditional vegetation indices are discussed, as are the disadvantages concerning the spatial resolution and the minimum detection area. The work is a national effort to assist in decision-making to prioritize resource allocations for blue carbon, rehabilitation, and climate change mitigation programs.

RevDate: 2024-12-07
CmpDate: 2024-12-05

Khan M, Chao W, Rahim M, et al (2024)

Enhancing green supplier selection: A nonlinear programming method with TOPSIS in cubic Pythagorean fuzzy contexts.

PloS one, 19(12):e0310956.

The advancements in information and communication technologies have given rise to innovative developments such as cloud computing, the Internet of Things, big data analytics, and artificial intelligence. These technologies have been integrated into production systems, transforming them into intelligent systems and significantly impacting the supplier selection process. In recent years, the integration of these cutting-edge technologies with traditional and environmentally conscious criteria has gained considerable attention in supplier selection. This paper introduces a novel Nonlinear Programming (NLP) approach that utilizes the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method to identify the most suitable green supplier within cubic Pythagorean fuzzy (CPF) environments. Unlike existing methods that use either interval-valued PFS (IVPFS) or Pythagorean fuzzy sets (PFS) to represent information, our approach employs cubic Pythagorean fuzzy sets (CPFS), effectively addressing both IVPFS and PFS simultaneously. The proposed NLP models leverage interval weights, relative closeness coefficients (RCC), and weighted distance measurements to tackle complex decision-making problems. To illustrate the accuracy and effectiveness of the proposed selection methodology, we present a real-world case study related to green supplier selection.
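For orientation, the sketch below implements classical (crisp) TOPSIS ranking of suppliers, the method the paper extends; the cubic Pythagorean fuzzy version replaces these crisp values and distances with CPF numbers and interval weights, which is not reproduced here, and the decision matrix is illustrative.

```python
# Minimal sketch: crisp TOPSIS with vector normalisation.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray):
    norm = matrix / np.linalg.norm(matrix, axis=0)    # normalise each criterion
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(0), weighted.min(0))
    anti  = np.where(benefit, weighted.min(0), weighted.max(0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(weighted - anti, axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                    # relative closeness (RCC)

# 4 suppliers x 3 criteria (cost, quality, green score); cost is a cost criterion.
M = np.array([[250., 8.1, 0.7],
              [220., 7.5, 0.9],
              [300., 9.0, 0.6],
              [275., 8.4, 0.8]])
rcc = topsis(M, np.array([0.3, 0.4, 0.3]),
             benefit=np.array([False, True, True]))
print("ranking (best first):", np.argsort(-rcc))
```

The relative closeness coefficient computed in the last step is the same RCC quantity that the paper's nonlinear programming models optimise over interval weights.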

RevDate: 2025-01-10
CmpDate: 2025-01-10

Corrêa Veríssimo G, Salgado Ferreira R, V Gonçalves Maltarollo (2025)

Ultra-Large Virtual Screening: Definition, Recent Advances, and Challenges in Drug Design.

Molecular informatics, 44(1):e202400305.

Virtual screening (VS) in drug design employs computational methodologies to systematically rank molecules from a virtual compound library based on predicted features related to their biological activities or chemical properties. The recent expansion in commercially accessible compound libraries and the advancements in artificial intelligence (AI) and computational power - including enhanced central processing units (CPUs), graphics processing units (GPUs), high-performance computing (HPC), and cloud computing - have significantly expanded our capacity to screen libraries containing over 10⁹ molecules. Herein, we review the concept of ultra-large virtual screening (ULVS), focusing on the various algorithms and methodologies employed for virtual screening at this scale. In this context, we present the software utilized, applications, and results of different approaches, such as brute force docking, reaction-based docking approaches, machine learning (ML) strategies applied to docking or other VS methods, and similarity/pharmacophore search-based techniques. These examples represent a paradigm shift in the drug discovery process, demonstrating not only the feasibility of billion-scale compound screening but also their potential to identify hit candidates and increase the structural diversity of novel compounds with biological activities.

RevDate: 2024-12-07
CmpDate: 2024-12-05

Prasad VK, Verma A, Bhattacharya P, et al (2024)

Revolutionizing healthcare: a comparative insight into deep learning's role in medical imaging.

Scientific reports, 14(1):30273.

Recently, Deep Learning (DL) models have shown promising accuracy in the analysis of medical images. Alzheimer's Disease (AD), a prevalent form of dementia, is assessed using Magnetic Resonance Imaging (MRI) scans, which are then analysed via DL models. To address the models' computational constraints, Cloud Computing (CC) is integrated to operate with the DL models. Recent articles on DL-based MRI have not discussed datasets specific to different diseases, which makes it difficult to build disease-specific DL models. Thus, this article systematically takes a tutorial approach, where we first discuss a classification taxonomy of medical imaging datasets. Next, we present a case study on AD MRI classification using DL methods. We analyse three distinct models - Convolutional Neural Networks (CNN), Visual Geometry Group 16 (VGG-16), and an ensemble approach - for classification and predictive outcomes. In addition, we designed a novel framework that offers insight into how various layers interact with the dataset. Our architecture comprises an input layer, a cloud-based layer responsible for preprocessing and model execution, and a diagnostic layer that issues alerts after successful classification and prediction. According to our simulations, CNN outperformed the other models with a test accuracy of 99.285%, followed by VGG-16 with 85.113%, while the ensemble model lagged with a disappointing test accuracy of 79.192%. Our cloud computing framework serves as an efficient mechanism for medical image processing while safeguarding patient confidentiality and data privacy.

RevDate: 2024-12-05

Tang H, Kong L, Fang Z, et al (2024)

Sustainable and smart rail transit based on advanced self-powered sensing technology.

iScience, 27(12):111306.

As rail transit continues to develop, expanding railway networks increase the demand for sustainable energy supply and intelligent infrastructure management. In recent years, advanced rail self-powered technology has rapidly progressed toward artificial intelligence and the internet of things (AIoT). This review primarily discusses the self-powered and self-sensing systems in rail transit, analyzing their current characteristics and innovative potential in different scenarios. Based on this analysis, we further explore an IoT framework supported by sustainable self-powered sensing systems, including device nodes, network communication, and platform deployment. Additionally, cloud computing and edge computing technologies deployed in railway IoT enable more effective data utilization. Intelligent algorithms such as machine learning (ML) and deep learning (DL) deployed on these platforms can provide comprehensive monitoring, management, and maintenance in railway environments. Furthermore, this study explores research in other cross-disciplinary fields to investigate the potential of emerging technologies and analyze trends for the future development of rail transit.

RevDate: 2024-12-05
CmpDate: 2024-12-03

Asim Shahid M, Alam MM, M Mohd Su'ud (2024)

A fact based analysis of decision trees for improving reliability in cloud computing.

PloS one, 19(12):e0311089.

The popularity of cloud computing (CC) has increased significantly in recent years due to its cost-effectiveness and simplified resource allocation. Owing to the exponential rise of cloud computing in the past decade, many corporations and businesses have moved to the cloud to ensure accessibility, scalability, and transparency. The proposed research involves comparing the accuracy and fault prediction of five machine learning algorithms: AdaBoostM1, Bagging, Decision Tree (J48), Deep Learning (Dl4jMLP), and Naive Bayes Tree (NB Tree). The results from secondary data analysis indicate that, for the Central Processing Unit (CPU)-Mem Multi classifier, the Decision Tree (J48) achieves the highest accuracy percentage and the least fault prediction, with an accuracy rate of 89.71% for 80/20, 90.28% for 70/30, and 92.82% for 10-fold cross-validation. Additionally, the Hard Disk Drive (HDD)-Mono classifier has an accuracy rate of 90.35% for 80/20, 92.35% for 70/30, and 90.49% for 10-fold cross-validation. The AdaBoostM1 classifier was found to have the highest accuracy percentage and the least fault prediction for the HDD Multi classifier, with an accuracy rate of 93.63% for 80/20, 90.09% for 70/30, and 88.92% for 10-fold cross-validation. Finally, the CPU-Mem Mono classifier has an accuracy rate of 77.87% for 80/20, 77.01% for 70/30, and 77.06% for 10-fold cross-validation. Based on the primary data results, the Naive Bayes Tree (NB Tree) classifier is found to have the highest accuracy rate with the least fault prediction: 97.05% for 80/20, 96.09% for 70/30, and 96.78% for 10-fold cross-validation. However, its runtime is comparatively poor, taking 1.01 seconds. On the other hand, the Decision Tree (J48) has the second-highest accuracy rates of 96.78%, 95.95%, and 96.78% for 80/20, 70/30, and 10-fold cross-validation, respectively. J48 also has low fault prediction, with a good runtime of 0.11 seconds. The difference in accuracy and fault prediction between NB Tree and J48 is only 0.9%, but the difference in runtime is 0.9 seconds. Based on these results, we have decided to make modifications to the Decision Tree (J48) algorithm. This method is proposed as it offers the highest accuracy and the fewest fault prediction errors, with 97.05% accuracy for the 80/20 split, 96.42% for the 70/30 split, and 97.07% for 10-fold cross-validation.

RevDate: 2024-12-07
CmpDate: 2024-12-03

Hegde A, Vijaysenan D, Mandava P, et al (2024)

The use of cloud based machine learning to predict outcome in intracerebral haemorrhage without explicit programming expertise.

Neurosurgical review, 47(1):883.

Machine Learning (ML) techniques require computer programming skills along with clinical domain knowledge to produce a useful model. We demonstrate the use of a cloud-based ML tool that does not require any programming expertise to develop, validate, and deploy a prognostic model for Intracerebral Haemorrhage (ICH). Data on patients admitted with spontaneous intracerebral haemorrhage from January 2015 to December 2019 were accessed from our prospectively maintained hospital stroke registry. The cohort comprised 1000 patients, split 8:1:1, with 80% of the dataset used for training, 10% for validation, and 10% for testing. Seventeen input variables were used to predict the dichotomized outcomes (good outcome, mRS 0-3; bad outcome, mRS 4-6) using machine learning (ML) and logistic regression (LR) models. The two approaches were evaluated using the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC), precision-recall, and accuracy. The AUC ROC of the ML model was 0.86 with an accuracy of 75.7%; with LR, the AUC ROC was 0.74 with an accuracy of 73.8%. The feature-importance chart showed that Glasgow Coma Score (GCS) at presentation had the highest relative importance, followed by hematoma volume and age, in both approaches. The machine learning model performed better than logistic regression. Models can be developed using cloud-based tools by clinicians possessing domain expertise and no programming experience, and the models so developed lend themselves to incorporation into clinical workflows.
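
As a point of comparison for the logistic-regression baseline reported above, here is a hedged scikit-learn sketch. The data frame and its three features (GCS, hematoma volume, age, named after the top-ranked features in the abstract) are synthetic stand-ins; the registry data are not public.

    # A hedged sketch of an LR baseline for a dichotomized mRS outcome.
    # The data are simulated; feature names follow the abstract only loosely.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "gcs": rng.integers(3, 16, n),             # GCS at presentation
        "hematoma_volume": rng.uniform(1, 90, n),  # mL
        "age": rng.integers(30, 90, n),
    })
    # Hypothetical dichotomized outcome: 1 = bad (mRS 4-6), 0 = good (mRS 0-3)
    logit = (-0.3 * (df["gcs"] - 9) + 0.04 * df["hematoma_volume"]
             + 0.02 * (df["age"] - 60))
    df["bad_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        df[["gcs", "hematoma_volume", "age"]], df["bad_outcome"],
        test_size=0.1, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("AUC ROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))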

RevDate: 2024-12-05

Bhakhar R, RS Chhillar (2024)

Dynamic multi-criteria scheduling algorithm for smart home tasks in fog-cloud IoT systems.

Scientific reports, 14(1):29957.

The proliferation of Internet of Things (IoT) devices in smart homes has created a demand for efficient computational task management across complex networks. This paper introduces the Dynamic Multi-Criteria Scheduling (DMCS) algorithm, designed to enhance task scheduling in fog-cloud computing environments for smart home applications. DMCS dynamically allocates tasks based on criteria such as computational complexity, urgency, and data size, ensuring that time-sensitive tasks are processed swiftly on fog nodes while resource-intensive computations are handled by cloud data centers. The implementation of DMCS demonstrates significant improvements over conventional scheduling algorithms, reducing makespan, operational costs, and energy consumption. By effectively balancing immediate and delayed task execution, DMCS enhances system responsiveness and overall computational efficiency in smart home environments. However, DMCS also faces limitations, including computational overhead and scalability issues in larger networks. Future research will focus on integrating advanced machine learning algorithms to refine task classification, enhancing security measures, and expanding the framework's applicability to various computing environments. Ultimately, DMCS aims to provide a robust and adaptive scheduling solution capable of meeting the complex requirements of modern IoT ecosystems and improving the efficiency of smart homes.
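
To make the multi-criteria idea concrete, here is a toy placement function in the spirit of DMCS. The scoring weights and the fog/cloud cutoff are invented for illustration and are not the paper's actual algorithm or parameters.

    # A minimal sketch of multi-criteria fog/cloud task placement.
    # Weights and cutoff are hypothetical illustrations of the DMCS idea.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        complexity: float  # normalized 0..1 (CPU demand)
        urgency: float     # normalized 0..1 (1 = most time-sensitive)
        data_size: float   # normalized 0..1 (input size)

    def place(task: Task) -> str:
        """Send urgent, light tasks to fog nodes; heavy ones to the cloud."""
        cloud_score = (0.5 * task.complexity + 0.3 * task.data_size
                       - 0.4 * task.urgency)
        return "cloud" if cloud_score > 0.2 else "fog"

    tasks = [
        Task("motion-alert", complexity=0.1, urgency=0.9, data_size=0.05),
        Task("video-archive-transcode", complexity=0.9, urgency=0.1, data_size=0.8),
    ]
    for t in tasks:
        print(t.name, "->", place(t))

The point of the weighting is simply that urgency pulls a task toward nearby fog nodes while complexity and data size push it toward the data center; the real DMCS criteria are richer than this sketch.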

RevDate: 2024-12-02
CmpDate: 2024-12-02

Wang H, Kong X, Phewnil O, et al (2024)

Spatiotemporal prediction of alpine wetlands under multi-climate scenarios in the west of Sichuan, China.

PeerJ, 12:e18586.

BACKGROUND: The alpine wetlands in western Sichuan are distributed along the eastern section of the Qinghai-Tibet Plateau (QTP), where the ecological environment is fragile and highly sensitive to global climate change. These wetlands are already experiencing severe ecological and environmental issues, such as drought, retrogressive succession, and desertification. However, due to the limitations of computational models, previous studies have been unable to adequately understand the spatiotemporal change trends of these alpine wetlands.

METHODS: Based on the Google Earth Engine cloud computing platform, we employed large training samples and composite supervised classification algorithms to classify alpine wetlands and generate wetland maps. The thematic maps were then grid-sampled for predictive modeling of future wetland changes. Four species distribution models (SDMs), namely BIOCLIM, DOMAIN, MAXENT, and GARP, were introduced. Using the WorldClim dataset as environmental variables, we predicted the future distribution of wetlands in western Sichuan under multiple climate scenarios.
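
For orientation, a minimal sketch of supervised classification on the Google Earth Engine Python API is shown below. The asset ID, region, bands, and the random-forest choice are placeholders, not the authors' actual training samples or composite classifier.

    # Hedged sketch of GEE supervised classification; all IDs are placeholders.
    import ee
    ee.Initialize()

    # Median Sentinel-2 composite over a hypothetical study region
    region = ee.Geometry.Rectangle([99.0, 31.0, 103.0, 34.0])
    image = (ee.ImageCollection("COPERNICUS/S2_SR")
             .filterBounds(region)
             .filterDate("2021-06-01", "2021-09-01")
             .median())
    bands = ["B2", "B3", "B4", "B8"]

    # 'wetland_samples' would be a FeatureCollection of labeled points
    training = image.select(bands).sampleRegions(
        collection=ee.FeatureCollection("users/example/wetland_samples"),
        properties=["class"], scale=10)

    classifier = ee.Classifier.smileRandomForest(100).train(
        features=training, classProperty="class", inputProperties=bands)
    wetland_map = image.select(bands).classify(classifier)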

RESULTS: The Kappa coefficients for Landsat 8 and Sentinel 2 were 0.89 and 0.91, respectively. Among the four SDMs, MAXENT achieved higher accuracy (α = 91.6%) for the actual wetland extent than the thematic overlay analysis. The area under the curve (AUC) values of the MAXENT simulations of wetland spatial distribution were all greater than 0.80, suggesting that incorporating SDMs into land-change simulations generalizes well and offers significant advantages at large scales. Furthermore, the simulations indicate that between 2021 and 2100, with increasing emission concentrations, highly suitable areas for wetland development will exhibit significant spatial differentiation: wetland areas in high-altitude regions are expected to increase, while those in low-altitude regions will markedly shrink. The projected changes in the future spatial distribution of wetlands are highly consistent with historical climate changes, with warming being the main driving force behind the spatiotemporal changes in alpine wetlands in western Sichuan, especially in the central high-altitude and northern low-altitude areas.

RevDate: 2024-12-03
CmpDate: 2024-11-30

Wu SH, TA Mueller (2024)

A user-friendly NoSQL framework for managing agricultural field trial data.

Scientific reports, 14(1):29819.

Field trials are one of the essential stages in agricultural product development, enabling the validation of products in real-world environments rather than controlled laboratory or greenhouse settings. With advances in technology, field trials often collect large amounts of information, with diverse data types, from various sources. Managing and organizing such extensive datasets can be challenging for small research teams, especially when data collection processes evolve between studies, with multiple collaborators and newly introduced data types. A practical database needs to incorporate all these changes seamlessly. We present DynamoField, a flexible database framework for collecting and analyzing field trial data. The backend database for DynamoField is Amazon Web Services DynamoDB, a NoSQL database, and DynamoField also provides an interactive web front end. With the flexibility of the NoSQL database, researchers can modify the database schema based on the data provided by various collaborators and contract research organizations. The framework includes functions for non-technical users, including importing and exporting data, data integration and manipulation, and statistical analysis. Researchers can use cloud computing to establish a secure NoSQL database with minimal maintenance, which also enables worldwide collaboration and adaptation to different data collection strategies as research progresses. DynamoField is implemented in Python and is publicly available at https://github.com/ComputationalAgronomy/DynamoField.
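
The design choice DynamoField leans on is that DynamoDB items in one table need not share a schema. A minimal boto3 sketch illustrates this; the table name, key, and attributes here are hypothetical, not DynamoField's actual layout.

    # Hedged sketch: schema-flexible records in DynamoDB via boto3.
    # Assumes a table named "FieldTrials" with partition key "trial_id" exists.
    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("FieldTrials")

    # Two records with different attribute sets coexist in the same table
    table.put_item(Item={
        "trial_id": "T-2024-001", "plot": "A3",
        "yield_kg_ha": 5120, "treatment": "fungicide-X",
    })
    table.put_item(Item={
        "trial_id": "T-2024-002", "plot": "B1",
        "canopy_ndvi": "0.83", "drone_image_uri": "s3://bucket/plot-b1.tif",
    })

    item = table.get_item(Key={"trial_id": "T-2024-001"})["Item"]
    print(item)

Because each item carries its own attributes, adding a new data type from a new collaborator does not require a schema migration, which is exactly the flexibility the abstract describes.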

RevDate: 2024-11-28
CmpDate: 2024-11-28

Hillebrand FL, Prieto JD, Mendes Júnior CW, et al (2024)

Gray Level Co-occurrence Matrix textural analysis for temporal mapping of sea ice in Sentinel-1A SAR images.

Anais da Academia Brasileira de Ciencias, 96(suppl 2):e20240554 pii:S0001-37652024000401106.

Sea ice is a critical component of the cryosphere and plays a role in heat and moisture exchange between the ocean and atmosphere, thus regulating the global climate. With climate change, detailed monitoring of changes occurring in sea ice is necessary. Therefore, an analysis was conducted to evaluate the potential of Gray Level Co-occurrence Matrix (GLCM) texture analysis combined with the backscattering coefficient (σ°) of HH polarization in Sentinel-1A Synthetic Aperture Radar (SAR) images, acquired in interferometric wide swath imaging mode, for mapping sea ice in time series. Data processing was performed using cloud computing on the Google Earth Engine platform with routines written in JavaScript. To train the Random Forest (RF) classifier, samples of regions with open water and sea ice were obtained through visual interpretation of false-color SAR images from Sentinel-1B in the extra-wide swath imaging mode. The analysis demonstrated that training samples used in the RF classifier from a specific date can be applied to images from other dates within the freezing period, achieving accuracies ≥ 90% when using 64-level grayscale quantization in the GLCM combined with σ° data. However, when using only σ° data in the RF classifier, accuracies ≥ 93% were observed.
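
On Google Earth Engine, GLCM textures can be derived directly from a quantized backscatter band via ee.Image.glcmTexture. The sketch below uses the Python API (the authors worked in JavaScript); the date range, scaling bounds, and selected texture bands are illustrative assumptions.

    # Hedged sketch: GLCM texture bands from Sentinel-1 HH backscatter on GEE.
    import ee
    ee.Initialize()

    s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
          .filterDate("2021-07-01", "2021-07-15")
          .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "HH"))
          .first())
    hh = s1.select("HH")

    # Quantize backscatter (assumed -30..0 dB) to 64 integer gray levels,
    # then compute GLCM textures in a 4-pixel neighborhood
    glcm = hh.unitScale(-30, 0).multiply(63).toInt().glcmTexture(size=4)

    # Stack selected texture bands with the original sigma0 for classification
    stack = hh.addBands(glcm.select(["HH_contrast", "HH_ent"]))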

RevDate: 2024-12-09

Ricotta EE, Bents S, Lawler B, et al (2024)

Search interest in alleged COVID-19 treatments over the pandemic period: the impact of mass news media.

medRxiv : the preprint server for health sciences.

BACKGROUND: Understanding how individuals obtain medical information, especially amid changing guidance, is important for improving outreach and communication strategies. In particular, during a public health emergency, interest in unsafe or illegitimate medications can delay access to appropriate treatments and foster mistrust in the medical system, which can be detrimental at both individual and population levels. It is thus key to understand factors associated with said interest.

METHODS: We obtained US-based Google Search Trends and Media Cloud data from 2019-2022 to assess the relationship between Internet search interest and media coverage of three purported COVID-19 treatments: hydroxychloroquine, ivermectin, and remdesivir. We first conducted anomaly detection in the treatment-specific search interest data to detect periods of interest above the pre-pandemic baseline; we then used multilevel negative binomial regression, controlling for political leaning, rurality, and social vulnerability, to test for associations between treatment-specific search interest and media coverage.
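
As a simplified, single-level illustration of the modeling step (the study's actual model is multilevel), the following statsmodels sketch regresses a hypothetical search-interest count on standardized media coverage and the named covariates; all data here are simulated.

    # Simplified (non-multilevel) negative binomial regression sketch.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "media_coverage_z": rng.normal(size=n),   # standardized media coverage
        "political_leaning": rng.normal(size=n),  # hypothetical covariates
        "rurality": rng.normal(size=n),
        "social_vulnerability": rng.normal(size=n),
    })
    mu = np.exp(1.0 + 0.9 * df["media_coverage_z"])
    df["search_interest"] = rng.poisson(mu)  # count-like outcome

    model = smf.negativebinomial(
        "search_interest ~ media_coverage_z + political_leaning"
        " + rurality + social_vulnerability", data=df).fit()
    # exp(coefficient) is the multiplicative change in expected search
    # interest per 1 SD of media coverage, matching how the paper reports it
    print(np.exp(model.params["media_coverage_z"]))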

FINDINGS: We observed that interest in hydroxychloroquine and remdesivir peaked early in 2020 and then subsided, while peak interest in ivermectin occurred later but was more sustained. We detected significant associations between media coverage and search interest for all three treatments. The strongest association was observed for ivermectin, in which a single standard deviation increase in media coverage was associated with more than double the search interest (164%, 95% CI: 148, 180), compared to a 109% increase (95% CI: 101, 118) for hydroxychloroquine and a 49% increase (95% CI: 43, 55) for remdesivir.

INTERPRETATION: Search interest in purported COVID-19 treatments was significantly associated with contemporaneous media coverage, with the highest impact on interest in ivermectin, a treatment demonstrated to be ineffectual for treating COVID-19 and potentially dangerous if used inappropriately.

FUNDING: This work was funded in part by the US National Institutes of Health and the US National Science Foundation.

RevDate: 2024-11-28

G C S, Koparan C, Upadhyay A, et al (2024)

A novel automated cloud-based image datasets for high throughput phenotyping in weed classification.

Data in brief, 57:111097.

Deep learning-based weed detection data management involves data acquisition, data labeling, model development, and model evaluation phases. Of these, data acquisition and data labeling are labor-intensive and time-consuming steps in building robust models. In addition, low temporal variation of crop and weed in the datasets is one of the limiting factors for effective weed detection model development. This article describes the cloud-based automatic data acquisition system (CADAS), which captures weed and crop images at fixed time intervals so that plant growth stages are taken into account in weed identification. CADAS was developed by integrating fifteen digital cameras in the visible spectrum with gphoto2 libraries, external storage, cloud storage, and a computer running a Linux operating system. The dataset from CADAS contains six weed species and eight crop species for weed and crop detection, and 2000 images per weed and crop species were publicly released. Raw RGB images underwent a cropping process guided by bounding-box annotations to generate individual JPG images for crop and weed instances. In addition to the cropped images, 200 raw images with label files were released publicly. This dataset holds potential for investigating challenges in deep learning-based weed and crop detection in agricultural settings, and researchers could combine it with field data to boost model performance by reducing the data imbalance problem.
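
The cropping step described above is conceptually simple; a minimal sketch with Pillow is shown below. The per-line annotation format (label, x1, y1, x2, y2) is a hypothetical stand-in for the dataset's actual bounding-box files.

    # Minimal sketch: cut plant instances out of raw frames via bounding boxes.
    # The annotation format is assumed, not the dataset's actual format.
    from pathlib import Path
    from PIL import Image

    def crop_instances(image_path: str, annotation_path: str, out_dir: str) -> None:
        img = Image.open(image_path)
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        for i, line in enumerate(Path(annotation_path).read_text().splitlines()):
            label, x1, y1, x2, y2 = line.split()
            crop = img.crop((int(x1), int(y1), int(x2), int(y2)))
            crop.save(Path(out_dir) / f"{label}_{i}.jpg")

    # Example usage with hypothetical file names:
    # crop_instances("plot_0421.jpg", "plot_0421.txt", "cropped/")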

RevDate: 2025-01-04

Geng J, Voitiuk K, Parks DF, et al (2024)

Multiscale Cloud-Based Pipeline for Neuronal Electrophysiology Analysis and Visualization.

bioRxiv : the preprint server for biology.

Electrophysiology offers a high-resolution method for real-time measurement of neural activity. Longitudinal recordings from high-density microelectrode arrays (HD-MEAs) can be of considerable size for local storage and of substantial complexity for extracting neural features and network dynamics. Analysis is often demanding due to the need for multiple software tools with different runtime dependencies. To address these challenges, we developed an open-source cloud-based pipeline to store, analyze, and visualize neuronal electrophysiology recordings from HD-MEAs. The pipeline is dependency agnostic because it relies on cloud storage, cloud computing resources, and an Internet of Things messaging protocol. We containerized the services and algorithms to serve as scalable and flexible building blocks within the pipeline. We applied the pipeline to two types of preparations, cortical organoids and ex vivo brain slices, to show that it simplifies the data analysis process and facilitates understanding of neuronal activity.
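
The abstract does not name the messaging protocol, but MQTT is a common choice for IoT-style pipelines; the following paho-mqtt sketch (broker address, topic, and payload all hypothetical) shows how loosely coupled stages can hand off work through a broker rather than importing each other.

    # Illustrative sketch of broker-mediated hand-off between pipeline stages.
    # Requires paho-mqtt >= 2.0; broker/topic/payload are placeholders.
    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect("broker.example.org", 1883)  # placeholder broker

    # A recording-ingest service might announce a new HD-MEA file like this;
    # downstream analysis workers subscribed to the topic would pick it up.
    message = {"recording_uri": "s3://lab-bucket/organoid_042.raw.h5",
               "sample_rate_hz": 20000, "n_channels": 1024}
    client.publish("ephys/new-recording", json.dumps(message))
    client.disconnect()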

RevDate: 2024-12-07

Papudeshi B, Roach MJ, Mallawaarachchi V, et al (2024)

Sphae: An automated toolkit for predicting phage therapy candidates from sequencing data.

bioRxiv : the preprint server for biology.

MOTIVATION: Phage therapy is a viable alternative for treating bacterial infections amidst the escalating threat of antimicrobial resistance. However, its therapeutic success depends on selecting safe and effective phage candidates. While experimental methods focus on isolating phages and determining their lifecycle and host range, comprehensive genomic screening is critical to identify markers that indicate potential risks, such as toxins, antimicrobial resistance, or temperate lifecycle traits. These analyses are often labor-intensive and time-consuming, limiting the rapid deployment of phages in clinical settings.

RESULTS: We developed Sphae, an automated bioinformatics pipeline designed to assess the therapeutic potential of a phage in under ten minutes. Built on the Snakemake workflow manager, Sphae integrates tools for quality control, assembly, genome assessment, and annotation tailored specifically to phage biology. Sphae automates the detection of key genomic markers, including virulence factors, antimicrobial resistance genes, and lysogeny indicators such as integrase, recombinase, and transposase, any of which could preclude therapeutic use. Benchmarked on 65 phage sequences, 28 phage samples showed therapeutic potential, 8 failed during assembly due to low sequencing depth, 22 samples included prophage or virulence markers, and 23 samples included multiple phage genomes per sample. The workflow outputs a comprehensive report, enabling rapid assessment of phage safety and suitability for phage therapy under these criteria. Sphae is scalable and portable, facilitating efficient deployment across most high-performance computing (HPC) and cloud platforms and expediting the genomic evaluation process.
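
As a toy illustration of the genomic screening Sphae automates, the sketch below scans predicted gene products for the lysogeny markers named in the abstract. The term list and input format are illustrative only, not Sphae's actual rules or data structures.

    # Toy sketch of a lysogeny-marker screen over predicted gene products.
    LYSOGENY_MARKERS = ("integrase", "recombinase", "transposase")

    def flag_phage(annotations: list[str]) -> tuple[bool, list[str]]:
        """Return (is_candidate, hits) for a list of predicted gene products."""
        hits = [a for a in annotations
                if any(m in a.lower() for m in LYSOGENY_MARKERS)]
        return (len(hits) == 0, hits)

    genes = ["major capsid protein", "tail fiber protein", "tyrosine integrase"]
    ok, hits = flag_phage(genes)
    print("therapeutic candidate" if ok else f"excluded, found: {hits}")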

AVAILABILITY: Sphae is open source and freely available at https://github.com/linsalrob/sphae, with installation supported via Conda, PyPI, and Docker containers.
