QUERY RUN: 01 Apr 2025 at 01:41
HITS: 3949

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography, created 01 Apr 2025 at 01:41

Cloud Computing

Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
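The pay-as-you-go arithmetic can be made concrete with a toy estimate. The hourly and egress rates below are hypothetical placeholders, not any provider's actual pricing:

```python
# Illustrative pay-as-you-go cost estimate. The rates are invented
# placeholders, not any cloud provider's real pricing.
def monthly_cost(vm_hours, rate_per_hour, egress_gb, rate_per_gb):
    """Estimate a monthly cloud bill: compute time plus data egress."""
    return vm_hours * rate_per_hour + egress_gb * rate_per_gb

# A steady workload, then the same workload plus a burst period on
# larger (more expensive) instances:
baseline = monthly_cost(vm_hours=720, rate_per_hour=0.10,
                        egress_gb=50, rate_per_gb=0.09)
burst = monthly_cost(vm_hours=200, rate_per_hour=0.40,
                     egress_gb=0, rate_per_gb=0.09)
print(f"baseline: ${baseline:.2f}, with burst: ${baseline + burst:.2f}")
```

The burst period roughly doubles the bill here, which is exactly the kind of surprise a grant-funded budget has to anticipate.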

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
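The same query can be issued programmatically against NCBI's E-utilities `esearch` endpoint. This sketch only constructs the request URL; no network call is made:

```python
# Building an NCBI E-utilities esearch request for the bibliography's
# PubMed query. Constructs the URL only; no network call is made.
from urllib.parse import urlencode

PUBMED_QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
                'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
                'NOT pmcbook NOT ispreviousversion')

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = urlencode({"db": "pubmed", "term": PUBMED_QUERY,
                    "retmax": 100, "retmode": "json"})
url = f"{BASE}?{params}"
print(url)
```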

Citations: The Papers (from PubMed®)


RevDate: 2025-03-31

Yang H, L Jiang (2025)

Regulating neural data processing in the age of BCIs: Ethical concerns and legal approaches.

Digital health, 11:20552076251326123 pii:10.1177_20552076251326123.

Brain-computer interfaces (BCIs) have seen increasingly fast growth with the help of AI, algorithms, and cloud computing. While providing great benefits for both medical and educational purposes, BCIs involve the processing of neural data, which is uniquely sensitive due to its intimate nature, posing distinctive risks and ethical concerns, especially around privacy and safe control of our neural data. To further the protection of human rights such as mental privacy, data laws can provide detailed and enforceable rules for processing neural data, balancing the tension between privacy protection and the public's need for wellness promotion and scientific progress through data sharing. This article notes that most current data laws, such as the GDPR, do not clearly cover neural data and thus cannot provide protection tailored to its special nature. Recent legislative reforms in the U.S. states of Colorado and California made pioneering advances by incorporating neural data into data privacy laws. Yet regulatory gaps remain, as these reforms have not provided additional rules specific to neural data processing. Potential problems such as static consent, vague research exceptions, and loopholes in regulating non-personal neural data need to be further addressed. We recommend improvements through amendments to existing data laws or dedicated data legislation.

RevDate: 2025-03-31

Bai CM, Shu YX, S Zhang (2025)

Authenticable quantum secret sharing based on special entangled state.

Scientific reports, 15(1):10819.

In this paper, a pair of quantum states is constructed based on an orthogonal array and further generalized to multi-body quantum systems. Subsequently, a novel physical process is designed to effectively mask quantum states within multipartite quantum systems. Based on this masker, a new authenticable quantum secret sharing scheme is proposed that can realize a class of special access structures. In the distribution phase, an unknown quantum state is shared safely among multiple participants, and this secret quantum state is embedded into a multi-particle entangled state using the masking approach. In the reconstruction phase, a series of precisely designed measurements and corresponding unitary operations are performed by the participants in the authorized set to restore the original information quantum state. To ensure the security of the scheme, a security analysis against five major types of quantum attacks is conducted. Finally, when compared with other quantum secret sharing schemes based on entangled states, the proposed scheme is found to be not only more flexible but also easier to implement on existing quantum computing cloud platforms.

RevDate: 2025-03-29
CmpDate: 2025-03-28

Davey BC, Billingham W, Davis JA, et al (2023)

Data resource profile: the ORIGINS project databank: a collaborative data resource for investigating the developmental origins of health and disease.

International journal of population data science, 8(1):2388.

INTRODUCTION: The ORIGINS Project ("ORIGINS") is a longitudinal, population-level birth cohort with data and biosample collections that aim to facilitate research to reduce non-communicable diseases (NCDs) and encourage 'a healthy start to life'. ORIGINS has gathered millions of datapoints and over 400,000 biosamples over 15 timepoints, antenatally through to five years of age, from mothers, non-birthing partners and the child, across four health and wellness domains: 'Growth and development', 'Medical, biological and genetic', 'Biopsychosocial and cognitive', 'Lifestyle, environment and nutrition'.

METHODS: Mothers, non-birthing partners and their offspring were recruited antenatally (between 18 and 38 weeks' gestation) from the Joondalup and Wanneroo communities of Perth, Western Australia from 2017 to 2024. Data come from several sources, including routine hospital antenatal and birthing data, ORIGINS clinical appointments, and online self-completed surveys comprising several standardised measures. Data are merged using the Medical Record Number (MRN), the ORIGINS Unique Identifier and the ORIGINS Pregnancy Number, as well as additional demographic data (e.g. name and date of birth) when necessary.
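The identifier-based merging described above can be sketched as a simple keyed join. The records, field names, and values here are invented for illustration; the actual ORIGINS pipeline is far richer:

```python
# Sketch of identifier-based record linkage: joining self-completed
# survey records to hospital records on a shared MRN. All records and
# field names are hypothetical.
hospital = {"MRN001": {"gestation_weeks": 27},
            "MRN002": {"gestation_weeks": 33}}
surveys = [
    {"mrn": "MRN001", "diet_score": 7},
    {"mrn": "MRN003", "diet_score": 5},  # no hospital match -> left unlinked
]

linked, unlinked = [], []
for rec in surveys:
    match = hospital.get(rec["mrn"])
    if match:
        linked.append({**rec, **match})   # merged record
    else:
        unlinked.append(rec)              # flagged for manual/demographic linkage
```

Records that fail the primary-key join would then fall back to demographic matching (name, date of birth), as the methods describe.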

RESULTS: The data are held on an integrated data platform that extracts, links, ingests, integrates and stores ORIGINS' data on an Amazon Web Services (AWS) cloud-based data warehouse. Data are linked, transformed for cleaning and coding, and catalogued, ready to provide to sub-projects (independent researchers that apply to use ORIGINS data) to prepare for their own analyses. ORIGINS maximises data quality by checking and replacing missing and erroneous data across the various data sources.

CONCLUSION: As a wide array of data across several different domains and timepoints has been collected, the options for future research and utilisation of the data and biosamples are broad. As ORIGINS aims to extend into middle childhood, researchers can examine which antenatal and early childhood factors predict middle childhood outcomes. ORIGINS also aims to link to State and Commonwealth data sets (e.g. Medicare, the National Assessment Program - Literacy and Numeracy, the Pharmaceutical Benefits Scheme) which will cater to a wide array of research questions.

RevDate: 2025-03-28
CmpDate: 2025-03-28

Steiner M, F Huettmann (2025)

Moving beyond the physical impervious surface impact and urban habitat fragmentation of Alaska: quantitative human footprint inference from the first large scale 30 m high-resolution Landscape metrics big data quantification in R and the cloud.

PeerJ, 13:e18894.

With increased globalization, man-made climate change, and urbanization, the landscape, embedded within the Anthropocene, becomes increasingly fragmented. With wilderness habitats transitioning and getting lost, globally relevant regions considered 'pristine', such as Alaska, are no exception. Alaska holds 60% of the U.S. National Park system's area and is of national and international importance, considering the U.S. is one of the wealthiest nations on earth. These characteristics tie into densities and quantities of human features, e.g., roads, houses, mines, wind parks, agriculture, trails, etc., that can be summarized as 'impervious surfaces.' These are physical impacts that actively drive urban landscape fragmentation. Using the remote sensing data of the National Land Cover Database (NLCD), we attempt here to create the first quantification of this physical human impact on the Alaskan landscape and its fragmentation. We quantified these impacts using the well-established landscape metrics tool 'Fragstats', implemented as the R package "landscapemetrics", both in desktop software and through the interface of a Linux cloud-computing environment. This workflow makes it possible, for the first time, to overcome the computational limitations of the conventional Fragstats software within a reasonably quick timeframe. We are thereby able to analyze a land area as large as approx. 1,517,733 km² (the state of Alaska) while maintaining a high assessment resolution of 30 m. Based on this traditional methodology, we found that Alaska has a reported physical human impact of c. 0.067%. We additionally overlaid other features that were not included in the input data (e.g., roads, trails, airports, governance boundaries in game management and park units, mines, etc.) to highlight the overall true human impact. We found that remote sensing (human impact layers) underestimates Alaska's human impact to the point of being meaningless.
The state is more seriously fragmented and affected by humans than commonly assumed. Very few areas are truly untouched; the study area displays a high patch density with correspondingly low mean patch sizes throughout. Instead, the true human impact is likely close to 100% throughout Alaska for several metrics. With these newly created insights, we provide the first state-wide landscape data and inference that are likely of considerable importance for land management entities in the state of Alaska and for the U.S. National Park system overall, especially in a changing climate. Likewise, the methodological framework presented here is an Open Access workflow and can be reproduced virtually anywhere else on the planet to assess more realistic large-scale landscape metrics. It can also be used to assess human impacts on the landscape for more sustainable landscape stewardship and mitigation in policy.

RevDate: 2025-03-28

Chaikovsky I, Dziuba D, Kryvova O, et al (2025)

Subtle changes on electrocardiogram in severe patients with COVID-19 may be predictors of treatment outcome.

Frontiers in artificial intelligence, 8:1561079.

BACKGROUND: Two years after the COVID-19 pandemic, it became known that one of the complications of this disease is myocardial injury. Electrocardiography (ECG) and cardiac biomarkers play a vital role in the early detection of cardiovascular complications and risk stratification. The study aimed to investigate the value of a new electrocardiographic metric for detecting minor myocardial injury in patients during COVID-19 treatment.

METHODS: The study was conducted in 2021. A group of 26 patients with verified COVID-19 diagnosis admitted to the intensive care unit for infectious diseases was examined. The severity of a patient's condition was calculated using the NEWS score. The digital ECGs were repeatedly recorded (at the beginning and 2-4 times during the treatment). A total of 240 primary and composite ECG parameters were analyzed for each electrocardiogram. Among these patients, 6 patients died during treatment. Cluster analysis was used to identify subgroups of patients that differed significantly in terms of disease severity (NEWS), SpO2 and integral ECG index (an indicator of the state of the cardiovascular system).

RESULTS: Using analysis of variance (ANOVA repeated measures), a statistical assessment of changes of indicators in subgroups at the end of treatment was given. These subgroup differences persisted at the end of the treatment. To identify potential predictors of mortality, critical clinical and ECG parameters of surviving (S) and non-surviving patients (D) were compared using parametric and non-parametric statistical tests. A decision tree model to classify survival in patients with COVID-19 was constructed based on partial ECG parameters and NEWS score.

CONCLUSION: A comparison of potential mortality predictors showed no significant differences in vital signs between survivors and non-survivors at the beginning of treatment. A set of ECG parameters was identified that were significantly associated with treatment outcomes and may be predictors of COVID-19 mortality: T-wave morphology (SVD), Q-wave amplitude, and R-wave amplitude (lead I).

RevDate: 2025-03-27

Kodumuru R, Sarkar S, Parepally V, et al (2025)

Artificial Intelligence and Internet of Things Integration in Pharmaceutical Manufacturing: A Smart Synergy.

Pharmaceutics, 17(3): pii:pharmaceutics17030290.

Background: The integration of artificial intelligence (AI) with the internet of things (IoT) represents a significant advancement in pharmaceutical manufacturing and effectively bridges the gap between the digital and physical worlds. With AI algorithms integrated into IoT sensors, the production process and quality control improve for better overall efficiency. This integration enables machine learning and deep learning for real-time analysis, predictive maintenance, and automation, continuously monitoring key manufacturing parameters. Objective: This paper reviews the current applications and potential impacts of integrating AI and the IoT, in concert with key enabling technologies like cloud computing and data analytics, within the pharmaceutical sector. Results: Applications discussed herein focus on industrial predictive analytics and quality, underpinned by case studies showing improvements in product quality and reductions in downtime. Yet many challenges remain, including data integration, the ethical implications of AI-driven decisions, and, most of all, regulatory compliance. This review also discusses recent trends, such as AI in drug discovery and blockchain for data traceability, with the intent to outline the future of autonomous pharmaceutical manufacturing. Conclusions: In the end, this review points to basic frameworks and applications that illustrate ways to overcome existing barriers to production with increased efficiency, personalization, and sustainability.

RevDate: 2025-03-26

Hussain A, Aleem M, Ur Rehman A, et al (2025)

DE-RALBA: dynamic enhanced resource aware load balancing algorithm for cloud computing.

PeerJ. Computer science, 11:e2739.

Cloud computing provides an opportunity to gain access to large-scale, high-speed resources without establishing one's own computing infrastructure for executing high-performance computing (HPC) applications. The cloud offers computing resources (i.e., computation power, storage, operating systems, networks, and databases) as a public utility and provides services to end users on a pay-as-you-go model. For the past several years, the efficient utilization of resources on a compute cloud has been of prime interest to the scientific community. One of the key reasons behind inefficient resource utilization is the imbalanced distribution of workload while executing HPC applications in a heterogeneous computing environment. Static scheduling techniques usually produce lower resource utilization and higher makespan, while dynamic scheduling achieves better resource utilization and load balancing by incorporating a dynamic resource pool. Dynamic techniques, however, incur increased overhead by requiring continuous system monitoring, job requirement assessment, and real-time allocation decisions. This additional load has the potential to impact the performance and responsiveness of the computing system. In this article, a dynamic enhanced resource-aware load balancing algorithm (DE-RALBA) is proposed to mitigate load imbalance in job scheduling by considering the computing capabilities of all VMs in cloud computing. Empirical assessments are performed in the CloudSim simulator using instances of two scientific benchmark datasets (i.e., heterogeneous computing scheduling problems (HCSP) instances and the Google Cloud Jobs (GoCJ) dataset). The obtained results reveal that DE-RALBA mitigates load imbalance and provides a significant improvement in makespan and resource utilization over existing algorithms, namely PSSLB, PSSELB, Dynamic MaxMin, and DRALBA. On HCSP instances, DE-RALBA achieves up to 52.35% better resource utilization than the existing technique, with even greater gains on the GoCJ dataset.
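The capability-aware idea behind RALBA-family schedulers can be illustrated with a minimal greedy sketch: each job is assigned to the VM that would complete it earliest given that VM's speed. This is a generic earliest-completion-time heuristic, not the authors' DE-RALBA implementation:

```python
# Generic resource-aware greedy scheduler: place each job on the VM
# with the earliest completion time, accounting for VM speed (MIPS).
# Illustrative only; not the DE-RALBA algorithm itself.
def schedule(jobs_mi, vm_mips):
    """jobs_mi: job sizes in million instructions; vm_mips: VM speeds."""
    finish = [0.0] * len(vm_mips)               # current finish time per VM
    placement = []
    for job in sorted(jobs_mi, reverse=True):   # largest jobs first
        # pick the VM that finishes this job soonest
        best = min(range(len(vm_mips)),
                   key=lambda v: finish[v] + job / vm_mips[v])
        finish[best] += job / vm_mips[best]
        placement.append((job, best))
    return placement, max(finish)               # makespan = latest VM finish

# One fast VM (100 MIPS) and one slow VM (50 MIPS):
placement, makespan = schedule([400, 200, 200, 100], vm_mips=[100, 50])
```

Because job cost is divided by VM speed, the faster VM absorbs proportionally more work, which is the essence of capability-aware load balancing.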

RevDate: 2025-03-26

Ramezani R, Iranmanesh S, Naeim A, et al (2025)

Editorial: Bench to bedside: AI and remote patient monitoring.

Frontiers in digital health, 7:1584443.

RevDate: 2025-03-26

Evangelista JE, Ali-Nasser T, Malek LE, et al (2025)

lncRNAlyzr: Enrichment Analysis for lncRNA Sets.

Journal of molecular biology pii:S0022-2836(25)00004-X [Epub ahead of print].

lncRNAs make up a large portion of the human genome, affecting many biological processes in normal physiology and disease. However, human lncRNAs are understudied compared to protein-coding genes. While there are many tools for performing gene set enrichment analysis for coding genes, few tools exist for lncRNA enrichment analysis. lncRNAlyzr is a webserver application designed for lncRNA enrichment analysis. lncRNAlyzr has a database containing 33 lncRNA set libraries created by computing correlations between lncRNAs and annotated coding gene sets. After users submit a set of lncRNAs to lncRNAlyzr, the enrichment analysis results are visualized as ball-and-stick subnetworks where nodes are lncRNAs connected to enrichment terms from across selected lncRNA set libraries. To demonstrate lncRNAlyzr, it was used to analyze the effects of knocking down the lncRNA CYTOR in K562 cells. Overall, lncRNAlyzr is an enrichment analysis tool for lncRNAs that aims to further our understanding of lncRNA functional modules. lncRNAlyzr is available from: https://lncrnalyzr.maayanlab.cloud.
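The statistic underlying most set-enrichment tools of this kind is the one-sided hypergeometric (Fisher's exact) test. The abstract does not specify lncRNAlyzr's exact statistic, so this is a generic sketch with invented counts:

```python
# One-sided hypergeometric enrichment test: is the overlap between a
# user-submitted set and a library set larger than expected by chance?
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(overlap >= k) when drawing n items from N, of which K are in the set."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# e.g. 10 of 20 submitted lncRNAs fall in a 100-member library set
# out of a 1000-lncRNA background (all numbers invented):
p = hypergeom_pvalue(N=1000, K=100, n=20, k=10)
```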

RevDate: 2025-03-27
CmpDate: 2025-03-26

Sng LMF, Kaphle A, O'Brien MJ, et al (2025)

Optimizing UK biobank cloud-based research analysis platform to fine-map coronary artery disease loci in whole genome sequencing data.

Scientific reports, 15(1):10335.

We conducted the first comprehensive association analysis of a coronary artery disease (CAD) cohort within the recently released UK Biobank (UKB) whole genome sequencing dataset. We employed the fine-mapping tool PolyFun and pinpointed rs10757274 as the most likely causal SNV within the 9p21.3 CAD risk locus. Notably, we show that the machine-learning (ML) approaches REGENIE and VariantSpark exhibited greater sensitivity than traditional single-SNV logistic regression, uncovering rs28451064, a known risk variant in 21q22.11. Our findings underscore the utility of leveraging advanced computational techniques and cloud-based resources for mega-biobank analyses. Aligning with the paradigm shift of bringing compute to data, we demonstrate a 44% cost reduction and 94% speedup through compute architecture optimisation on UK Biobank's Research Analysis Platform using our RAPpoet approach. We discuss three considerations for researchers implementing novel workflows for datasets hosted on cloud platforms, paving the way for harnessing mega-biobank-sized data through scalable, cost-effective cloud computing solutions.

RevDate: 2025-03-20

Madan B, Nair S, Katariya N, et al (2025)

Smart waste management and air pollution forecasting: Harnessing Internet of things and fully Elman neural network.

Waste management & research: the journal of the International Solid Wastes and Public Cleansing Association, ISWA [Epub ahead of print].

As the Internet of things (IoT) continues to transform modern technologies, innovative applications in waste management and air pollution monitoring are becoming critical for sustainable development. In this manuscript, a novel smart waste management (SWM) and air pollution forecasting (APF) system is proposed by leveraging IoT sensors and the fully Elman neural network (FENN) model, termed SWM-APF-IoT-FENN. The system integrates real-time data from waste and air quality sensors, including weight, trash level, odour and carbon monoxide (CO), collected from smart bins connected to a Google Cloud Server. Here, the MaxAbsScaler is employed for data normalization, ensuring consistent feature representation. Subsequently, the atmospheric contaminants surrounding the waste receptacles were observed using a FENN model. This model is utilized to predict the atmospheric concentration of CO and categorize the bin status as filled, half-filled or unfilled. Moreover, the weight parameters of the FENN model are tuned using the secretary bird optimization algorithm for better prediction results. The proposed methodology is implemented in Python, and the performance metrics are analysed. Experimental results demonstrate significant improvements in performance, achieving 15.65%, 18.45% and 21.09% higher accuracy, 18.14%, 20.14% and 24.01% higher F-Measure, 23.64%, 24.29% and 29.34% higher False Acceptance Rate (FAR), 25.00%, 27.09% and 31.74% higher precision, 20.64%, 22.45% and 28.64% higher sensitivity, 26.04%, 28.65% and 32.74% higher specificity, and 9.45%, 7.38% and 4.05% reduced computational time compared to conventional approaches such as the Elman neural network, recurrent artificial neural network, and long short-term memory with gated recurrent unit, respectively. Thus, the proposed method offers a streamlined, efficient framework for real-time waste management and pollution forecasting, addressing critical environmental challenges.
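The MaxAbsScaler normalization mentioned above divides each feature by its maximum absolute value, mapping readings into [-1, 1] while preserving sparsity. A pure-Python sketch of what scikit-learn's MaxAbsScaler does; the sensor readings are invented:

```python
# Max-abs scaling: divide each feature column by its maximum absolute
# value, as scikit-learn's MaxAbsScaler does. Readings are invented.
def max_abs_scale(columns):
    """columns: dict of feature name -> list of raw readings."""
    scaled = {}
    for name, values in columns.items():
        m = max(abs(v) for v in values) or 1.0   # guard against all-zero columns
        scaled[name] = [v / m for v in values]
    return scaled

readings = {"weight_kg": [2.0, 8.0, 4.0], "co_ppm": [5.0, 10.0, 2.5]}
scaled = max_abs_scale(readings)
```

Scaling each sensor channel independently keeps heavy bins (tens of kg) from dominating trace-gas readings (a few ppm) when both feed one network.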

RevDate: 2025-03-20

Isaac RA, Sundaravadivel P, Marx VSN, et al (2025)

Enhanced novelty approaches for resource allocation model for multi-cloud environment in vehicular Ad-Hoc networks.

Scientific reports, 15(1):9472.

As the number of service requests for applications continues to increase under various conditions, limits on the number of resources pose a barrier to providing applications with appropriate Quality of Service (QoS) assurances. As a result, an efficient scheduling mechanism is required to determine the order of handling application requests, as well as the appropriate use of the broadcast medium and data transfer. In this paper, an innovative approach incorporating the Crossover and Mutation (CM)-centered Marine Predator Algorithm (MPA) is introduced for effective resource allocation. This strategic resource allocation optimally schedules resources within the Vehicular Edge Computing (VEC) network, ensuring the most efficient utilization. The proposed method begins with meticulous feature extraction from the vehicular network model, with attributes such as mobility patterns, transmission medium, bandwidth, storage capacity, and packet delivery ratio. For further analysis, the Elephant Herding Lion Optimizer (EHLO) algorithm is employed to pinpoint the most critical attributes. Subsequently, the Modified Fuzzy C-Means (MFCM) algorithm is used for efficient vehicle clustering centred on the selected attributes. These clustered vehicle characteristics are then transferred to and stored within the cloud server infrastructure. The performance of the proposed methodology is evaluated through simulation in MATLAB. This study offers a comprehensive solution to the resource allocation challenge in Vehicular Cloud Networks, addresses the burgeoning demands of modern applications while ensuring QoS assurances, and signifies a significant advancement in the field of VEC.

RevDate: 2025-03-20

Rajavel R, Krishnasamy L, Nagappan P, et al (2025)

Cloud-enabled e-commerce negotiation framework using bayesian-based adaptive probabilistic trust management model.

Scientific reports, 15(1):9457.

Enforcing a trust management model in a broker-based negotiation context is identified as a foremost challenge. Creating such a trust model is not a purely technical issue; rather, the technology should enhance the cloud service negotiation framework to improve the utility value and success rate among the bargaining participants (consumer, broker, and service provider) during their negotiation progression. In existing negotiation frameworks, trust is established using reputation, self-assessment, identity, evidence, and policy-based evaluation techniques to maximize the negotiators' (cloud participants') utility value and success rate. For further maximization, a Bayesian-based adaptive probabilistic trust management model is enforced in the proposed broker-based trusted cloud service negotiation framework. This adaptive model dynamically ranks the service provider agents by estimating success rate, cooperation rate and honesty rate factors to effectively measure trustworthiness among the participants. The measured trustworthiness value is used by the broker agents to prioritize trusted provider agents over non-trusted provider agents, which minimizes bargaining conflict between the participants and enhances future bargaining progression. In addition, the proposed adaptive probabilistic trust management model formulates the sequence of bilateral negotiation processes among the participants as a Bayesian learning process. Finally, the performance of the proposed cloud-enabled e-commerce negotiation framework with the Bayesian-based adaptive probabilistic trust management model is compared with existing frameworks by validating under different levels of negotiation rounds.
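A "Bayesian learning process" for trust is commonly realized as a Beta-Bernoulli update: each provider's success rate is modeled as a Beta distribution whose parameters are bumped after every negotiation outcome. This minimal sketch illustrates that general idea, not the authors' exact model:

```python
# Beta-Bernoulli trust update: a provider's success rate is Beta(alpha, beta),
# updated after each negotiation outcome. Illustrative sketch only.
class TrustModel:
    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0     # Beta(1, 1) = uniform prior

    def update(self, success):
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self):                         # posterior mean success rate
        return self.alpha / (self.alpha + self.beta)

provider = TrustModel()
for outcome in [True, True, True, False]:    # 3 successful negotiations, 1 failed
    provider.update(outcome)
```

A broker could maintain one such model per provider (and per factor: success, cooperation, honesty) and rank providers by posterior mean, with the prior keeping early estimates conservative.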

RevDate: 2025-03-20
CmpDate: 2025-03-20

Savitha C, R Talari (2025)

Evaluating the performance of random forest, support vector machine, gradient tree boost, and CART for improved crop-type monitoring using greenest pixel composite in Google Earth Engine.

Environmental monitoring and assessment, 197(4):437.

The development of machine learning algorithms, along with high-resolution satellite datasets, aids improved agricultural monitoring and mapping. Nevertheless, the use of high-resolution optical satellite datasets is usually constrained by clouds and shadows, which prevent capturing complete crop phenology and thus limit map accuracy. Moreover, the identification of a suitable classification algorithm is essential, as the performance of each machine learning algorithm depends on input datasets, hyperparameter tuning, and training and testing samples, among other factors. To overcome the limitation of clouds and shadow in optical data, this study employs the Sentinel-2 greenest pixel composite to generate a nearly accurate crop-type map for an agricultural watershed in Tadepalligudem, India. To identify a suitable machine learning model, the study also evaluates and compares the performance of four machine learning algorithms: gradient tree boost, classification and regression tree, support vector machine, and random forest (RF). Crop-type maps are generated for two cropping seasons, Kharif and Rabi, in Google Earth Engine (GEE), a robust cloud computing platform. Further, to train and test these algorithms, ground truth data is collected and divided in a 70:30 ratio for training and testing, respectively. The results of the study demonstrate the ability of the greenest pixel composite method to identify and map crop types in small watersheds even during the Kharif season. Further, among the four machine learning algorithms employed, RF is shown to outperform the other classification algorithms in both Kharif and Rabi seasons, with an average overall accuracy of 93.21% and a kappa coefficient of 0.89. Furthermore, the study showcases the potential of the cloud computing platform GEE in enhancing automatic agricultural monitoring through satellite datasets while requiring minimal computational storage and processing.
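The two reported map-accuracy metrics, overall accuracy and Cohen's kappa, are both computed from a confusion matrix. A sketch with an invented 3-class matrix (rows = reference class, columns = predicted class):

```python
# Overall accuracy and Cohen's kappa from a confusion matrix.
# The matrix values below are invented for illustration.
def accuracy_and_kappa(cm):
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n   # observed agreement
    # expected chance agreement from row and column marginals
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(len(cm))) / n**2
    return po, (po - pe) / (1 - pe)                   # (accuracy, kappa)

cm = [[45, 3, 2],    # crop A
      [4, 38, 3],    # crop B
      [1, 2, 52]]    # crop C
acc, kappa = accuracy_and_kappa(cm)
```

Kappa discounts agreement expected by chance, which is why it is reported alongside raw accuracy for classified maps.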

RevDate: 2025-03-19

Ding X, Liu Y, Ning J, et al (2025)

Blockchain-Enhanced Anonymous Data Sharing Scheme for 6G-Enabled Smart Healthcare With Distributed Key Generation and Policy Hiding.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

In recent years, cloud computing has seen widespread application in 6G-enabled smart healthcare, which facilitates the sharing of medical data. Before uploading medical data to a cloud server, numerous data sharing schemes employ attribute-based encryption (ABE) to encrypt the sensitive medical data of the data owner (DO) and only provide access to data users (DU) who meet certain conditions, which can lead to privacy leakage, single points of failure, and related issues. This paper proposes a blockchain-enhanced anonymous data sharing scheme for 6G-enabled smart healthcare with distributed key generation and policy hiding, termed BADS-ABE, which achieves secure and efficient sharing of sensitive medical data. BADS-ABE designs an anonymous authentication scheme based on the Groth signature, which ensures the integrity of medical data and protects the identity privacy of the DO. Meanwhile, BADS-ABE employs smart contracts and Newton interpolation to achieve distributed key generation, which eliminates the single point of failure due to reliance on a trusted authority (TA). Moreover, BADS-ABE achieves policy hiding and matching, which avoids wasting decryption resources and protects the attribute privacy of the DO. Finally, security analysis demonstrates that BADS-ABE meets the security requirements of a data sharing scheme for smart healthcare. Performance analysis indicates that BADS-ABE is more efficient than similar data sharing schemes.
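Newton interpolation over a prime field, the primitive named above for distributed key generation, recovers a shared secret from threshold shares of a hidden polynomial by interpolating and evaluating at x = 0. A toy sketch with invented parameters (the paper's actual protocol is not reproduced here):

```python
# Threshold secret reconstruction via Newton's divided differences over a
# prime field: shares are points on a hidden polynomial; f(0) is the secret.
P = 2**61 - 1  # a Mersenne prime as the field modulus

def recover_secret(shares):
    """shares: list of (x, y) points; returns f(0) by Newton interpolation."""
    xs = [x for x, _ in shares]
    coef = [y % P for _, y in shares]
    # build divided-difference coefficients in place
    for j in range(1, len(shares)):
        for i in range(len(shares) - 1, j - 1, -1):
            inv = pow(xs[i] - xs[i - j], P - 2, P)   # modular inverse (Fermat)
            coef[i] = (coef[i] - coef[i - 1]) * inv % P
    # evaluate the Newton form at x = 0, Horner-style
    result = coef[-1]
    for i in range(len(shares) - 2, -1, -1):
        result = (result * (0 - xs[i]) + coef[i]) % P
    return result

# Toy example: f(x) = 1234 + 56x + 78x^2 mod P; any 3 shares recover f(0).
f = lambda x: (1234 + 56 * x + 78 * x * x) % P
shares = [(1, f(1)), (2, f(2)), (3, f(3))]
```

Because no single party ever holds the whole polynomial, interpolation from a threshold of shares replaces the trusted authority as the key source.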

RevDate: 2025-03-19

Han X, Wang J, Wu J, et al (2025)

Energy-efficient cloud systems: Virtual machine consolidation with Γ-robustness optimization.

iScience, 28(3):111897.

This study addresses the challenge of virtual machine (VM) placement in cloud computing to improve resource utilization and energy efficiency. We propose a mixed integer linear programming (MILP) model incorporating Γ-robustness theory to handle uncertainties in VM usage, optimizing both performance and energy consumption. A heuristic algorithm is developed for large-scale VM allocation. Experiments with Huawei Cloud data demonstrate significant improvements in resource utilization and energy efficiency.
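The paper's Γ-robust MILP is not given in the abstract, but the consolidation idea itself can be illustrated with the standard first-fit-decreasing bin-packing heuristic: pack VMs onto as few hosts as possible so the rest can be powered down. This is a generic sketch, not the authors' algorithm:

```python
# First-fit-decreasing VM consolidation: pack VMs (by CPU demand) onto as
# few hosts as possible so idle hosts can be powered down. Generic
# bin-packing heuristic, not the paper's Γ-robust MILP.
def consolidate(vm_cpu, host_capacity):
    hosts = []                                # each entry: remaining capacity
    placement = {}
    for vm, demand in sorted(vm_cpu.items(), key=lambda kv: -kv[1]):
        for h, free in enumerate(hosts):
            if free >= demand:                # first powered-on host that fits
                hosts[h] -= demand
                placement[vm] = h
                break
        else:                                 # nothing fits: power on a host
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

vms = {"a": 6, "b": 5, "c": 4, "d": 3, "e": 2}   # invented CPU demands
placement, hosts_used = consolidate(vms, host_capacity=10)
```

A robust variant would inflate some of the demands before packing (per Γ-robustness, at most Γ of them deviate to their worst case), trading a little packing density for protection against usage spikes.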

RevDate: 2025-03-19

Sarkar C, Das A, RK Jain (2025)

Development of CoAP protocol for communication in mobile robotic systems using IoT technique.

Scientific reports, 15(1):9269.

This paper proposes a novel design methodology for the Constrained Application Protocol (CoAP) in an IoT-enabled mobile robot system that can be operated remotely and accessed wirelessly. Such devices can be used in different applications such as monitoring, inspection, robotics, healthcare, etc. For communicating with such devices, different IoT frameworks can be deployed to attain secure transmission using protocols such as HTTP, MQTT, and CoAP. In this paper, novel IoT-enabled communication using the CoAP protocol in mobile robotic systems is attempted. A mathematical analysis of the CoAP model is carried out, showing that this protocol provides a faster response in less time and with less power consumption than other protocols. The main advantage of the CoAP protocol is that it facilitates Machine-to-Machine (M2M) communication, with features such as small packet overhead and low power consumption. An experimental prototype has been developed and several trials have been conducted to evaluate the CoAP protocol's performance for rapid communication within the mobile robotic system. Signal strength analysis is also carried out, revealing that the reliability of sending signals is up to 99%. Thus, the application of the CoAP protocol shows enough potential for developing IoT-enabled mobile robotic systems and allied applications.
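The "small packet overhead" claim can be made concrete by encoding a minimal CoAP message following RFC 7252 framing. This sketch handles only the simplest case (empty token, one short Uri-Path option); the path name is invented:

```python
# Minimal CoAP confirmable GET encoder (RFC 7252 framing). Handles only
# an empty token and a single short Uri-Path option (delta, length < 13).
def coap_con_get(message_id, path_segment):
    header = bytes([
        0x40,                      # ver=1, type=0 (CON), token length=0
        0x01,                      # code 0.01 = GET
        (message_id >> 8) & 0xFF,  # message ID, big-endian
        message_id & 0xFF,
    ])
    seg = path_segment.encode()
    assert len(seg) < 13, "extended option lengths not handled in this sketch"
    option = bytes([(11 << 4) | len(seg)]) + seg   # Uri-Path is option number 11
    return header + option

msg = coap_con_get(0x1234, "sensors")   # hypothetical resource path
print(len(msg), "bytes")
```

The whole request is a 4-byte fixed header plus one option byte plus the path itself, versus hundreds of bytes of headers for a comparable HTTP request.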

RevDate: 2025-03-17

Liu G, Lei J, Guo Z, et al (2025)

Lightweight obstacle detection for unmanned mining trucks in open-pit mines.

Scientific reports, 15(1):9028.

This paper aims to solve the difficulty of balancing model size against detection accuracy in obstacle detection networks for unmanned mining trucks in open-pit mines, as well as the problem that existing models are not suitable for deployment on mining truck hardware. To address this, we propose a lightweight vehicle detection model based on improvements to YOLOv8. Through a series of innovative structural adjustments and optimization strategies, the model achieves high accuracy with low complexity. First, the backbone network of YOLOv8s is replaced with the FasterNet_t0 (FN) network, whose simple, highly lightweight structure effectively reduces the model's computation and parameter count. The feature extraction structure of the YOLOv8 neck is then replaced with a BiFPN (Bi-directional Feature Pyramid Network). By adding cross-layer connections and removing nodes that contribute little to feature fusion, the fusion and utilization of features at different scales are optimized, further improving model performance while reducing parameters and computation. To compensate for the accuracy loss that lightweight modifications can cause, the detection head is replaced with Dynamic Head, which introduces a self-attention mechanism across the three dimensions of scale, space, and task, significantly improving detection accuracy without additional computational burden. For the loss function, a combination of SIoU loss and NWD (normalized Gaussian Wasserstein distance) loss is introduced; these adjustments let the model handle different scenarios more accurately, with a particularly marked improvement in detecting small-target mining trucks. In addition, the layer-adaptive magnitude-based sparse pruning algorithm (LAMP) is adopted to further compress the model while maintaining efficient detection performance; through this pruning strategy, the model reduces its dependence on computing resources while preserving key capabilities. For the experiments, a dataset of 3,000 images was constructed and preprocessed, including image enhancement, denoising, cropping, and scaling. The experimental environment was set up on an Autodl cloud server using the PyTorch 2.5.1 framework and Python 3.10. Four sets of ablation experiments verified the specific impact of each improvement on model performance, showing that the lightweight improvement strategy significantly raises detection accuracy while greatly reducing the model's parameters and computation. Finally, a comprehensive comparison of the improved YOLOv8s model against other popular algorithms shows that our model leads in detection accuracy at 76.9%, more than 10 percentage points higher than similar models, while being only about 20% of the size of other models that achieve comparable accuracy. These results demonstrate that the adopted improvement strategy is feasible and offers clear advantages in model efficiency.
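The LAMP pruning the paper adopts is usually formulated as follows: each weight in a layer is scored by its squared magnitude divided by the sum of squared magnitudes of all not-smaller weights in the same layer, and the lowest-scoring weights are pruned globally. A sketch under that standard formulation (an assumption; not the paper's code):

```python
import numpy as np

def lamp_scores(w):
    """LAMP score per weight in one layer: w_i^2 / sum of w_j^2 over all
    weights j with |w_j| >= |w_i|. The largest weight in a layer always
    scores 1.0, making scores comparable across layers for global pruning."""
    w2 = w.ravel() ** 2
    order = np.argsort(w2)                      # ascending by magnitude
    suffix = np.cumsum(w2[order][::-1])[::-1]   # sums over not-smaller weights
    scores = np.empty_like(w2)
    scores[order] = w2[order] / suffix
    return scores.reshape(w.shape)
```

Globally thresholding these scores, rather than raw magnitudes, is what makes the sparsity "layer-adaptive": fragile layers automatically keep more of their weights.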

RevDate: 2025-03-15

Lee H, K Jun (2025)

Range dependent Hamiltonian algorithms for numerical QUBO formulation.

Scientific reports, 15(1):8819.

With the advent and development of quantum computers, various quantum algorithms have been developed that can solve linear equations and eigenvalue problems faster than classical computers. In particular, the hybrid solver provided by D-Wave's Leap quantum cloud service can utilize up to two million variables. Using this technology, quadratic unconstrained binary optimization (QUBO) models have been proposed for linear systems, eigenvalue problems, RSA cryptosystems, and computed tomography (CT) image reconstruction. Generally, a QUBO formulation is obtained through simple arithmetic operations, which offers great potential for future development as quantum computers progress. A common method is to binarize each variable and map it to multiple qubits; to achieve 64-bit accuracy per variable, 64 logical qubits must be used. Finding the global minimum energy in quantum optimization becomes more difficult as more logical qubits are used; thus, a quantum parallel computing algorithm that can create and compute multiple QUBO models is introduced here. The new algorithm divides the entire domain of each variable into multiple subranges to generate QUBO models. This paper demonstrates the superior performance of the new algorithm, particularly when it is combined with an algorithm for binary variables.
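The range-splitting idea (divide each variable's domain into subranges, generate one small QUBO per subrange, keep the best answer) can be sketched as below, with brute-force enumeration standing in for the quantum annealer; the ranges, bit width, and toy objective are all hypothetical.

```python
# Illustrative sketch: one "QUBO" per subrange, solved here by brute force
# in place of an annealer. Ranges, bit width, and objective are toy choices.
import itertools

def best_in_subrange(lo, hi, n_bits, f):
    """Enumerate all n_bit binarizations of x in [lo, hi] and keep the best.
    Fewer bits per subrange means fewer logical qubits per QUBO."""
    step = (hi - lo) / (2 ** n_bits - 1)
    values = (lo + step * sum(b << i for i, b in enumerate(bits))
              for bits in itertools.product((0, 1), repeat=n_bits))
    return min(values, key=f)

f = lambda x: (x - 5.3) ** 2          # toy objective with minimum at 5.3
# Split the full domain [0, 8] into four subranges, one "QUBO" each,
# then take the best candidate across subranges.
candidates = [best_in_subrange(lo, lo + 2.0, 4, f) for lo in (0.0, 2.0, 4.0, 6.0)]
x_star = min(candidates, key=f)
```

Each subrange needs only 4 bits here, yet the combined answer resolves the domain as finely as a single 6-bit model would, which is the point of the decomposition.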

RevDate: 2025-03-14

Weicken E, Mittermaier M, Hoeren T, et al (2025)

[Focus: artificial intelligence in medicine-Legal aspects of using large language models in clinical practice].

Innere Medizin (Heidelberg, Germany) [Epub ahead of print].

BACKGROUND: The use of artificial intelligence (AI) and natural language processing (NLP) methods in medicine, particularly large language models (LLMs), offers opportunities to advance the healthcare system and patient care in Germany. LLMs have recently gained importance, but their practical application in hospitals and practices has so far been limited. Research and implementation are hampered by a complex legal situation. It is essential to research LLMs in clinical studies in Germany and to develop guidelines for users.

OBJECTIVE: How can foundations for the data protection-compliant use of LLMs, particularly cloud-based LLMs, be established in the German healthcare system? The aim of this work is to present the data protection aspects of using cloud-based LLMs in clinical research and patient care in Germany and the European Union (EU); to this end, key statements of a legal opinion on this matter are considered. Insofar as the requirements for use are regulated by state laws (vs. federal laws), the legal situation in Berlin is used as a basis.

MATERIALS AND METHODS: As part of a research project, a legal opinion was commissioned to clarify the data protection aspects of the use of LLMs with cloud-based solutions at the Charité - University Hospital Berlin, Germany. Specific questions regarding the processing of personal data were examined.

RESULTS: The legal framework varies depending on the type of data processing and the relevant federal state (Bundesland). For anonymous data, data protection requirements need not apply. Where personal data is processed, it should be pseudonymized if possible. In the research context, patient consent is usually required to process their personal data, and data processing agreements must be concluded with the providers. Recommendations originating from LLMs must always be reviewed by medical doctors.

CONCLUSIONS: The use of cloud-based LLMs is possible as long as data protection requirements are observed. The legal framework is complex and requires transparency from providers. Future developments could increase the potential of AI and particularly LLMs in everyday clinical practice; however, clear legal and ethical guidelines are necessary.
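Pseudonymization, as recommended in the results above, can be as simple as a keyed one-way mapping from identifiers to tokens. The sketch below is an illustration of the general technique, not a statement of Charité's actual procedure; the key name and token length are arbitrary.

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Deterministic keyed pseudonym: the same ID always maps to the same
    token, but reversing the mapping requires the key holder's lookup.
    Unlike a plain hash, an attacker without the key cannot confirm a
    guessed identifier by recomputing it."""
    digest = hmac.new(secret_key, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

The key must be held by the data controller, separate from the research dataset, so the data remain personal (pseudonymized) for the controller but not re-identifiable by downstream processors.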

RevDate: 2025-03-14

Lv F (2025)

Research on optimization strategies of university ideological and political parenting models under the empowerment of digital intelligence.

Scientific reports, 15(1):8680.

The development of big data, artificial intelligence, cloud computing, and other new generations of intelligent technology has triggered digital change in the resources, forms, and modes of university civic education, becoming a new engine for innovation in the civic education model. A university civic and political education model empowered by digital and intelligent technology can carry innovation through the subjects, content, processes, and scenes of education and move the ideological and political parenting model toward refinement, specialization, and conscientization. Based on a differential game model, this paper comprehensively considers the characteristics of universities, enterprises, and governments and the intertemporal features of their collaborative parenting and innovation behaviors. It constructs no-incentive, cost-sharing, and collaborative cooperation models, and derives optimal trajectories for effort levels, the subsidy coefficient, the optimal benefit function, and the stock of digital and intelligent technology. The conclusions are as follows: (1) resource input cost and technological innovation cost are the key driving variables of university ideological and political parenting; (2) the government's cost subsidy raises the innovation effort of universities and enterprises, achieving Pareto optimality for the three parties; (3) the innovation effort, overall benefit, and technology level of the three parties under the collaborative cooperation model exceed those of the other two models. Finally, the validity of the model is verified through numerical simulation analysis. An in-depth discussion of the digital intelligence-enabled ideological and political parenting model is necessary for the high-quality development of education and helps make ideological and political parenting in the digital age more scientific and practical.

RevDate: 2025-03-13

Alsharabi N, Alayba A, Alshammari G, et al (2025)

An end-to-end four tier remote healthcare monitoring framework using edge-cloud computing and redactable blockchain.

Computers in biology and medicine, 189:109987 pii:S0010-4825(25)00338-5 [Epub ahead of print].

The Medical Internet of Things (MIoTs) encompasses compact, energy-efficient wireless sensor devices designed to monitor patients' body outcomes. Healthcare networks provide constant data monitoring, enabling patients to live independently. Despite advancements in MIoTs, critical issues persist that can affect the Quality of Service (QoS) in the network. The wearable IoT module collects data and stores it on cloud servers, making it vulnerable to privacy breaches and attacks by unauthorized users. To address these challenges, we propose an end-to-end secure remote healthcare framework called the Four Tier Remote Healthcare Monitoring Framework (FTRHMF). This framework comprises multiple entities, including Wireless Body Sensors (WBS), Distributed Gateway (DGW), Distributed Edge Server (DES), Blockchain Server (BS), and Cloud Server (CS). The framework operates in four tiers. In the first tier, WBS and DGW are authenticated to the BS using secret credentials, ensuring privacy and security for all entities. In the second tier, authenticated WBS transmit data to the DGW via a two-level Hybridized Metaheuristic Secure Federated Clustered Routing Protocol (HyMSFCRP), which leverages Mountaineering Team-Based Optimization (MTBO) and Sea Horse Optimization (SHO) algorithms. In the third tier, sensor reports are prioritized and analyzed using Multi-Agent Deep Reinforcement Learning (MA-DRL), with the results fed into the Hybrid-Transformer Deep Learning (HTDL) model. This model combines Lite Convolutional Neural Network and Swin Transformer networks to detect patient outcomes accurately. Finally, in the fourth tier, patients' outcomes are securely stored in a cloud-assisted redactable blockchain layer, allowing modifications without compromising the integrity of the original data. 
Compared with existing works, this research enhances network lifetime by 18.3 %, reduces transmission delays by 15.6 %, and improves classification accuracy by 7.4 %, with a PSNR of 46.12 dB, an SSIM of 0.8894, and an MAE of 22.51.

RevDate: 2025-03-13

Alsaleh A (2025)

Toward a conceptual model to improve the user experience of a sustainable and secure intelligent transport system.

Acta psychologica, 255:104892 pii:S0001-6918(25)00205-7 [Epub ahead of print].

The rapid advancement of automotive technologies has spurred the development of innovative applications within intelligent transportation systems (ITS), aimed at enhancing safety, efficiency and sustainability. These applications, such as advanced driver assistance systems (ADAS), vehicle-to-everything (V2X) communication and autonomous driving, are transforming transportation by enabling adaptive cruise control, lane-keeping assistance, real-time traffic management and predictive maintenance. By leveraging cloud computing and vehicular networks, intelligent transportation solutions optimize traffic flow, improve emergency response systems, and forecast potential collisions, contributing to safer and more efficient roads. This study proposes a Vehicular Cloud-based Intelligent Transportation System (VCITS) model, integrating vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication through roadside units (RSUs) and cloudlets to provide real-time access to cloud resources. A novel search and management protocol, supported by a tailored algorithm, was developed to enhance resource allocation success rates for vehicles within a defined area of interest. The study also identifies critical security vulnerabilities in smart vehicle networks, emphasizing the need for robust solutions to protect data integrity and privacy. The simulation experiments evaluated the VCITS model under various traffic densities and resource request scenarios. Results demonstrated that the proposed model effectively maintained service availability rates exceeding 85 % even under high demand. Furthermore, the system exhibited scalability and stability, with minimal service loss and efficient handling of control messages. These findings highlight the potential of the VCITS model to advance smart traffic management while addressing computational efficiency and security challenges. 
Future research directions include integrating cybersecurity measures and leveraging emerging technologies like 5G and 6G to further enhance system performance and safety.

RevDate: 2025-03-13

Zinchenko A, Fernandez-Gamiz U, Redchyts D, et al (2025)

An efficient parallelization technique for the coupled problems of fluid, gas and plasma mechanics in the grid environment.

Scientific reports, 15(1):8629.

The development of efficient parallelization strategies for numerical simulation methods in fluid, gas and plasma mechanics remains one of the key technological challenges in modern scientific computing. Numerical models of gas and plasma dynamics based on the Navier-Stokes and electrodynamics equations require enormous computational effort, and for such cases parallel and distributed computing has proved effective. A Grid computing environment can provide virtually unlimited computational resources and data storage, convenient task launch and monitoring tools, graphical user interfaces such as web portals, and visualization systems. However, the deployment of traditional CFD solvers in the Grid environment remains very limited because it generally requires a cluster computing architecture. This study explores the applicability of distributed computing and Grid technologies for solving weakly coupled problems of fluid, gas and plasma mechanics, including flow separation control techniques such as plasma actuators that influence the boundary layer structure. Adaptation techniques are presented for the algorithms of coupled computational fluid dynamics and electrodynamics problems for distributed computation on grid and cloud infrastructure. A parallel solver suitable for the Grid infrastructure has been developed, and test calculations in the distributed computing environment are performed. The simulation results for partially ionized separated flow behind a circular cylinder are analysed. The discussion includes performance metrics and an estimate of parallelization effectiveness. The potential of the Grid infrastructure to provide a powerful and flexible computing environment for the fast and efficient solution of weakly coupled problems of fluid, gas and plasma mechanics is demonstrated.
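The weak coupling that makes such problems Grid-friendly can be caricatured as operator splitting: each sub-solver advances independently within a step, and the solvers exchange fields only at step boundaries. A toy sketch with entirely hypothetical update rules (the real solvers integrate Navier-Stokes and electrodynamics equations):

```python
def fluid_step(u, E):
    """Toy flow update driven by the current electric field."""
    return [ui + 0.1 * Ei for ui, Ei in zip(u, E)]

def field_step(E, u):
    """Toy electrodynamics update driven by the freshly advanced flow."""
    return [0.9 * Ei + 0.05 * ui for Ei, ui in zip(E, u)]

def weakly_coupled_solve(u, E, steps):
    """Operator splitting: each solver runs independently within a step and
    the two exchange fields only at step boundaries - the property that lets
    the sub-solvers run on loosely connected Grid resources rather than a
    tightly coupled cluster."""
    for _ in range(steps):
        u = fluid_step(u, E)
        E = field_step(E, u)
    return u, E
```

Because data crosses the solver boundary only once per step, communication latency between Grid nodes is amortized over an entire solver step instead of every inner iteration.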

RevDate: 2025-03-13
CmpDate: 2025-03-13

Puchala S, Muchnik E, Ralescu A, et al (2025)

Automated detection of spreading depolarizations in electrocorticography.

Scientific reports, 15(1):8556.

Spreading depolarizations (SD) in the cerebral cortex are a novel mechanism of lesion development and worse outcomes after acute brain injury, but accurate diagnosis by neurophysiology is a barrier to more widespread application in neurocritical care. Here we developed an automated method for SD detection by training machine-learning models on electrocorticography data from a 14-patient cohort that included 1,548 examples of SD direct-current waveforms as identified in expert manual scoring. As determined by leave-one-patient-out cross-validation, optimal performance was achieved with a gradient-boosting model using 30 features computed from 400-s electrocorticography segments sampled at 0.1 Hz. This model was applied to continuous electrocorticography data by generating a time series of SD probability [PSD(t)], and threshold PSD(t) values to trigger SD predictions were determined empirically. The developed algorithm was then tested on a novel dataset of 10 patients, resulting in 1,252 true positive detections (/1,953; 64% sensitivity) and 323 false positives (6.5/day). Secondary manual review of false positives showed that a majority (224, or 69%) were likely real SDs, highlighting the conservative nature of expert scoring and the utility of automation. SD detection using sparse sampling (0.1 Hz) is optimal for streaming and use in cloud computing applications for neurocritical care.
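The final stage described, turning the probability time series P_SD(t) into discrete detections via empirically chosen thresholds, can be sketched with a simple hysteresis rule. The threshold values below are illustrative, not the paper's:

```python
def detect_events(p, on=0.8, off=0.4):
    """Convert a probability time series into discrete detections with
    hysteresis: trigger when p rises past `on`, and re-arm only after it
    falls below `off`, so noise hovering near a single threshold cannot
    re-trigger the same event."""
    events, active = [], False
    for t, prob in enumerate(p):
        if not active and prob >= on:
            events.append(t)
            active = True
        elif active and prob <= off:
            active = False
    return events
```

At a 0.1 Hz sampling rate each index is a 10 s step, so streaming this rule over cloud-delivered P_SD(t) values is cheap enough for continuous neurocritical-care monitoring.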

RevDate: 2025-03-12

Krishna K (2025)

Advancements in cache management: a review of machine learning innovations for enhanced performance and security.

Frontiers in artificial intelligence, 8:1441250.

Machine learning techniques have emerged as a promising tool for efficient cache management, helping optimize cache performance and fortify systems against security threats. The range of applicable techniques is vast, from reinforcement learning-based cache replacement policies to Long Short-Term Memory (LSTM) models that predict content characteristics for caching decisions. Diverse techniques such as imitation learning, reinforcement learning, and neural networks are widely used in cache-based attack detection, dynamic cache management, and content caching in edge networks. The versatility of machine learning enables it to tackle varied cache management challenges, from adapting to workload characteristics to improving cache hit rates in content delivery networks. A comprehensive review of machine learning approaches for cache management is presented, helping the community learn how machine learning is used to solve practical challenges in cache management. It covers reinforcement learning, deep learning, and imitation learning-driven cache replacement in hardware caches, along with content caching strategies and dynamic cache management using machine learning in cloud and edge computing environments. Machine learning-driven methods to mitigate security threats in cache management are also discussed.
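As a reference point for the learned replacement policies the review covers, the classic LRU baseline can be written in a few lines; learned policies typically keep this structure but replace the eviction rule with a model-predicted reuse distance.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: the ordered dict's front holds the
    least recently touched entry, which is evicted on overflow."""
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
```

A learned policy would replace `popitem(last=False)` with "evict the entry whose predicted next access is farthest away," approximating Belady's optimal offline policy.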

RevDate: 2025-03-12

Alyas T, Abbas Q, Niazi S, et al (2025)

Multi blockchain architecture for judicial case management using smart contracts.

Scientific reports, 15(1):8471.

The infusion of technology across various domains, particularly in process-centric and multi-stakeholder sectors, demands transparency, accuracy, and scalability. This paper introduces a blockchain and smart contract-based framework for judicial case management, proposing a private-to-public blockchain approach to establish a transparent, decentralized, and robust system: a multi-blockchain structure for managing judicial cases through smart contracts that ultimately renders cases more transparent, distributed, and resilient. The solution is innovative because it leverages both private and public blockchains to satisfy the unique requirements of judicial processes, with transparent public access to authorized digital events and transactions on the public blockchain, and a three-tiered private blockchain structure to handle private stakeholder interactions while meeting requirements for operational consistency, security, and data privacy. Leveraging the decentralized and tamper-proof nature of blockchain and cloud computing, the framework aims to increase data security and reduce administrative burdens. It offers a scalable and secure solution for modernizing judicial systems, supporting smart governance's shift toward digital transparency and accountability.
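The tamper-evidence such frameworks rely on comes from hash chaining: each block commits to its predecessor's hash, so editing any past case event invalidates every later block. A minimal sketch of the mechanism (illustrative only, not the paper's smart-contract code; the payload fields are invented):

```python
import hashlib
import json

def _digest(prev, payload):
    """Deterministic hash over the previous hash and this block's payload."""
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_block(chain, payload):
    """Append a case event; each block commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload,
                  "hash": _digest(prev, payload)})
    return chain

def verify(chain):
    """Recompute every hash; any edited past event breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != _digest(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True
```

In a private-to-public design, only the latest private-chain hash needs anchoring on the public chain: the public record then attests to the entire private history without exposing it.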

RevDate: 2025-03-11

Bedia SV, Shapurwala MA, Kharge BP, et al (2025)

A Comprehensive Guide to Implement Artificial Intelligence Cloud Solutions in a Dental Clinic: A Review.

Cureus, 17(2):e78718.

Integrating the artificial intelligence (AI) cloud into dental clinics can enhance diagnostics, streamline operations, and improve patient care. This article explores the adoption of AI-powered cloud solutions in dental clinics, focusing on infrastructure requirements, software licensing, staff training, system optimization, and the challenges faced during implementation. It provides a detailed guide for dental practices to transition to AI cloud systems. We reviewed existing literature, technological guidelines, and practical implementation strategies for integrating AI cloud in dental practices. The methodology includes a step-by-step approach to understanding clinic needs, selecting appropriate software, training staff, and ensuring system optimization and maintenance. Integrating AI cloud solutions can drastically improve clinical outcomes and operational efficiency. Despite the challenges, proper planning, infrastructure investment, and continuous training can ensure a smooth transition and maximize the benefits of AI technologies in dental care.

RevDate: 2025-03-10

Alshardan A, Mahgoub H, Alahmari S, et al (2025)

Cloud-to-Thing continuum-based sports monitoring system using machine learning and deep learning model.

PeerJ. Computer science, 11:e2539.

Sports monitoring and analysis have seen significant advancements by integrating cloud computing and continuum paradigms facilitated by machine learning and deep learning techniques. This study presents a novel approach for sports monitoring, specifically focusing on basketball, that seamlessly transitions from traditional cloud-based architectures to a continuum paradigm, enabling real-time analysis and insights into player performance and team dynamics. Leveraging machine learning and deep learning algorithms, our framework offers enhanced capabilities for player tracking, action recognition, and performance evaluation in various sports scenarios. The proposed Cloud-to-Thing continuum-based sports monitoring system utilizes advanced techniques such as Improved Mask R-CNN for pose estimation and a hybrid metaheuristic algorithm combined with a generative adversarial network (GAN) for classification. Our system significantly improves latency and accuracy, reducing latency to 5.1 ms and achieving an accuracy of 94.25%, which outperforms existing methods in the literature. These results highlight the system's ability to provide real-time, precise, and scalable sports monitoring, enabling immediate feedback for time-sensitive applications. This research has significantly improved real-time sports event analysis, contributing to improved player performance evaluation, enhanced team strategies, and informed tactical adjustments.

RevDate: 2025-03-10

Rajagopal D, PKT Subramanian (2025)

AI augmented edge and fog computing for Internet of Health Things (IoHT).

PeerJ. Computer science, 11:e2431.

Patients today seek a more advanced and personalized health-care system that keeps up with the pace of modern living. Cloud computing delivers resources over the Internet and enables the deployment of a vast number of applications to provide services to many sectors. The primary limitation of these cloud frameworks is their limited scalability, which leaves them unable to meet these needs. An edge/fog computing environment, paired with current computing techniques, is the answer to fulfilling the energy efficiency and latency requirements for the real-time collection and analysis of health data. Additionally, the Internet of Things (IoT) revolution has been essential in changing contemporary healthcare systems by integrating social, economic, and technological perspectives. This requires transitioning from conventional healthcare systems to more adaptive ones that allow patients to be identified, managed, and evaluated more easily. These techniques allow data from many sources to be integrated to effectively assess patient health status and predict potential preventive actions. A subset of the Internet of Things, the Internet of Health Things (IoHT) enables the remote exchange of data for physical processes like patient monitoring, treatment progress, observation, and consultation. Previous healthcare surveys mainly focused on architecture and networking, leaving untouched important aspects of smart systems such as optimal computing techniques (artificial intelligence and deep learning) and advanced technologies and services that include 5G and unified communication as a service (UCaaS).
This study aims to examine existing and future fog and edge computing architectures and methods augmented with artificial intelligence (AI) for use in healthcare applications, and to define the demands and challenges of incorporating fog and edge computing technology in the IoHT, thereby helping healthcare professionals and technicians identify the technologies they need to develop IoHT frameworks for remote healthcare. Among the crucial elements to consider in an IoHT framework are efficient resource management, low latency, and strong security. This review addresses several machine learning techniques for efficient resource management in the IoT, where machine learning (ML) and AI are crucial. It also notes how modern technologies, such as narrowband IoT (NB-IoT) for wider coverage and blockchain technology for security, are transforming the IoHT. The last part of the review focuses on the future challenges posed by advanced technologies and services. This study provides prospective research suggestions for enhancing edge and fog computing services for healthcare with modern technologies in order to provide patients with an improved quality of life.

RevDate: 2025-03-07
CmpDate: 2025-03-07

Parciak M, Pierlet N, LM Peeters (2025)

Empowering Health Care Actors to Contribute to the Implementation of Health Data Integration Platforms: Retrospective of the medEmotion Project.

Journal of medical Internet research, 27:e68083 pii:v27i1e68083.

Health data integration platforms are vital to drive collaborative, interdisciplinary medical research projects. Developing such a platform requires input from different stakeholders. Managing these stakeholders and steering platform development is challenging, and misaligning the platform to the partners' strategies might lead to a low acceptance of the final platform. We present the medEmotion project, a collaborative effort among 7 partners from health care, academia, and industry to develop a health data integration platform for the region of Limburg in Belgium. We focus on the development process and stakeholder engagement, aiming to give practical advice for similar future efforts based on our reflections on medEmotion. We introduce Personas to paraphrase different roles that stakeholders take and Demonstrators that summarize personas' requirements with respect to the platform. Both the personas and the demonstrators serve 2 purposes. First, they are used to define technical requirements for the medEmotion platform. Second, they represent a communication vehicle that simplifies discussions among all stakeholders. Based on the personas and demonstrators, we present the medEmotion platform based on components from the Microsoft Azure cloud. The demonstrators are based on real-world use cases and showcase the utility of the platform. We reflect on the development process of medEmotion and distill takeaway messages that will be helpful for future projects. Investing in community building, stakeholder engagement, and education is vital to building an ecosystem for a health data integration platform. Ideally, the health care providers themselves, rather than academic-led projects, drive the collaboration among providers. The providers are best positioned to address hospital-specific requirements, while academics take a neutral mediator role. This also includes the ideation phase, where it is vital to ensure the involvement of all stakeholders.
Finally, balancing innovation with implementation is key to developing an innovative yet sustainable health data integration platform.

RevDate: 2025-03-06

Lee H, Kim W, Kwon N, et al (2025)

Lessons from national biobank projects utilizing whole-genome sequencing for population-scale genomics.

Genomics & informatics, 23(1):8.

Large-scale national biobank projects utilizing whole-genome sequencing (WGS) have emerged as transformative resources for understanding human genetic variation and its relationship to health and disease. These initiatives, which include the UK Biobank, All of Us Research Program, Singapore's PRECISE, Biobank Japan, and the National Project of Bio-Big Data of Korea, are generating unprecedented volumes of high-resolution genomic data integrated with comprehensive phenotypic, environmental, and clinical information. This review examines the methodologies, contributions, and challenges of major WGS-based national genome projects worldwide. We first discuss the landscape of national biobank initiatives, highlighting their distinct approaches to data collection, participant recruitment, and phenotype characterization. We then introduce recent technological advances that enable efficient processing and analysis of large-scale WGS data, including improvements in variant calling algorithms, innovative methods for creating multi-sample VCFs, optimized data storage formats, and cloud-based computing solutions. The review synthesizes key discoveries from these projects, particularly in identifying expression quantitative trait loci and rare variants associated with complex diseases. Our review introduces the latest findings from the National Project of Bio-Big Data of Korea, which has advanced our understanding of population-specific genetic variation and rare diseases in Korean and East Asian populations. Finally, we discuss future directions and challenges in maximizing the impact of these resources on precision medicine and global health equity. This comprehensive examination demonstrates how large-scale national genome projects are revolutionizing genetic research and healthcare delivery while highlighting the importance of continued investment in diverse, population-specific genomic resources.

RevDate: 2025-03-06

Zhang G (2025)

Cloud computing convergence: integrating computer applications and information management for enhanced efficiency.

Frontiers in big data, 8:1508087.

This study examines the transformative impact of cloud computing on the integration of computer applications and information management systems to improve operational efficiency. Grounded in a robust methodological framework, the research employs experimental testing and comparative data analysis to assess the performance of an information management system within a cloud computing environment. Data was meticulously collected and analyzed, highlighting a threshold where user demand surpasses 400, leading to a stabilization in CPU utilization at an optimal level and maintaining subsystem response times consistently below 5 s. This comprehensive evaluation underscores the significant advantages of cloud computing, demonstrating its capacity to optimize the synergy between computer applications and information management. The findings not only contribute to theoretical advancements in the field but also offer actionable insights for organizations seeking to enhance efficiency through effective cloud-based solutions.

RevDate: 2025-03-06

Saeedbakhsh S, Mohammadi M, Younesi S, et al (2025)

Using Internet of Things for Child Care: A Systematic Review.

International journal of preventive medicine, 16:3.

BACKGROUND: In smart cities, prioritizing child safety through affordable technology like the Internet of Things (IoT) is crucial for parents. This study seeks to investigate different IoT tools that can prevent and address accidents involving children. The goal is to alleviate the emotional and financial toll of such incidents due to their high mortality rates.

METHODS: This study considers articles published in English that use IoT for children's healthcare. PubMed, Science Direct, and Web of Science were used as the searchable databases. 273 studies were retrieved in the initial search. After eliminating duplicate records, studies were assessed against inclusion and exclusion criteria. Titles and abstracts were reviewed for relevance, and articles not meeting the criteria were excluded. Finally, 29 articles met the criteria for inclusion in this study.

RESULTS: The study reveals that India is at the forefront of IoT research for children, followed by Italy and China. Studies mainly occur indoors, utilizing wearable sensors like heart rate, motion, and tracking sensors. Biosignal sensors and technologies such as Zigbee and image recognition are commonly used for data collection and analysis. Diverse approaches, including cloud computing and machine vision, are applied in this innovative field.

CONCLUSIONS: In conclusion, IoT for children is mainly seen in developed countries like India, Italy, and China. Studies focus on indoor use, using wearable sensors for heart rate monitoring. Biosignal sensors and various technologies like Zigbee, Kinect, image recognition, RFID, and robots contribute to enhancing children's well-being.

RevDate: 2025-03-05

Efendi A, Ammarullah MI, Isa IGT, et al (2025)

IoT-Based Elderly Health Monitoring System Using Firebase Cloud Computing.

Health science reports, 8(3):e70498 pii:HSR270498.

BACKGROUND AND AIMS: The increasing elderly population presents significant challenges for healthcare systems, necessitating innovative solutions for continuous health monitoring. This study develops and validates an IoT-based elderly monitoring system designed to enhance the quality of life for elderly people. The system features a robust Android-based user interface integrated with the Firebase cloud platform, ensuring real-time data collection and analysis. In addition, a supervised machine learning technology is implemented to conduct prediction task of the observed user whether in "stable" or "not stable" condition based on real-time parameter.

METHODS: The system architecture adopts the IoT layers, including the physical layer, network layer, and application layer. Device validation was conducted with six participants by measuring real-time heart-rate, oxygen-saturation, and body-temperature data, which were then analysed with the mean absolute percentage error (MAPE) to quantify the error rate. A comparative experiment was conducted to determine the optimal supervised machine learning model to deploy into the system by analysing evaluation metrics. Meanwhile, user satisfaction was evaluated in terms of usability, comfort, security, and effectiveness.
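As a hedged illustration of the validation metric used above (not the authors' code; the readings below are hypothetical), MAPE over paired device and reference measurements can be computed as:

```python
# Illustrative sketch: per-parameter MAPE for device validation,
# assuming paired device/reference readings.

def mape(device, reference):
    """Mean absolute percentage error, in percent."""
    if len(device) != len(reference) or not device:
        raise ValueError("need equal-length, non-empty series")
    return 100.0 * sum(abs(d - r) / r for d, r in zip(device, reference)) / len(device)

# Hypothetical heart-rate readings (bpm): wearable vs. reference monitor.
hr_device = [72.0, 80.5, 65.0, 90.0]
hr_reference = [73.0, 80.0, 66.0, 89.0]
print(round(mape(hr_device, hr_reference), 2))  # → 1.16
```

The per-parameter MAPEs reported in the abstract would then be averaged to give the overall system figure.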

RESULTS: The IoT-based elderly health monitoring system achieved a MAPE of 0.90% across the parameters: heart rate (1.68%), oxygen saturation (0.57%), and body temperature (0.44%). The machine learning experiment indicates that the XGBoost model has the optimal performance, with an accuracy of 0.973 and an F1 score of 0.970. User satisfaction, assessed in terms of usability, comfort, security, and effectiveness, achieved a high rating of 86.55%.

CONCLUSION: This system offers practical applications for both elderly users and caregivers, enabling real-time monitoring of health conditions. Future enhancements may include integration with artificial intelligence technologies such as machine learning and deep learning to predict health conditions from data patterns, further improving the system's capabilities and effectiveness in elderly care.

RevDate: 2025-03-05
CmpDate: 2025-03-05

Duan S, Yong R, Yuan H, et al (2024)

Automated Offline Smartphone-Assisted Microfluidic Paper-Based Analytical Device for Biomarker Detection of Alzheimer's Disease.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2024:1-5.

This paper presents a smartphone-assisted microfluidic paper-based analytical device (μPAD), which was applied to detect Alzheimer's disease biomarkers, especially in resource-limited regions. This device implements deep learning (DL)-assisted offline smartphone detection, eliminating the requirement for large computing devices and cloud computing power. In addition, a smartphone-controlled rotary valve enables a fully automated colorimetric enzyme-linked immunosorbent assay (c-ELISA) on μPADs. It reduces detection errors caused by human operation and further increases the accuracy of μPAD c-ELISA. We realized a sandwich c-ELISA targeting β-amyloid peptide 1-42 (Aβ 1-42) in artificial plasma, and our device provided a detection limit of 15.07 pg/mL. We collected 750 images for the training of the DL YOLOv5 model. The training accuracy is 88.5%, which is 11.83% higher than the traditional curve-fitting result analysis method. Utilizing the YOLOv5 model with the NCNN framework facilitated offline detection directly on the smartphone. Furthermore, we developed a smartphone application to operate the experimental process, realizing user-friendly rapid sample detection.

RevDate: 2025-03-05
CmpDate: 2025-03-05

Delannes-Molka D, Jackson KL, King E, et al (2024)

Towards Markerless Motion Estimation of Human Functional Upper Extremity Movement.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2024:1-7.

Markerless motion capture of human movement is a potentially useful approach for providing movement scientists and rehabilitation specialists with a portable and low-cost method for measuring functional upper extremity movement. This is in contrast with optical and inertial motion capture systems, which often require specialized equipment and expertise to use. Existing methods for markerless motion capture have focused on inferring 2D or 3D keypoints on the body and estimating volumetric representations, both using RGB-D. The keypoints and volumes are then used to compute quantities like joint angles and velocity magnitude over time. However, these methods do not have sufficient accuracy to capture fine human motions and, as a result, have largely been restricted to capturing gross movements and rehabilitation games. Furthermore, most of these methods have not used depth images to estimate motion directly. This work proposes using the depth images from an RGB-D camera to compute the upper extremity motion directly by segmenting the upper extremity into components of a kinematic chain, estimating the motion of the rigid portions (i.e., the upper and lower arm) using ICP or Distance Transform across sequential frames, and computing the motion of the end-effector (e.g., wrist) relative to the torso. Methods with data from both the Microsoft Azure Kinect Camera and 9-camera OptiTrack Motive motion capture system (Mocap) were compared. Point Cloud methods performed comparably to Mocap on tracking rotation and velocity of a human arm and could be an affordable alternative to Mocap in the future. While the methods were tested on gross motions, future works would include refining and evaluating these methods for fine motion.

RevDate: 2025-03-04
CmpDate: 2025-03-05

Alshemaimri B, Badshah A, Daud A, et al (2025)

Regional computing approach for educational big data.

Scientific reports, 15(1):7619.

The educational landscape is witnessing a transformation with the integration of Educational Technology (Edutech). As educational institutions adopt digital platforms and tools, the generation of Educational Big Data (EBD) has significantly increased. Research indicates that educational institutions produce massive amounts of data, including student enrollment records, academic performance metrics, attendance records, learning activities, and interactions within digital learning environments. This influx of data needs efficient processing to derive actionable insights and enhance the learning experience. Real-time data processing plays a critical part in educational environments, supporting functions such as personalized learning, adaptive assessment, and administrative decision-making. However, sending large amounts of educational data to cloud servers poses challenges such as latency, cost, and network congestion. These challenges make it more difficult to provide educators and students with timely insights and services, which reduces the efficiency of educational activities. This paper proposes a Regional Computing (RC) paradigm designed specifically for big data management in education to address these issues. In this approach, RC is established within educational regions and intended to decentralize data processing. To reduce dependency on cloud infrastructure, these regional servers are strategically located to collect, process, and store education-related big data regionally. Our investigation results show that RC significantly reduces latency to 203.11 ms for 2,000 devices, compared to 707.1 ms in Cloud Computing (CC). It is also more cost-efficient, with a total cost of just 1.14 USD versus 5.36 USD in the cloud. Furthermore, it avoids the 600% congestion surges seen in cloud setups and maintains consistent throughput under high workloads, establishing RC as the optimal solution for managing EBD.
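As a quick arithmetic check, the latency and cost figures reported in the abstract imply the following relative reductions (the helper function is illustrative, not from the paper):

```python
# Relative reductions implied by the reported figures:
# latency 203.11 ms (RC) vs 707.1 ms (CC); cost 1.14 USD vs 5.36 USD.

def pct_reduction(cloud, regional):
    """Percentage reduction of the regional value relative to the cloud value."""
    return 100.0 * (cloud - regional) / cloud

latency_cut = pct_reduction(707.1, 203.11)
cost_cut = pct_reduction(5.36, 1.14)
print(f"latency cut: {latency_cut:.1f}%  cost cut: {cost_cut:.1f}%")
# → latency cut: 71.3%  cost cut: 78.7%
```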

RevDate: 2025-03-03

Verdet A, Hamdaqa M, Silva LD, et al (2025)

Assessing the adoption of security policies by developers in terraform across different cloud providers.

Empirical software engineering, 30(3):74.

Cloud computing has become popular thanks to the widespread use of Infrastructure as Code (IaC) tools, allowing the community to manage and configure cloud infrastructure using scripts. However, the scripting process does not automatically prevent practitioners from introducing misconfigurations, vulnerabilities, or privacy risks. As a result, ensuring security relies on practitioners' understanding and the adoption of explicit policies. To understand how practitioners deal with this problem, we perform an empirical study analyzing the adoption of scripted security best practices present in Terraform files, applied on AWS, Azure, and Google Cloud. We assess the adoption of these practices by analyzing a sample of 812 open-source GitHub projects. We scan each project's configuration files, looking for policy implementation through static analysis (Checkov and Tfsec). The category Access policy emerges as the most widely adopted in all providers, while Encryption at rest presents the most neglected policies. Regarding the cloud providers, we observe that AWS and Azure present similar behavior regarding attended and neglected policies. Finally, we provide guidelines for cloud practitioners to limit infrastructure vulnerability and discuss further aspects associated with policies that have yet to be extensively embraced within the industry.

RevDate: 2025-03-02

Zhang A, Tariq A, Quddoos A, et al (2025)

Spatio-temporal analysis of urban expansion and land use dynamics using google earth engine and predictive models.

Scientific reports, 15(1):6993.

Urban expansion and changes in land use/land cover (LULC) have intensified in recent decades due to human activity, influencing ecological and developmental landscapes. This study investigated historical and projected LULC changes and urban growth patterns in the districts of Multan and Sargodha, Pakistan, using Landsat satellite imagery, cloud computing, and predictive modelling from 1990 to 2030. The analysis of satellite images was grouped into four time periods (1990-2000, 2000-2010, 2010-2020, and 2020-2030). The Google Earth Engine cloud-based platform facilitated the classification of Landsat 5 ETM (1990, 2000, and 2010) and Landsat 8 OLI (2020) images using the Random Forest model. A simulation model integrating Cellular Automata and an Artificial Neural Network Multilayer Perceptron in the MOLUSCE plugin of QGIS was employed to forecast urban growth to 2030. The resulting maps showed consistently high accuracy levels exceeding 92% for both districts across all time periods. The analysis revealed that Multan's built-up area increased from 240.56 km[2] (6.58%) in 1990 to 440.30 km[2] (12.04%) in 2020, while Sargodha experienced more dramatic growth from 730.91 km[2] (12.69%) to 1,029.07 km[2] (17.83%). Vegetation cover remained dominant but showed significant variations, particularly in peri-urban areas. By 2030, Multan's urban area is projected to stabilize at 433.22 km[2], primarily expanding in the southeastern direction. Sargodha is expected to reach 1,404.97 km[2], showing more balanced multi-directional growth toward the northeast and north. The study presents an effective analytical method integrating cloud processing, GIS, and change simulation modeling to evaluate urban growth spatiotemporal patterns and LULC changes. This approach successfully identified the main LULC transformations and trends in the study areas while highlighting potential urbanization zones where opportunities exist for developing planned and managed urban settlements.

RevDate: 2025-02-27

Xiang Z, Ying F, Xue X, et al (2025)

Unmanned-Aerial-Vehicle Trajectory Planning for Reliable Edge Data Collection in Complex Environments.

Biomimetics (Basel, Switzerland), 10(2):.

With the rapid advancement of edge-computing technology, more computing tasks are moving from traditional cloud platforms to edge nodes. This shift imposes challenges on efficiently handling the substantial data generated at the edge, especially in extreme scenarios, where conventional data collection methods face limitations. UAVs have emerged as a promising solution for overcoming these challenges by facilitating data collection and transmission in various environments. However, existing UAV trajectory optimization algorithms often overlook the critical factor of battery capacity, leading to potential mission failures or safety risks. In this paper, we propose a trajectory planning approach, Hyperion, that incorporates charging considerations and employs a greedy decision-making strategy to optimize the trajectory length and energy consumption. By ensuring the UAV's ability to return to the charging station after data collection, our method enhances task reliability and UAV adaptability in complex environments.

RevDate: 2025-02-27

Huba M, Bistak P, Skrinarova J, et al (2025)

Performance Portrait Method: Robust Design of Predictive Integral Controller.

Biomimetics (Basel, Switzerland), 10(2):.

The performance portrait method (PPM) can be characterized as a systematized, digitalized version of the trial-and-error method, probably the most popular and most frequently used method of engineering work. Its digitization required the expansion of performance measures used to evaluate the step responses of dynamic systems. Based on process modeling, PPM also contributed to the classification of models describing linear and non-linear dynamic processes so that they approximate their dynamics using the smallest possible number of numerical parameters. Among the bio-inspired artificial-intelligence and optimization procedures used for the design of automatic controllers, PPM is distinguished by the possibility of repeatedly applying once-generated performance portraits (PPs). These represent information about the process obtained by evaluating the performance of setpoint and disturbance step responses for all relevant values of the determining loop parameters organized into a grid. It can be supported by the implementation of parallel calculations with optimized decomposition in the high-performance computing (HPC) cloud. The wide applicability of PPM ranges from verification of analytically calculated optimal settings achieved by various approaches to controller design, to the analysis as well as optimal and robust setting of controllers for processes where other known control design methods fail. One such situation is illustrated by an example of predictive integrating (PrI) controller design for processes with dominant time-delayed sensor dynamics, representing a counterpart of proportional-integrating (PI) controllers, the most frequently used solutions in practice. PrI controllers can be considered a generalization of the disturbance-response feedback, the oldest known method for the design of dead-time compensators, by Reswick. In applications with dominant dead-time and loop time constants located in the feedback (sensors), such as those met in magnetoencephalography (MEG), it makes it possible to significantly improve the control performance. PPM shows that, despite the absence of effective analytical control design methods for such situations, it is possible to obtain high-quality optimal solutions for processes that require working with uncertain models specified by interval parameters, while achieving invariance to changes in uncertain parameters.
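A minimal sketch of the performance-portrait idea, assuming a toy first-order process under a PI controller (none of this is the authors' implementation): score a grid of controller settings by a step-response cost and keep the best cell.

```python
# Toy performance portrait: each (kp, ki) grid cell is scored by the
# integral of absolute error (IAE) of a unit setpoint step response.

def iae_step_response(kp, ki, dt=0.01, t_end=5.0):
    """IAE for a PI loop around the first-order process y' = -y + u."""
    y, integ, iae = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += (-y + u) * dt             # explicit-Euler process update
        iae += abs(e) * dt
    return iae

# Build the "portrait": a grid of (kp, ki) cells, each scored by IAE.
grid = {(kp, ki): iae_step_response(kp, ki)
        for kp in (0.5, 1.0, 2.0, 4.0)
        for ki in (0.1, 0.5, 1.0, 2.0)}
best = min(grid, key=grid.get)
print(best, round(grid[best], 3))
```

In the actual method the grid covers all relevant loop parameters, several performance measures are evaluated per cell, and the portrait is reused across design tasks; the cloud/HPC support mentioned above parallelizes the per-cell evaluations.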

RevDate: 2025-02-26

He J, Sui D, Li L, et al (2025)

Fueling the development of elderly care services in China with digital technology: A provincial panel data analysis.

Heliyon, 11(3):e41490.

BACKGROUND: The global demographic shift towards an aging population presents significant challenges to elderly care services, which encompass the range of services designed to meet the health and social needs of older adults. Particularly in China, the aging society's diverse needs are often met with service inadequacies and inefficient resource allocation within the elderly care services framework.

OBJECTIVE: This study aims to investigate the transformative potential of digital technology, which includes innovations such as e-commerce, cloud computing, and artificial intelligence, on elderly care services in China. The objective is to assess the impact of digital technology on service quality, resource allocation, and operational efficiency within the elderly care services domain.

METHODS: Employing Stata software, the study conducts an analysis of panel data from 30 Chinese provinces over the period from 2014 to 2021, examining the integration and application of digital technology within elderly care services to identify trends and correlations.

RESULTS: The findings reveal that the integration of digital technology significantly enhances elderly care services, improving resource allocation and personalizing care, which in turn boosts the quality of life for the elderly. Specifically, a one-percentage-point increase in the development and adoption of digital technology within elderly care services is associated with a 21.5-percentage-point increase in care quality.

CONCLUSION: This research underscores the pivotal role of digital technology in revolutionizing elderly care services. The findings offer a strategic guide for policymakers and stakeholders to effectively harness digital technology, addressing the challenges posed by an aging society and enhancing the efficiency and accessibility of elderly care services in China. The application of digital technology in elderly care services is set to become a cornerstone in the future of elderly care, ensuring that the needs of the aging population are met with innovative and compassionate solutions.

RevDate: 2025-02-26

Awasthi C, Awasthi SP, PK Mishra (2024)

Secure and Reliable Fog-Enabled Architecture Using Blockchain With Functional Biased Elliptic Curve Cryptography Algorithm for Healthcare Services.

Blockchain in healthcare today, 7:.

Fog computing (FC) is an emerging technology that extends the capability and efficiency of cloud computing networks by acting as a bridge between the cloud and the device. Fog devices can process an enormous volume of information locally, are transportable, and can be deployed on a variety of systems. Because of its real-time processing and event reactions, it is ideal for healthcare. With such a wide range of characteristics, new security and privacy concerns arise. Ensuring the safe transmission, storage, and access of data, as well as the availability of medical devices, raises new security issues in healthcare. As an outcome, FC necessitates a unique approach to security and privacy metrics, as opposed to standard cloud computing methods. Hence, this article proposes an effective blockchain-based approach to securing healthcare services in FC. Here, the fog nodes gather information from medical sensor devices, and the data are validated using smart contracts in the blockchain network. We propose a functional biased elliptic curve cryptography algorithm to encrypt the data. The optimization is performed using the galactic bee colony optimization algorithm to enhance the encryption procedure. The performance of the suggested methodology is assessed and contrasted with traditional techniques. The results show that combining FC with blockchain increases the security of data transmission in healthcare services.

RevDate: 2025-03-05

Jin J, Li B, Wang X, et al (2025)

PennPRS: a centralized cloud computing platform for efficient polygenic risk score training in precision medicine.

medRxiv : the preprint server for health sciences.

Polygenic risk scores (PRS) are becoming increasingly vital for risk prediction and stratification in precision medicine. However, PRS model training presents significant challenges for broader adoption of PRS, including limited access to computational resources, difficulties in implementing advanced PRS methods, and availability and privacy concerns over individual-level genetic data. Cloud computing provides a promising solution with centralized computing and data resources. Here we introduce PennPRS (https://pennprs.org), a scalable cloud computing platform for online PRS model training in precision medicine. We developed novel pseudo-training algorithms for multiple PRS methods and ensemble approaches, enabling model training without requiring individual-level data. These methods were rigorously validated through extensive simulations and large-scale real data analyses involving over 6,000 phenotypes across various data sources. PennPRS supports online single- and multi-ancestry PRS training with seven methods, allowing users to upload their own data or query from more than 27,000 datasets in the GWAS Catalog, submit jobs, and download trained PRS models. Additionally, we applied our pseudo-training pipeline to train PRS models for over 8,000 phenotypes and made their PRS weights publicly accessible. In summary, PennPRS provides a novel cloud computing solution to improve the accessibility of PRS applications and reduce disparities in computational resources for the global PRS research community.
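Applying a trained PRS model reduces to a weighted sum of per-variant effect sizes and allele dosages; the sketch below is illustrative only (the variant IDs and weights are hypothetical, and PennPRS itself concerns training such weights without individual-level data):

```python
# Illustrative sketch: scoring one individual with a trained PRS model.

def polygenic_score(weights, dosages):
    """PRS = sum_i beta_i * dosage_i over variants shared by both inputs.

    weights: {variant_id: effect size (beta) from the trained model}
    dosages: {variant_id: effect-allele dosage in [0, 2]}
    """
    shared = weights.keys() & dosages.keys()
    return sum(weights[v] * dosages[v] for v in shared)

# Hypothetical variants and effect sizes, for illustration only.
weights = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.30}
dosages = {"rs1": 2, "rs2": 1, "rs3": 0, "rs4": 2}   # rs4 absent from the model
print(round(polygenic_score(weights, dosages), 3))   # → 0.19
```

Real pipelines score millions of variants from genotype files; the downloadable PRS weights mentioned above play the role of `weights` here.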

RevDate: 2025-02-21

Wolski M, Woloszynski T, Stachowiak G, et al (2025)

Bone Data Lake: A storage platform for bone texture analysis.

Proceedings of the Institution of Mechanical Engineers. Part H, Journal of engineering in medicine [Epub ahead of print].

Trabecular bone (TB) texture regions selected on hand and knee X-ray images can be used to detect and predict osteoarthritis (OA). However, the analysis has been impeded by increasing data volume and diversification of data formats. To address this problem, a novel storage platform, called Bone Data Lake (BDL), is proposed for the collection and retention of large numbers of images, TB texture regions, and parameters, regardless of their structure, size, and source. BDL consists of three components: a raw data storage, a processed data storage, and a data reference system. The performance of the BDL was evaluated using 20,000 knee and hand X-ray images of various formats (DICOM, PNG, JPEG, BMP, and compressed TIFF) and sizes (from 0.3 to 66.7 MB). The images were uploaded into BDL and automatically converted into a standardized 8-bit grayscale uncompressed TIFF format. TB regions of interest were then selected on the standardized images, and a data catalog containing metadata information about the regions was constructed. Next, TB texture parameters were calculated for the regions using the Variance Orientation Transform (VOT) and Augmented VOT (AVOT) methods and stored in XLSX files. The files were uploaded into BDL, transformed into CSV files, and cataloged. Results showed that the BDL efficiently transforms images and catalogs bone regions and texture parameters. BDL can serve as the foundation of a reliable, secure, and collaborative system for OA detection and prediction based on radiographs and TB texture.

RevDate: 2025-02-23

Shahid U, Kanwal S, Bano M, et al (2025)

Blockchain driven medical image encryption employing chaotic tent map in cloud computing.

Scientific reports, 15(1):6236.

Data security during transmission over public networks has become a key concern in an era of rapid digitization. Image data is especially vulnerable since it can be stored or transferred using public cloud services, making it open to illegal access, breaches, and eavesdropping. This work suggests a novel way to integrate blockchain technology with a Chaotic Tent map encryption scheme in order to overcome these issues. The outcome is a Blockchain-driven Chaotic Tent Map Encryption Scheme (BCTMES) for secure image transactions. The idea behind this strategy is to ensure an extra degree of security by fusing the distributed and immutable properties of blockchain technology with the intricate encryption offered by chaotic maps. To ensure that the image is transformed into a cipher form that is resistant to several types of attacks, the proposed BCTMES first encrypts it using the Chaotic Tent map encryption technique. The accompanying signed document is safely kept on the blockchain, and this encrypted image is subsequently uploaded to the cloud. The integrity and authenticity of the image are confirmed upon retrieval by utilizing blockchain's consensus mechanism, adding another layer of security against manipulation. Comprehensive performance evaluations show that BCTMES provides notable enhancements in important security parameters, such as entropy, correlation coefficient, key sensitivity, peak signal-to-noise ratio (PSNR), unified average changing intensity (UACI), and number of pixels change rate (NPCR). In addition to providing good defense against brute-force attacks, the large key size of [Formula: see text] further strengthens the system's resilience. To sum up, the BCTMES effectively addresses a number of prevalent risks to image security and offers a complete solution that may be implemented in cloud-based settings where data integrity and privacy are crucial. This work suggests a promising path for further investigation and practical uses in secure image transmission.
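A toy sketch of the tent-map keystream idea only (illustrative parameters; the paper's BCTMES adds blockchain verification and a far larger key space): chaotic iterates are quantized into bytes and XORed with the image data, so applying the same operation twice restores the original.

```python
# Toy chaotic-tent-map stream cipher sketch (NOT the paper's scheme and
# not secure as written): keystream bytes from tent-map iterates.

def tent_keystream(x0, mu, n):
    """Generate n pseudo-random bytes from tent-map iterates.
    x0 in (0, 1) and mu (close to 2) act as the secret key."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)   # tent map
        out.append(int(x * 256) % 256)               # quantize to a byte
    return bytes(out)

def tent_xor(data, x0=0.37, mu=1.9999):
    ks = tent_keystream(x0, mu, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes([10, 200, 33, 45, 90])      # pretend image bytes
cipher = tent_xor(pixels)
assert tent_xor(cipher) == pixels          # decryption is the same operation
```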

RevDate: 2025-02-22

Quevedo D, Do K, Delic G, et al (2025)

GPU Implementation of a Gas-Phase Chemistry Solver in the CMAQ Chemical Transport Model.

ACS ES&T air, 2(2):226-235.

The Community Multiscale Air Quality (CMAQ) model simulates atmospheric phenomena, including advection, diffusion, gas-phase chemistry, aerosol physics and chemistry, and cloud processes. Gas-phase chemistry is often a major computational bottleneck due to its representation as large systems of coupled nonlinear stiff differential equations. We leverage the parallel computational performance of graphics processing unit (GPU) hardware to accelerate the numerical integration of these systems in CMAQ's CHEM module. Our implementation, dubbed CMAQ-CUDA, in reference to its use in the Compute Unified Device Architecture (CUDA) general purpose GPU (GPGPU) computing solution, migrates CMAQ's Rosenbrock solver from Fortran to CUDA Fortran. CMAQ-CUDA accelerates the Rosenbrock solver such that simulations using the chemical mechanisms RACM2, CB6R5, and SAPRC07 require only 51%, 50%, or 35% as much time, respectively, as CMAQv5.4 to complete a chemistry time step. Our results demonstrate that CMAQ is amenable to GPU acceleration and highlight a novel Rosenbrock solver implementation for reducing the computational burden imposed by the CHEM module.

RevDate: 2025-02-20

Wu S, Bin G, Shi W, et al (2024)

Empowering diabetic foot ulcer prevention: A novel cloud-based plantar pressure monitoring system for enhanced self-care.

Technology and health care : official journal of the European Society for Engineering and Medicine [Epub ahead of print].

BACKGROUND: This study was prompted by the crucial impact of abnormal plantar pressure on diabetic foot ulcer development and the notable lack of its monitoring in daily life. Our research introduces a cloud-based, user-friendly plantar pressure monitoring system designed for seamless integration into daily routines.

OBJECTIVE: This innovative system aims to enable early ulcer prediction and proactive prevention, thereby substantially improving diabetic foot care through enhanced self-care and timely intervention.

METHODS: A novel, user-centric plantar pressure monitoring system was developed, integrating a wearable device, mobile application, and cloud computing for instantaneous diabetic foot care. This configuration facilitates comprehensive monitoring at 64 underfoot points. It encourages user engagement in health management. The system wirelessly transmits data to the cloud, where insights are processed and made available on the app, fostering proactive self-care through immediate feedback. Tailored for daily use, our system streamlines home monitoring, enhancing early ulcer detection and preventative measures.

RESULTS: A feasibility study validated our system's accuracy, demonstrating a relative error of approximately 4% compared to a commercial pressure sensing walkway. This precision affirms the system's efficacy for home-based monitoring and its potential in diabetic foot ulcer prevention, positioning it as a viable instrument for self-managed care.

CONCLUSIONS: The system dynamically captures and analyzes plantar pressure distribution and gait cycle details, highlighting its utility in early diabetic foot ulcer detection and management. Offering real-time, actionable data, it stands as a critical tool for individuals to actively participate in their foot health care, epitomizing the essence of self-managed healthcare practices.

RevDate: 2025-02-20

Balamurugan M, Narayanan K, Raghu N, et al (2025)

Role of artificial intelligence in smart grid - a mini review.

Frontiers in artificial intelligence, 8:1551661.

A smart grid is a structure that regulates, operates, and utilizes energy sources incorporated into it using smart communications and computerized techniques. The operation and maintenance of smart grids now depend quite extensively on artificial intelligence methods. Artificial intelligence is enabling more dependable, efficient, and sustainable energy systems, from improving load forecasting accuracy to optimizing power distribution and guaranteeing issue identification. An intelligent smart grid will be created by substituting artificial intelligence for manual tasks, achieving high efficiency, dependability, and affordability across the energy supply chain from production to consumption. Collecting a large diversity of data is vital for making effective decisions. Artificial intelligence applications operate by processing abundant data samples with advanced computing and strong communication collaboration. The development of appropriate infrastructure resources, including big data, cloud computing, and other collaboration platforms, must be enhanced for this type of operation. This paper attempts to summarize the artificial intelligence techniques used in various aspects of smart grid systems.

RevDate: 2025-02-22

Zan T, Jia X, Guo X, et al (2025)

Research on variable-length control chart pattern recognition based on sliding window method and SECNN-BiLSTM.

Scientific reports, 15(1):5921.

Control charts, as essential tools in Statistical Process Control (SPC), are frequently used to analyze whether production processes are under control. Most existing control chart recognition methods target fixed-length data, failing to meet the need to recognize variable-length control charts in production. This paper proposes a variable-length control chart recognition method based on the Sliding Window Method and an SE-attention CNN combined with a Bi-LSTM (SECNN-BiLSTM). A cloud-edge integrated recognition system was developed using wireless digital calipers, embedded devices, and cloud computing. Control chart data of varying lengths are transformed from one-dimensional sequences into two-dimensional matrices using a sliding window approach and then fed into a deep learning network combining the SE-attention CNN and Bi-LSTM. This network, inspired by residual structures, extracts multiple features to build a control chart recognition model. Simulations, the cloud-edge recognition system, and engineering applications demonstrate that this method efficiently and accurately recognizes variable-length control charts, establishing a foundation for more efficient pattern recognition.
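The sliding-window transformation described above can be sketched as follows; the window length and step size here are illustrative choices, not parameters from the paper:

```python
def sliding_window_matrix(series, window=4, step=2):
    """Cut a variable-length 1-D sequence into fixed-length windows,
    stacking them into a 2-D matrix a CNN can consume."""
    if len(series) < window:
        raise ValueError("series shorter than one window")
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, step)]

rows = sliding_window_matrix(list(range(10)))
# rows == [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Because the number of rows grows with the input length, the same fixed-size window logic accommodates control charts of any length.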

RevDate: 2025-02-26
CmpDate: 2025-02-26

Pricope NG, EG Dalton (2025)

Mapping coastal resilience: Precision insights for green infrastructure suitability.

Journal of environmental management, 376:124511.

Addressing the need for effective flood risk mitigation strategies and enhanced urban resilience to climate change, we introduce a cloud-computed Green Infrastructure Suitability Index (GISI) methodology. This approach combines remote sensing and geospatial modeling to synthesize land cover classifications, biophysical variables, and flood exposure data, mapping suitability for green infrastructure (GI) implementation at both street and landscape levels. The GISI methodology provides a flexible and robust tool for urban planning, capable of accommodating diverse data inputs and adjustments, making it suitable for various geographic contexts. Applied within the Wilmington Urban Area Metropolitan Planning Organization (WMPO) in North Carolina, USA, our findings show that residential parcels, constituting approximately 91% of the total identified suitable areas, are optimally positioned for GI integration. This underscores the potential for embedding GI within developed residential urban landscapes to bolster ecosystem and community resilience. Our analysis indicates that 7.19% of the WMPO area is highly suitable for street-level GI applications, while 1.88% is ideal for landscape GI interventions, offering opportunities to enhance stormwater management and biodiversity at larger and more connected spatial scales. By identifying specific parcels with high suitability for GI, this research provides a comprehensive and transferable, data-driven foundation for local and regional planning efforts. The scalability and adaptability of the proposed modeling approach make it a powerful tool for informing sustainable urban development practices. Future work will focus on more spatially-resolved models of these areas and the exploration of GI's multifaceted benefits at the local level, aiming to guide the deployment of GI projects that align with broader environmental and social objectives.
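A weighted-overlay blend of normalized layers is the standard way such a suitability index is computed; the layer values and weights below are hypothetical, since the paper's exact GISI formulation is not reproduced here:

```python
def suitability_index(layer_values, weights):
    """Weighted linear blend of normalized (0-1) layer values into a
    single suitability score, as in a GIS weighted-overlay analysis."""
    if len(layer_values) != len(weights):
        raise ValueError("one weight per layer required")
    return sum(v * w for v, w in zip(layer_values, weights)) / sum(weights)

# One hypothetical parcel: land-cover, biophysical, and flood-exposure scores
score = suitability_index([0.8, 0.6, 0.9], [0.4, 0.2, 0.4])
```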

RevDate: 2025-02-22
CmpDate: 2025-02-18

Bathelt F, Lorenz S, Weidner J, et al (2025)

Application of Modular Architectures in the Medical Domain - a Scoping Review.

Journal of medical systems, 49(1):27.

The healthcare sector is notable for its reliance on discrete, self-contained information systems, which are often characterised by the presence of disparate data silos. The growing demands for documentation, quality assurance, and secondary use of medical data for research purposes have underscored the necessity for solutions that are more flexible, straightforward to maintain and interoperable. In this context, modular systems have the potential to act as a catalyst for change, offering the capacity to encapsulate and combine functionalities in an adaptable manner. The objective of this scoping review is to determine the extent to which modular systems are employed in the medical field. The review will provide a detailed overview of the effectiveness of service-oriented or microservice architectures, the challenges that should be addressed during implementation, and the lessons that can be learned from countries with productive use of such modular architectures. The review shows a rise in the use of microservices, indicating a shift towards encapsulated autonomous functions. The implementation should use HL7 FHIR as the communication standard, deploy RESTful interfaces and standard protocols for technical data exchange, and apply the HIPAA Security Rule for security purposes. User involvement is essential, as is integrating services into existing workflows. Modular architectures can facilitate flexibility and scalability. However, there are well-documented performance issues associated with microservice architectures, namely high communication demand. One potential solution to this problem may be to integrate modular architectures into a cloud computing environment, which would require further investigation.

RevDate: 2025-02-19

Kelliher JM, Xu Y, Flynn MC, et al (2024)

Standardized and accessible multi-omics bioinformatics workflows through the NMDC EDGE resource.

Computational and structural biotechnology journal, 23:3575-3583.

Accessible and easy-to-use standardized bioinformatics workflows are necessary to advance microbiome research from observational studies to large-scale, data-driven approaches. Standardized multi-omics data enables comparative studies, data reuse, and applications of machine learning to model biological processes. To advance broad accessibility of standardized multi-omics bioinformatics workflows, the National Microbiome Data Collaborative (NMDC) has developed the Empowering the Development of Genomics Expertise (NMDC EDGE) resource, a user-friendly, open-source web application (https://nmdc-edge.org). Here, we describe the design and main functionality of the NMDC EDGE resource for processing metagenome, metatranscriptome, natural organic matter, and metaproteome data. The architecture relies on three main layers (web application, orchestration, and execution) to ensure flexibility and expansion to future workflows. The orchestration and execution layers leverage best practices in software containers and accommodate high-performance computing and cloud computing services. Further, we have adopted a robust user research process to collect feedback for continuous improvement of the resource. NMDC EDGE provides an accessible interface for researchers to process multi-omics microbiome data using production-quality workflows to facilitate improved data standardization and interoperability.

RevDate: 2025-02-17

Dinpajooh M, Hightower GL, Overstreet RE, et al (2025)

On the stability constants of metal-nitrate complexes in aqueous solutions.

Physical chemistry chemical physics : PCCP [Epub ahead of print].

Stability constants of simple reactions involving addition of the NO₃⁻ ion to hydrated metal complexes, [M(H₂O)ₓ]ⁿ⁺, are calculated with a computational workflow developed using cloud computing resources. The workflow performs conformational searches for metal complexes at both low and high levels of theory in conjunction with a continuum solvation model (CSM). The low-level theory is mainly used for the initial conformational searches, which are complemented with high-level density functional theory conformational searches in the CSM framework to determine the coordination chemistry relevant to stability constant calculations. The lowest-energy conformations are then used to obtain the reaction free energies for the addition of one NO₃⁻ to [M(H₂O)ₓ]ⁿ⁺ complexes, where M represents Fe(II), Fe(III), Sr(II), Ce(III), Ce(IV), or U(VI). Structural analysis of hundreds of geometries optimized at the high level of theory reveals that NO₃⁻ coordinates with Fe(II) and Fe(III) in either a monodentate or bidentate manner. Interestingly, the lowest-energy conformations of Fe(II) metal-nitrate complexes exhibit monodentate or bidentate coordination with a coordination number of 6, while the bidentate seven-coordinated Fe(II) metal-nitrate complexes are approximately 2 kcal mol⁻¹ higher in energy. Notably, for Fe(III) metal-nitrate complexes, the bidentate seven-coordinated configuration is more stable than the six-coordinated configurations (monodentate or bidentate) by a few thermal energy units. In contrast, Sr(II), Ce(III), Ce(IV), and U(VI) metal ions predominantly coordinate with NO₃⁻ in a bidentate manner, exhibiting typical coordination numbers of 7, 9, 9, and 5, respectively. Stability constants are calculated using linear free energy approaches to account for systematic errors, and good agreement is obtained between the calculated stability constants and the available experimental data.
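The stability constant calculation rests on the standard thermodynamic relation ΔG° = −RT ln K; a minimal sketch of converting a computed reaction free energy into log₁₀ K (standard constants, illustrative free energy value):

```python
import math

R_KCAL = 1.987204e-3  # gas constant, kcal mol^-1 K^-1

def log10_stability_constant(delta_g_kcal, temp_k=298.15):
    """log10 K from a reaction free energy (kcal/mol) via dG = -RT ln K."""
    return -delta_g_kcal / (R_KCAL * temp_k * math.log(10))

# A reaction free energy of about -1.364 kcal/mol at 298.15 K
# corresponds to log10 K of about 1, i.e. K of about 10.
```

The linear free energy approaches mentioned above then regress such computed values against experimental stability constants to absorb systematic errors.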

RevDate: 2025-02-18

Thilakarathne NN, Abu Bakar MS, Abas PE, et al (2025)

Internet of things enabled smart agriculture: Current status, latest advancements, challenges and countermeasures.

Heliyon, 11(3):e42136.

It is no wonder that agriculture plays a vital role in the development of some countries, as their economies rely on agricultural activities and the production of food for human survival. Owing to the ever-increasing world population, estimated at 7.9 billion in 2022, feeding this many people has become a concern, as the current rate of agricultural food production is constrained for various reasons. The advent of Internet of Things (IoT) based technologies in the 21st century has reshaped every industry, including agriculture, and has paved the way for smart agriculture, with technology used to automate and control most aspects of traditional agriculture. Smart agriculture, interchangeably known as smart farming, utilizes IoT and related enabling technologies such as cloud computing, artificial intelligence, and big data in agriculture and offers the potential to enhance agricultural operations by automating processes and making intelligent decisions, resulting in increased efficiency and better yields with minimum waste. Consequently, most governments are spending more money and offering incentives to encourage the switch from traditional to smart agriculture. The COVID-19 global pandemic also served as a catalyst for change in the agriculture industry, driving a shift toward greater reliance on technology over traditional labor for agricultural tasks. In this regard, this research aims to synthesize the current knowledge of smart agriculture, highlighting its current status, main components, latest application areas, advanced agricultural practices, hardware and software used, success stories, potential challenges and countermeasures, and future trends, to support the growth of the industry and serve as a reference for future research.

RevDate: 2025-02-14

Wyman A, Z Zhang (2025)

A Tutorial on the Use of Artificial Intelligence Tools for Facial Emotion Recognition in R.

Multivariate behavioral research [Epub ahead of print].

Automated detection of facial emotions has been an interesting topic in social and behavioral research for decades but has become feasible only recently. In this tutorial, we review three popular artificial intelligence based emotion detection programs that are accessible to R programmers: Google Cloud Vision, Amazon Rekognition, and Py-Feat. We present their advantages and disadvantages and provide sample code so that researchers can immediately begin designing, collecting, and analyzing emotion data. Furthermore, we provide an introductory-level explanation of the machine learning, deep learning, and computer vision algorithms that underlie most emotion detection programs, in order to improve literacy in explainable artificial intelligence in the social and behavioral science literature.

RevDate: 2025-02-13

Guturu H, Nichols A, Cantrell LS, et al (2025)

Cloud-Enabled Scalable Analysis of Large Proteomics Cohorts.

Journal of proteome research [Epub ahead of print].

Rapid advances in the depth and throughput of untargeted mass-spectrometry-based proteomic technologies enable large-scale cohort proteomic and proteogenomic analyses. As such, the data infrastructure and search engines required to process data must also scale. This challenge is amplified in search engines that rely on library-free match-between-runs (MBR) search, which enables enhanced depth per sample and data completeness. However, to date, no MBR-based search has scaled to process cohorts of thousands or more individuals. Here, we present a strategy to deploy search engines in a distributed cloud environment without source code modification, thereby enhancing resource scalability and throughput. Additionally, we present an algorithm, Scalable MBR, that replicates the MBR procedure of the popular DIA-NN software for scalability to thousands of samples. We demonstrate that Scalable MBR can search thousands of MS raw files in a few hours compared to the days required for the original DIA-NN MBR procedure, and that the results are almost indistinguishable from those of DIA-NN native MBR. We additionally show that empirical spectra generated by Scalable MBR better approximate DIA-NN native MBR than semiempirical alternatives such as ID-RT-IM MBR, preserving user choice to use empirical libraries in large cohort analysis. The method has been tested to scale to over 15,000 injections and is available for use in the Proteograph Analysis Suite.

RevDate: 2025-02-15

Li H, H Chung (2025)

Prediction of Member Forces of Steel Tubes on the Basis of a Sensor System with the Use of AI.

Sensors (Basel, Switzerland), 25(3):.

The rapid development of AI (artificial intelligence), sensor technology, high-speed Internet, and cloud computing has demonstrated the potential of data-driven approaches in structural health monitoring (SHM) within the field of structural engineering. Algorithms based on machine learning (ML) models are capable of discerning intricate structural behavioral patterns from real-time data gathered by sensors, thereby offering solutions to engineering quandaries in structural mechanics and SHM. This study presents an innovative approach based on AI and a fiber-reinforced polymer (FRP) double-helix sensor system for the prediction of forces acting on steel tube members in offshore wind turbine support systems; this enables structural health monitoring of the support system. The steel tube as the transitional member and the FRP double-helix sensor system were initially modeled in three dimensions using ABAQUS finite element software. Subsequently, the data obtained from the finite element analysis (FEA) were input into a fully connected neural network (FCNN) model, with the objective of establishing a nonlinear mapping between the inputs (strain) and the outputs (reaction force). In the FCNN model, the impact of the number of input variables on the model's predictive performance is examined through cross-comparison of different combinations and positions of the six sets of input variables. Based on an evaluation of engineering costs and the number of strain sensors, a series of potential variable combinations is identified for further optimization. These combinations were then optimized using a convolutional neural network (CNN) model, yielding optimal input variable combinations that achieve, with fewer sensors, the accuracy of combinations with more input variables. This not only improves the prediction performance of the model but also effectively controls engineering cost. The model performance was evaluated using several metrics, including R², MSE, MAE, and SMAPE. The results demonstrated that the CNN model exhibited notable advantages in terms of fitting accuracy and computational efficiency when confronted with a limited data set. To provide further support for practical applications, an interactive graphical user interface (GUI)-based sensor-coupled mechanical prediction system for steel tubes was developed. This system enables engineers to predict the member forces of steel tubes in real time, thereby enhancing the efficiency and accuracy of SHM for offshore wind turbine support systems.
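The evaluation metrics named above (R², MSE, MAE, SMAPE) have standard definitions and can be computed directly; a self-contained sketch:

```python
def regression_metrics(y_true, y_pred):
    """R^2, MSE, MAE, and SMAPE (in percent) for a set of predictions."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    mse = ss_res / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    smape = 100.0 / n * sum(
        2 * abs(p - t) / (abs(t) + abs(p)) for t, p in zip(y_true, y_pred))
    return {"r2": 1 - ss_res / ss_tot, "mse": mse, "mae": mae, "smape": smape}

m = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
# perfect predictions: r2 = 1.0 and mse = mae = smape = 0.0
```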

RevDate: 2025-02-15

Alboqmi R, RF Gamble (2025)

Enhancing Microservice Security Through Vulnerability-Driven Trust in the Service Mesh Architecture.

Sensors (Basel, Switzerland), 25(3):.

Cloud-native computing enhances the deployment of microservice architecture (MSA) applications by improving scalability and resilience, particularly in Beyond 5G (B5G) environments such as Sixth-Generation (6G) networks. This is achieved through the ability to replace traditional hardware dependencies with software-defined solutions. While service meshes enable secure communication for deployed MSAs, they struggle to identify vulnerabilities inherent to microservices. The reliance on third-party libraries and modules, essential for MSAs, introduces significant supply chain security risks. Implementing a zero-trust approach for MSAs requires robust mechanisms to continuously verify and monitor the software supply chain of deployed microservices. However, existing service mesh solutions lack runtime trust evaluation capabilities for continuous vulnerability assessment of third-party libraries and modules. This paper introduces a mechanism for continuous runtime trust evaluation of microservices, integrating vulnerability assessments within a service mesh to enhance the deployed MSA application. The proposed approach dynamically assigns trust scores to deployed microservices, rewarding secure practices such as timely vulnerability patching. It also enables the sharing of assessment results, enhancing mitigation strategies across the deployed MSA application. The mechanism is evaluated using the Train Ticket MSA, a complex open-source benchmark MSA application deployed with Docker containers, orchestrated using Kubernetes, and integrated with the Istio service mesh. Results demonstrate that the enhanced service mesh effectively supports dynamic trust evaluation based on the vulnerability posture of deployed microservices, significantly improving MSA security and paving the way for future self-adaptive solutions.
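The paper's exact scoring rules are not given here; a hypothetical trust-score update of the kind described, penalizing open vulnerabilities by severity and rewarding timely patching, might look like:

```python
def update_trust(score, open_cvss, patched_within_sla):
    """Illustrative trust-score update for one assessment cycle:
    penalize open vulnerabilities by CVSS base score (0-10 scale),
    reward timely patching, and clamp the result to [0, 1]."""
    penalty = sum(s / 10.0 for s in open_cvss) * 0.1
    reward = 0.05 if patched_within_sla else 0.0
    return max(0.0, min(1.0, score - penalty + reward))

# A service with one high (7.5) and one medium (5.0) open CVE that
# patched within its SLA: 0.9 - 0.125 + 0.05, about 0.825
new_score = update_trust(0.9, [7.5, 5.0], patched_within_sla=True)
```

The weights here are arbitrary; the point is that the score evolves at runtime with the service's vulnerability posture rather than being fixed at deployment.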

RevDate: 2025-02-15
CmpDate: 2025-02-13

Abushark YB, Hassan S, AI Khan (2025)

Optimized Adaboost Support Vector Machine-Based Encryption for Securing IoT-Cloud Healthcare Data.

Sensors (Basel, Switzerland), 25(3):.

The Internet of Things (IoT) connects various medical devices that enable remote monitoring, which can improve patient outcomes and help healthcare providers deliver precise diagnoses and better service to patients. However, IoT-based healthcare management systems face significant challenges in data security, such as maintaining the triad of confidentiality, integrity, and availability (CIA) and securing data transmission. This paper proposes a novel AdaBoost support vector machine (ASVM) based on the grey wolf optimization and international data encryption algorithm (ASVM-based GWO-IDEA) to secure medical data in an IoT-enabled healthcare system. The primary objective of this work was to prevent possible cyberattacks, unauthorized access, and tampering with the security of such healthcare systems. The proposed scheme encrypts the healthcare data before transmission, protecting them from unauthorized access and other network vulnerabilities. The scheme was implemented in Python, and its efficiency was evaluated using a Kaggle-based public healthcare dataset. The performance of the scheme was compared with existing strategies in terms of effective security parameters, such as confidentiality rate and throughput. Using the suggested methodology, the data transmission process was improved, achieving a high throughput of 97.86%, an improved resource utilization degree of 98.45%, and a high efficiency of 93.45% during data transmission.

RevDate: 2025-02-15

Mahedero Biot F, Fornes-Leal A, Vaño R, et al (2025)

A Novel Orchestrator Architecture for Deploying Virtualized Services in Next-Generation IoT Computing Ecosystems.

Sensors (Basel, Switzerland), 25(3):.

The Next-Generation IoT integrates diverse technological enablers, allowing the creation of advanced systems with increasingly complex requirements and maximizing the use of available IoT-edge-cloud resources. This paper introduces an orchestrator architecture for dynamic IoT scenarios, inspired by ETSI NFV MANO and Cloud Native principles, where distributed computing nodes often have unfixed and changing networking configurations. Unlike traditional approaches, this architecture also focuses on managing services across massively distributed mobile nodes, as demonstrated in the automotive use case presented. Apart from working as a MANO framework, the proposed solution efficiently handles service lifecycle management in large fleets of vehicles without relying on public or static IP addresses for connectivity. Its modular, microservices-based approach ensures adaptability to emerging trends like Edge Native, WebAssembly and RISC-V, positioning it as a forward-looking innovation for IoT ecosystems.

RevDate: 2025-02-15
CmpDate: 2025-02-13

Khan FU, Shah IA, Jan S, et al (2025)

Machine Learning-Based Resource Management in Fog Computing: A Systematic Literature Review.

Sensors (Basel, Switzerland), 25(3):.

This systematic literature review analyzes machine learning (ML)-based techniques for resource management in fog computing. Utilizing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, this paper focuses on ML and deep learning (DL) solutions. Resource management in the fog computing domain was thoroughly analyzed by identifying the key factors and constraints. A total of 68 research papers, including extended versions, were ultimately selected and included in this study. The findings highlight a strong preference for DL in addressing resource management challenges within the fog computing paradigm: 66% of the reviewed articles leveraged DL techniques, while 34% utilized ML. Key factors such as latency, energy consumption, task scheduling, and QoS are interconnected and critical for resource management optimization. The analysis reveals that latency, energy consumption, and QoS are the prime factors addressed in the literature on ML-based fog computing resource management. Latency is the most frequently addressed parameter, investigated in 77% of the articles, followed by energy consumption and task scheduling at 44% and 33%, respectively. Furthermore, according to our evaluation, an extensive range of challenges, i.e., computational resources and latency, scalability and management, data availability and quality, and model complexity and interpretability, are addressed by employing 73, 53, 45, and 46 ML/DL techniques, respectively.

RevDate: 2025-02-15

Ogwara NO, Petrova K, Yang MLB, et al (2025)

MINDPRES: A Hybrid Prototype System for Comprehensive Data Protection in the User Layer of the Mobile Cloud.

Sensors (Basel, Switzerland), 25(3):.

Mobile cloud computing (MCC) is a technological paradigm for providing services to mobile device (MD) users. A compromised MD may cause harm to both its user and to other MCC customers. This study explores the use of machine learning (ML) models and stochastic methods for the protection of Android MDs connected to the mobile cloud. To test the validity and feasibility of the proposed models and methods, the study adopted a proof-of-concept approach and developed a prototype system named MINDPRES. The static component of MINDPRES assesses the risk of the apps installed on the MD. It uses a device-based ML model for static feature analysis and a cloud-based stochastic risk evaluator. The device-based hybrid component of MINDPRES monitors app behavior in real time. It deploys two ML models and functions as an intrusion detection and prevention system (IDPS). The performance evaluation results of the prototype showed that the accuracy achieved by the methods for static and hybrid risk evaluation compared well with results reported in recent work. Power consumption data indicated that MINDPRES did not create an overload. This study contributes a feasible and scalable framework for building distributed systems for the protection of the data and devices of MCC customers.

RevDate: 2025-02-15

Cabrera VE, Bewley J, Breunig M, et al (2025)

Data Integration and Analytics in the Dairy Industry: Challenges and Pathways Forward.

Animals : an open access journal from MDPI, 15(3):.

The dairy industry faces significant challenges in data integration and analysis, which are critical for informed decision-making, operational optimization, and sustainability. Data integration-combining data from diverse sources, such as herd management systems, sensors, and diagnostics-remains difficult due to the lack of standardization, infrastructure barriers, and proprietary concerns. This commentary explores these issues based on insights from a multidisciplinary group of stakeholders, including industry experts, researchers, and practitioners. Key challenges discussed include the absence of a national animal identification system in the US, high IT resource costs, reluctance to share data due to competitive disadvantages, and differences in global data handling practices. Proposed pathways forward include developing comprehensive data integration guidelines, enhancing farmer awareness through training programs, and fostering collaboration across industry, academia, and technology providers. Additional recommendations involve improving data exchange standards, addressing interoperability issues, and leveraging advanced technologies, such as artificial intelligence and cloud computing. Emphasis is placed on localized data integration solutions for farm-level benefits and broader research applications to advance sustainability, traceability, and profitability within the dairy supply chain. These outcomes provide a foundation for achieving streamlined data systems, enabling actionable insights, and fostering innovation in the dairy industry.

RevDate: 2025-02-11

Bhat SN, Jindal GD, GD Nagare (2024)

Development and Validation of Cloud-based Heart Rate Variability Monitor.

Journal of medical physics, 49(4):654-660.

CONTEXT: This article introduces a new cloud-based point-of-care system to monitor heart rate variability (HRV).

AIMS: Medical investigations carried out at dispensaries or hospitals impose substantial physiological and psychological stress (the white-coat effect), disrupting cardiovascular homeostasis; this can be addressed by a point-of-care cloud computing system that facilitates secure patient monitoring.

SETTINGS AND DESIGN: The device employs MAX30102 sensor to collect peripheral pulse signal using photoplethysmography technique. The non-invasive design ensures patient compliance while delivering critical insights into Autonomic Nervous System activity. Preliminary validations indicate the system's potential to enhance clinical outcomes by supporting timely, data-driven therapeutic adjustments based on HRV metrics.

SUBJECTS AND METHODS: This article explores the system's development, functionality, and reliability. The designed system is validated against a peripheral pulse analyzer (PPA), a research product of the Electronics Division, Bhabha Atomic Research Centre.

STATISTICAL ANALYSIS USED: The output of the developed HRV monitor (HRVM) is compared with the output of the PPA using Pearson's correlation and the Mann-Whitney U-test. Peak positions and spectrum values are validated using Pearson's correlation, mean error, standard deviation (SD) of error, and range of error. HRV parameters such as total power, mean, peak amplitude, and power in the very low frequency, low frequency, and high frequency bands are validated using the Mann-Whitney U-test.
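The Pearson's correlation and error statistics used for this validation can be sketched in plain Python (illustrative, not the authors' code):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def error_stats(x, y):
    """Mean error, sample SD of error, and range of error between
    paired outputs of two instruments."""
    errs = [a - b for a, b in zip(x, y)]
    n = len(errs)
    mean = sum(errs) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in errs) / (n - 1))
    return mean, sd, max(errs) - min(errs)
```

Perfectly proportional series give r = 1.0, and identical series give zero mean error, SD, and range, which is the direction the reported results (r > 0.97, errors in acceptable range) point.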

RESULTS: Pearson's correlation for spectrum values is found to be greater than 0.97 in all subjects. Mean error, SD of error, and range of error are found to be within acceptable ranges.

CONCLUSIONS: Statistical results validate the new HRVM system against PPA for use in cloud computing and point-of-care testing.

RevDate: 2025-02-08

He C, Zhao Z, Zhang X, et al (2025)

RotInv-PCT: Rotation-Invariant Point Cloud Transformer via feature separation and aggregation.

Neural networks : the official journal of the International Neural Network Society, 185:107223 pii:S0893-6080(25)00102-9 [Epub ahead of print].

The widespread use of point clouds has spurred the rapid development of neural networks for point cloud processing. A crucial property of these networks is maintaining consistent output results under random rotations of the input point cloud, namely, rotation invariance. The dominant approach to achieving rotation invariance is to construct local coordinate systems for computing invariant local point cloud coordinates. However, this method neglects the relative pose relationships between local point cloud structures, leading to a decline in network performance. To address this limitation, we propose a novel Rotation-Invariant Point Cloud Transformer (RotInv-PCT). This method extracts the local abstract shape features of the point cloud using Local Reference Frames (LRFs) and explicitly computes the spatial relative pose features between local point clouds, both of which are proven to be rotation-invariant. Furthermore, to capture the long-range pose dependencies between points, we introduce an innovative Feature Aggregation Transformer (FAT) model, which seamlessly fuses the pose features with the shape features to obtain a globally rotation-invariant representation. Moreover, to manage large-scale point clouds, we utilize hierarchical random downsampling to gradually decrease the scale of point clouds, followed by feature aggregation through FAT. To demonstrate the effectiveness of RotInv-PCT, we conducted comparative experiments across various tasks and datasets, including point cloud classification on ScanObjectNN and ModelNet40, part segmentation on ShapeNet, and semantic segmentation on S3DIS and KITTI. Thanks to our provably rotation-invariant features and FAT, our method generally outperforms state-of-the-art networks. In particular, we highlight that RotInv-PCT achieved a 2% improvement in real-world point cloud classification tasks compared to the strongest baseline. Furthermore, in the semantic segmentation task, we improved the performance on the S3DIS dataset by 10% and, for the first time, realized rotation-invariant point cloud semantic segmentation on the KITTI dataset.

RevDate: 2025-02-08

Nantakeeratipat T, Apisaksirikul N, Boonrojsaree B, et al (2024)

Automated machine learning for image-based detection of dental plaque on permanent teeth.

Frontiers in dental medicine, 5:1507705.

INTRODUCTION: To detect dental plaque, manual assessment and plaque-disclosing dyes are commonly used. However, they are time-consuming and prone to human error. This study aims to investigate the feasibility of using Google Cloud's Vertex artificial intelligence (AI) automated machine learning (AutoML) to develop a model for detecting dental plaque levels on permanent teeth using undyed photographic images.

METHODS: Photographic images of both undyed and corresponding erythrosine solution-dyed upper anterior permanent teeth from 100 dental students were captured using a smartphone camera. All photos were cropped to individual tooth images. Dyed images were analyzed to classify plaque levels based on the percentage of dyed surface area into mild (<30%), moderate (30%-60%), and heavy (>60%) categories. These true labels were used as the ground truth for undyed images. Two AutoML models, a three-class model (mild, moderate, heavy plaque) and a two-class model (acceptable vs. unacceptable plaque), were developed using undyed images in the Vertex AI environment. Both models were evaluated based on precision, recall, and F1-score.
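The three-class thresholds translate directly into a labeling rule; the two-class acceptable/unacceptable mapping below is an assumption for illustration, since the abstract does not state its cutoff:

```python
def plaque_level(dyed_fraction):
    """Map the dyed surface-area fraction of a tooth to the study's
    three plaque classes: mild (<30%), moderate (30%-60%), heavy (>60%)."""
    pct = dyed_fraction * 100
    if pct < 30:
        return "mild"
    if pct <= 60:
        return "moderate"
    return "heavy"

def acceptable(dyed_fraction):
    """Hypothetical two-class relabeling: only 'mild' plaque is acceptable."""
    return plaque_level(dyed_fraction) == "mild"
```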

RESULTS: The three-class model achieved an average precision of 0.907, with the highest precision (0.983) in the heavy plaque category. Misclassifications were more common in the mild and moderate categories. The two-class acceptable-unacceptable model demonstrated improved performance with an average precision of 0.964 and an F1-score of 0.931.

CONCLUSION: This study demonstrated the potential of Vertex AI AutoML for non-invasive detection of dental plaque. While the two-class model showed promise for clinical use, further studies with larger datasets are recommended to enhance model generalization and real-world applicability.

RevDate: 2025-02-06

Saadati S, Sepahvand A, M Razzazi (2025)

Cloud and IoT based smart agent-driven simulation of human gait for detecting muscles disorder.

Heliyon, 11(2):e42119.

Motion disorders affect a significant portion of the global population. While some symptoms can be managed with medications, these treatments often impact all muscles uniformly, not just the affected ones, leading to potential side effects including involuntary movements, confusion, and decreased short-term memory. Currently, there is no dedicated application for differentiating healthy muscles from abnormal ones. Existing analysis applications, designed for other purposes, often lack essential software engineering features such as a user-friendly interface, infrastructure independence, usability and learnability, cloud computing capabilities, and AI-based assistance. This research proposes a computer-based methodology to analyze human motion and differentiate between healthy and unhealthy muscles. First, an IoT-based approach is proposed to digitize human motion using smartphones instead of less accessible wearable sensors and markers. The motion data then drive a simulation for analyzing the neuromusculoskeletal system. An agent-driven modeling method ensures the naturalness, accuracy, and interpretability of the simulation, incorporating neuromuscular details such as Henneman's size principle, action potentials, motor units, and biomechanical principles. The results are then provided to medical and clinical experts to aid in differentiating between healthy and unhealthy muscles and for further investigation. Additionally, a deep learning-based ensemble framework is proposed to assist in the analysis of the simulation results, offering both accuracy and interpretability. A user-friendly graphical interface enhances the application's usability. Being fully cloud-based, the application is infrastructure-independent and can be accessed on smartphones, PCs, and other devices without installation.
This strategy not only addresses the current challenges in treating motion disorders but also paves the way for other clinical simulations by considering both scientific and computational requirements.

RevDate: 2025-02-07

Papudeshi B, Roach MJ, Mallawaarachchi V, et al (2025)

Sphae: an automated toolkit for predicting phage therapy candidates from sequencing data.

Bioinformatics advances, 5(1):vbaf004.

MOTIVATION: Phage therapy offers a viable alternative for bacterial infections amid rising antimicrobial resistance. Its success relies on selecting safe and effective phage candidates that require comprehensive genomic screening to identify potential risks. However, this process is often labor intensive and time-consuming, hindering rapid clinical deployment.

RESULTS: We developed Sphae, an automated bioinformatics pipeline designed to assess the therapeutic potential of a phage in under 10 minutes. Using the Snakemake workflow manager, Sphae integrates tools for quality control, assembly, genome assessment, and annotation tailored specifically for phage biology. Sphae automates the detection of key genomic markers, including virulence factors, antimicrobial resistance genes, and lysogeny indicators such as integrase, recombinase, and transposase, which could preclude therapeutic use. Among the 65 phage sequences analyzed, 28 showed therapeutic potential, 8 failed due to low sequencing depth, 22 contained prophage or virulent markers, and 23 had multiple phage genomes. This workflow produces a report to quickly assess phage safety and therapy suitability. Sphae is scalable and portable, facilitating efficient deployment across most high-performance computing and cloud platforms, and accelerating the genomic evaluation process.

Sphae source code is freely available at https://github.com/linsalrob/sphae, with installation supported via Conda, PyPI, and Docker containers.
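
Conceptually, the marker screening step amounts to a lookup over annotated gene products. The record format and helper below are hypothetical illustrations, not Sphae's actual API or report format:

```python
# Marker classes named in the abstract; string matching here is a toy
# stand-in for Sphae's annotation-based detection.
LYSOGENY_MARKERS = {"integrase", "recombinase", "transposase"}
RISK_MARKERS = {"virulence factor", "antimicrobial resistance"}

def screen_phage(annotations):
    """Flag a phage's annotated gene products against the marker classes;
    any hit precludes therapeutic use."""
    hits = {a for a in annotations
            if any(m in a.lower() for m in LYSOGENY_MARKERS | RISK_MARKERS)}
    return {"therapeutic_candidate": not hits, "flagged": sorted(hits)}

# A lysogeny marker is present, so this phage is not a candidate.
report = screen_phage(["terminase large subunit", "tyrosine Integrase",
                       "capsid protein"])
```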

RevDate: 2025-02-04

Bensaid R, Labraoui N, Abba Ari AA, et al (2024)

SA-FLIDS: secure and authenticated federated learning-based intelligent network intrusion detection system for smart healthcare.

PeerJ. Computer science, 10:e2414.

Smart healthcare systems are gaining increased practicality and utility, driven by continuous advancements in artificial intelligence technologies, cloud and fog computing, and the Internet of Things (IoT). However, despite these transformative developments, challenges persist within IoT devices, encompassing computational constraints, storage limitations, and vulnerability to attacks. These attacks target sensitive health information, compromise data integrity, and pose obstacles to the overall resilience of the healthcare sector. To address these vulnerabilities, Network-based Intrusion Detection Systems (NIDSs) are crucial in fortifying smart healthcare networks and ensuring secure use of Internet of Medical Things (IoMT)-based applications by mitigating security risks. Thus, this article proposes a novel Secure and Authenticated Federated Learning-based NIDS framework using Blockchain (SA-FLIDS) for fog-IoMT-enabled smart healthcare systems. Our research aims to improve data privacy and reduce communication costs. Furthermore, we also address weaknesses in decentralized learning systems, like Sybil and Model Poisoning attacks. We leverage the blockchain-based Self-Sovereign Identity (SSI) model to handle client authentication and secure communication. Additionally, we use the Trimmed Mean method to aggregate data. This helps reduce the effect of unusual or malicious inputs when creating the overall model. Our approach is evaluated on real IoT traffic datasets such as CICIoT2023 and EdgeIIoTset. It demonstrates exceptional robustness against adversarial attacks. These findings underscore the potential of our technique to improve the security of IoMT-based healthcare applications.
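
The Trimmed Mean aggregation step can be sketched as follows; the trim fraction and the toy client updates are illustrative only, not values from the paper:

```python
def trimmed_mean(values, trim_fraction=0.2):
    """Trimmed mean: drop the k smallest and k largest client values
    before averaging, limiting the pull of poisoned updates."""
    k = int(len(values) * trim_fraction)
    ordered = sorted(values)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

def aggregate(client_updates, trim_fraction=0.2):
    """Aggregate same-length client weight vectors coordinate by coordinate."""
    return [trimmed_mean(coord, trim_fraction)
            for coord in zip(*client_updates)]

# Five clients; the last one sends a poisoned (outlying) update.
updates = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [0.95, 1.05], [100.0, -100.0]]
agg = aggregate(updates)  # the outlier is trimmed away in each coordinate
```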

RevDate: 2025-02-04

Hoang TH, Fuhrman J, Klarqvist M, et al (2025)

Enabling end-to-end secure federated learning in biomedical research on heterogeneous computing environments with APPFLx.

Computational and structural biotechnology journal, 28:29-39.

Facilitating large-scale, cross-institutional collaboration in biomedical machine learning (ML) projects requires a trustworthy and resilient federated learning (FL) environment to ensure that sensitive information such as protected health information is kept confidential. Specifically designed for this purpose, this work introduces APPFLx - a low-code, easy-to-use FL framework that enables easy setup, configuration, and running of FL experiments. APPFLx removes administrative boundaries of research organizations and healthcare systems while providing secure end-to-end communication, privacy-preserving functionality, and identity management. Furthermore, it is completely agnostic to the underlying computational infrastructure of participating clients, allowing an instantaneous deployment of this framework into existing computing infrastructures. Experimentally, the utility of APPFLx is demonstrated in two case studies: (1) predicting participant age from electrocardiogram (ECG) waveforms, and (2) detecting COVID-19 disease from chest radiographs. Here, ML models were securely trained across heterogeneous computing resources, including a combination of on-premise high-performance computing and cloud computing facilities. By securely unlocking data from multiple sources for training without directly sharing it, these FL models enhance generalizability and performance compared to centralized training models while ensuring data remains protected. In conclusion, APPFLx demonstrated itself as an easy-to-use framework for accelerating biomedical studies across organizations and healthcare systems on large datasets while maintaining the protection of private medical data.

RevDate: 2025-02-04

Zheng X, Z Weng (2025)

Design of an enhanced feature point matching algorithm utilizing 3D laser scanning technology for sculpture design.

PeerJ. Computer science, 11:e2628.

As the aesthetic appreciation for art continues to grow, there is an increased demand for precision and detailed control in sculptural works. The advent of 3D laser scanning technology introduces transformative new tools and methodologies for refining correction systems in sculpture design. This article proposes a feature point matching algorithm based on fragment measurement and the iterative closest point (ICP) methodology, leveraging 3D laser scanning technology, namely Fragment Measurement Iterative Closest Point Feature Point Matching (FM-ICP-FPM). The FM-ICP-FPM approach uses the overlapping area of the two sculpture perspectives as a reference for attaching feature points. It employs the 3D measurement system to capture physical point cloud data from the two surfaces to enable the initial alignment of feature points. Feature vectors are generated by segmenting the region around the feature points and computing the intra-block gradient histogram. Subsequently, distance threshold conditions are set based on the constructed feature vectors and the preliminary feature point matches established during the coarse alignment to achieve precise feature point matching. Experimental results demonstrate the exceptional performance of the FM-ICP-FPM algorithm at a sampling interval of 200: the correct matching rate reaches an impressive 100%, while the mean translation error (MTE) is a mere 154 mm and the mean rotation angle error (MRAE) is 0.065 degrees. These indicators represent the translational and rotational deviation of the registered model, respectively. The low error values demonstrate that the FM-ICP-FPM algorithm excels in registration accuracy and can generate highly consistent three-dimensional models.
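
The threshold-based fine-matching step can be sketched as a nearest-neighbour search over feature vectors. This simplified stand-in omits the ICP coarse alignment and the gradient-histogram descriptor construction; the descriptors and threshold below are illustrative:

```python
import math

def match_features(desc_a, desc_b, max_dist=0.5):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only pairs under a distance threshold (a simplified stand-in
    for FM-ICP-FPM's threshold-conditioned fine matching)."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, math.dist(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j))
    return matches

a = [(0.1, 0.9), (0.8, 0.2)]
b = [(0.82, 0.19), (0.12, 0.88), (5.0, 5.0)]
matches = match_features(a, b)  # [(0, 1), (1, 0)]; the far point is unmatched
```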

RevDate: 2025-02-04

Alrowais F, Arasi MA, Alotaibi SS, et al (2025)

Deep gradient reinforcement learning for music improvisation in cloud computing framework.

PeerJ. Computer science, 11:e2265.

Artificial intelligence (AI) in music improvisation offers promising new avenues for developing human creativity. This article discusses the difficulty of writing dynamic, flexible musical compositions in real time. We explore using reinforcement learning (RL) techniques to create more interactive and responsive music creation systems. Here, the musical structures train an RL agent to navigate the complex space of musical possibilities to provide improvisations. The melodic framework in the input musical data is initially identified using bi-directional gated recurrent units. Lyrical concepts such as notes, chords, and rhythms from the recognised framework are transformed into a format suitable for RL input. The deep gradient-based reinforcement learning technique used in this research formulates a reward system that directs the agent to compose aesthetically intriguing and harmonically cohesive musical improvisations. The improvised music is further rendered in the MIDI format. The Bach Chorales dataset, with six different attributes relevant to musical compositions, is employed in implementing the present research. The model was set up in a containerised cloud environment and controlled for smooth load distribution. Five parameters, namely pitch frequency (PF), standard pitch delay (SPD), average distance between peaks (ADP), note duration gradient (NDG) and pitch class gradient (PCG), are leveraged to assess the quality of the improvised music. The proposed model obtains +0.15 of PF, -0.43 of SPD, -0.07 of ADP and 0.0041 of NDG, which are better values than those of other improvisation methods.
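
A tabular toy version of reward-driven note selection might look as follows; the consonance-based reward and the Q-learning agent are stand-ins, since the abstract specifies neither the reward design nor the network details:

```python
import random

random.seed(0)
NOTES = list(range(12))            # pitch classes
CONSONANT = {0, 3, 4, 5, 7, 8, 9}  # melodic intervals rewarded as cohesive (toy choice)

def reward(prev, note):
    """Toy reward: +1 for a consonant melodic interval, -1 otherwise."""
    return 1.0 if abs(note - prev) % 12 in CONSONANT else -1.0

# Tabular stand-in for the deep gradient-based agent: learn which next
# pitch to play given the previous one, via epsilon-greedy updates.
Q = {(p, n): 0.0 for p in NOTES for n in NOTES}
prev = 0
for _ in range(2000):
    if random.random() < 0.1:                         # explore
        note = random.choice(NOTES)
    else:                                             # exploit
        note = max(NOTES, key=lambda n: Q[(prev, n)])
    Q[(prev, note)] += 0.5 * (reward(prev, note) - Q[(prev, note)])
    prev = note

best_next = max(NOTES, key=lambda n: Q[(0, n)])  # learned consonant move from pitch 0
```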

RevDate: 2025-02-27

Gadde RSK, Devaguptam S, Ren F, et al (2025)

Chatbot-assisted quantum chemistry for explicitly solvated molecules.

Chemical science, 16(9):3852-3864.

Advanced computational chemistry software packages have transformed chemical research by leveraging quantum chemistry and molecular simulations. Despite their capabilities, the complicated design and the requirement for specialized computing hardware hinder their applications in the broad chemistry community. Here, we introduce AutoSolvateWeb, a chatbot-assisted computational platform that addresses both challenges simultaneously. This platform employs a user-friendly chatbot interface to guide non-experts through a multistep procedure involving various computational packages, enabling them to configure and execute complex quantum mechanical/molecular mechanical (QM/MM) simulations of explicitly solvated molecules. Moreover, this platform operates on cloud infrastructure, allowing researchers to run simulations without hardware configuration challenges. As a proof of concept, AutoSolvateWeb demonstrates that combining virtual agents with cloud computing can democratize access to sophisticated computational research tools.

RevDate: 2025-01-31

Rateb R, Hadi AA, Tamanampudi VM, et al (2025)

An optimal workflow scheduling in IoT-fog-cloud system for minimizing time and energy.

Scientific reports, 15(1):3607.

Today, with the increasing use of the Internet of Things (IoT) worldwide, a growing variety of workflows must be stored and processed on computing platforms. This drives up costs for computing resource providers and increases system Energy Consumption (EC). This paper therefore examines the workflow scheduling problem for IoT devices in the fog-cloud environment, with reducing the computing system's EC and the MakeSpan Time (MST) of workflows as the main objectives, under priority, deadline, and reliability constraints. To achieve these objectives, a combination of the Aquila and Salp Swarm Algorithms (ASSA) is used to select the best Virtual Machines (VMs) for executing the workflows. In each iteration of ASSA, a set of VMs is selected; the Reducing MakeSpan Time (RMST) technique then reduces the MST of the workflow on the selected VMs while maintaining reliability and meeting deadlines. Finally, VM merging and the Dynamic Voltage Frequency Scaling (DVFS) technique are applied to the RMST output to reduce static and dynamic EC, respectively. Experimental results show the effectiveness of the proposed method compared to previous methods.
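
The EC/MST trade-off that DVFS exploits can be sketched with the standard cubic dynamic-power model (dynamic power roughly proportional to f³ when supply voltage tracks frequency); the constants here are illustrative, not from the paper:

```python
def task_energy(cycles, f, p_static=0.5, k=1e-27):
    """Energy and time for one task under DVFS: running at a lower
    frequency lengthens the MakeSpan Time (MST) but cuts dynamic
    Energy Consumption (EC). Constants are illustrative."""
    t = cycles / f            # execution time (s)
    p_dynamic = k * f ** 3    # dynamic power (W), cubic in frequency
    return (p_static + p_dynamic) * t, t

e_fast, t_fast = task_energy(1e9, f=2e9)  # full speed
e_slow, t_slow = task_energy(1e9, f=1e9)  # half speed: slower but cheaper
```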

RevDate: 2025-02-19
CmpDate: 2025-02-19

Bai Y, Zhao H, Shi X, et al (2025)

Towards practical and privacy-preserving CNN inference service for cloud-based medical imaging analysis: A homomorphic encryption-based approach.

Computer methods and programs in biomedicine, 261:108599.

BACKGROUND AND OBJECTIVE: Cloud-based Deep Learning as a Service (DLaaS) has transformed biomedicine by enabling healthcare systems to harness the power of deep learning for biomedical data analysis. However, privacy concerns emerge when sensitive user data must be transmitted to untrusted cloud servers. Existing privacy-preserving solutions are hindered by significant latency issues, stemming from the computational complexity of inner product operations in convolutional layers and the high communication costs of evaluating nonlinear activation functions. These limitations make current solutions impractical for real-world applications.

METHODS: In this paper, we address the challenges in mobile cloud-based medical imaging analysis, where users aim to classify private body-related radiological images using a Convolutional Neural Network (CNN) model hosted on a cloud server while ensuring data privacy for both parties. We propose PPCNN, a practical and privacy-preserving framework for CNN Inference. It introduces a novel mixed protocol that combines a low-expansion homomorphic encryption scheme with the noise-based masking method. Our framework is designed based on three key ideas: (1) optimizing computation costs by shifting unnecessary and expensive homomorphic multiplication operations to the offline phase, (2) introducing a coefficient-aware packing method to enable efficient homomorphic operations during the linear layer of the CNN, and (3) employing data masking techniques for nonlinear operations of the CNN to reduce communication costs.
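
Idea (3), masking for nonlinear operations, can be caricatured with simple additive blinding; this is a much-simplified sketch, not PPCNN's actual protocol or mask distribution:

```python
import random

def server_mask(x, rng):
    """Blind the true pre-activation x with a one-time random mask before
    the nonlinear step (toy version of noise-based masking)."""
    r = rng.uniform(-1000.0, 1000.0)
    return x + r, r

def relu_on_masked(masked, r):
    """The holder of the mask removes it and applies the nonlinear ReLU;
    the raw activation is never transmitted together with its mask."""
    return max(0.0, masked - r)

rng = random.Random(42)
masked, r = server_mask(-2.0, rng)
assert relu_on_masked(masked, r) == 0.0  # ReLU of a negative activation
```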

RESULTS: We implemented PPCNN and evaluated its performance on three real-world radiological image datasets. Experimental results show that PPCNN outperforms state-of-the-art methods in mobile cloud scenarios, achieving superior response times and lower usage costs.

CONCLUSIONS: This study introduces an efficient and privacy-preserving framework for cloud-based medical imaging analysis, marking a significant step towards practical, secure, and trustworthy AI-driven healthcare solutions.

RevDate: 2025-02-14
CmpDate: 2025-01-28

Oh S, S Lee (2025)

Rehabilomics Strategies Enabled by Cloud-Based Rehabilitation: Scoping Review.

Journal of medical Internet research, 27:e54790.

BACKGROUND: Rehabilomics, or the integration of rehabilitation with genomics, proteomics, metabolomics, and other "-omics" fields, aims to promote personalized approaches to rehabilitation care. Cloud-based rehabilitation offers streamlined patient data management and sharing and could potentially play a significant role in advancing rehabilomics research. This study explored the current status and potential benefits of implementing rehabilomics strategies through cloud-based rehabilitation.

OBJECTIVE: This scoping review aimed to investigate the implementation of rehabilomics strategies through cloud-based rehabilitation and summarize the current state of knowledge within the research domain. This analysis aims to understand the impact of cloud platforms on the field of rehabilomics and provide insights into future research directions.

METHODS: In this scoping review, we systematically searched major academic databases, including CINAHL, Embase, Google Scholar, PubMed, MEDLINE, ScienceDirect, Scopus, and Web of Science to identify relevant studies and apply predefined inclusion criteria to select appropriate studies. Subsequently, we analyzed 28 selected papers to identify trends and insights regarding cloud-based rehabilitation and rehabilomics within this study's landscape.

RESULTS: This study reports the various applications and outcomes of implementing rehabilomics strategies through cloud-based rehabilitation. In particular, a comprehensive analysis was conducted on 28 studies, including 16 (57%) focused on personalized rehabilitation and 12 (43%) on data security and privacy. The distribution of articles among the 28 studies based on specific keywords included 3 (11%) on the cloud, 4 (14%) on platforms, 4 (14%) on hospitals and rehabilitation centers, 5 (18%) on telehealth, 5 (18%) on home and community, and 7 (25%) on disease and disability. Cloud platforms offer new possibilities for data sharing and collaboration in rehabilomics research, underpinning a patient-centered approach and enhancing the development of personalized therapeutic strategies.

CONCLUSIONS: This scoping review highlights the potential significance of cloud-based rehabilomics strategies in the field of rehabilitation. The use of cloud platforms is expected to strengthen patient-centered data management and collaboration, contributing to the advancement of innovative strategies and therapeutic developments in rehabilomics.

RevDate: 2025-01-30

Roth I, O Cohen (2025)

The use of an automatic remote weight management system to track treatment response, identified drugs supply shortage and its consequences: A pilot study.

Digital health, 11:20552076251314090.

OBJECTIVE: The objective of this pilot study is to evaluate the feasibility of using an automatic weight management system to follow patients' response to weight reduction medications and to identify early deviations from weight trajectories.

METHODS: The pilot study involved 11 participants using Semaglutide for weight management, monitored over a 12-month period. A cloud-based, Wi-Fi-enabled remote weight management system collected and analyzed daily weight data from smart scales. The system's performance was evaluated during a period marked by a Semaglutide supply shortage.

RESULTS: Participants achieved a cumulative weight loss of 85 kg until a supply shortage-induced trough in October 2022. This was followed by a 6-8 week plateau and a subsequent 13 kg cumulative weight gain. The study demonstrated the feasibility of digitally monitoring weight without attrition over 12 months and highlighted the impact of anti-obesity drug (AOD) supply constraints on weight trajectories.

CONCLUSIONS: The remote weight management system proved important for improving clinic efficacy and identifying trends impacting obesity outcomes through electronic data monitoring. The system's potential in increasing medication compliance and enhancing overall clinical outcomes warrants further research, particularly in light of the challenges posed by AOD supply fluctuations.

RevDate: 2025-03-01
CmpDate: 2025-03-01

Fang C, Song K, Yan Z, et al (2025)

Monitoring phycocyanin in global inland waters by remote sensing: Progress and future developments.

Water research, 275:123176.

Cyanobacterial blooms are increasingly becoming major threats to global inland aquatic ecosystems. Phycocyanin (PC), a pigment unique to cyanobacteria, can provide an important reference for early warning of cyanobacterial blooms. New satellite technology and cloud computing platforms have greatly improved research on PC, with the average number of studies examining it having increased from 5 per year before 2018 to 17 per year thereafter. Many empirical, semi-empirical, semi-analytical, quasi-analytical algorithm (QAA) and machine learning (ML) algorithms have been developed based on the unique absorption characteristics of PC at approximately 620 nm. However, most models have been developed for individual lakes or clusters of them in specific regions, and their applicability at greater spatial scales requires evaluation. A review is therefore needed of the optical mechanisms, the principles, advantages, and disadvantages of different model types, the performance of mainstream sensors in PC remote sensing inversion, and global lacustrine PC datasets. We examine 230 articles from the Web of Science citation database between 1900 and 2024, summarize 57 of them that deal with construction of PC inversion models, and compile a list of 6526 PC sampling sites worldwide. This review proposes that the key to achieving global lacustrine PC remote sensing inversion and spatiotemporal evolution analysis is to make full use of existing multi-source remote sensing big data platforms and to deeply combine ML with optical mechanisms, classifying the target lakes in advance by optical characteristics, eutrophication level, water depth, climate type, altitude, and population density within the watershed.
Additionally, integrating data from multi-source satellite sensors, ground-based observations, and unmanned aerial vehicles will enable future development of global lacustrine PC remote estimation and contribute to achieving the inland water targets of the United Nations Sustainable Development Goals.

RevDate: 2025-01-30

Mennilli R, Mazza L, A Mura (2025)

Integrating Machine Learning for Predictive Maintenance on Resource-Constrained PLCs: A Feasibility Study.

Sensors (Basel, Switzerland), 25(2):.

This study investigates the potential of deploying a neural network model on an advanced programmable logic controller (PLC), specifically the Finder Opta™, for real-time inference within the predictive maintenance framework. In the context of Industry 4.0, edge computing aims to process data directly on local devices rather than relying on a cloud infrastructure. This approach minimizes latency, enhances data security, and reduces the bandwidth required for data transmission, making it ideal for industrial applications that demand immediate response times. Despite the limited memory and processing power inherent to many edge devices, this proof-of-concept demonstrates the suitability of the Finder Opta™ for such applications. Using acoustic data, a convolutional neural network (CNN) is deployed to infer the rotational speed of a mechanical test bench. The findings underscore the potential of the Finder Opta™ to support scalable and efficient predictive maintenance solutions, laying the groundwork for future research in real-time anomaly detection. By enabling machine learning capabilities on compact, resource-constrained hardware, this approach promises a cost-effective, adaptable solution for diverse industrial environments.

RevDate: 2025-01-28

Gu X, Duan Z, Ye G, et al (2025)

Virtual Node-Driven Cloud-Edge Collaborative Resource Scheduling for Surveillance with Visual Sensors.

Sensors (Basel, Switzerland), 25(2):.

For public security purposes, distributed surveillance systems are widely deployed in key areas. These systems comprise visual sensors, edge computing boxes, and cloud servers. Resource scheduling algorithms are critical to ensure such systems' robustness and efficiency. They balance workloads and need to meet real-time monitoring and emergency response requirements. Existing works have primarily focused on optimizing Quality of Service (QoS), latency, and energy consumption in edge computing under resource constraints. However, the issue of task congestion due to insufficient physical resources has rarely been investigated. In this paper, we tackle the challenges posed by large workloads and limited resources in the context of surveillance with visual sensors. First, we introduce the concept of virtual nodes for managing resource shortages, referred to as virtual node-driven resource scheduling. Then, we propose a convex-objective integer linear programming (ILP) model based on this concept and demonstrate its efficiency. Additionally, we propose three alternative virtual node-driven scheduling algorithms, based on a random algorithm, a genetic algorithm, and a heuristic algorithm, respectively. These algorithms serve as benchmarks for comparison with the proposed ILP model. Experimental results show that all the scheduling algorithms can effectively address the challenge of offloading multiple priority tasks under resource constraints. Furthermore, the ILP model shows the best scheduling performance among them.
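
The virtual-node idea (overflow tasks parked on a fictitious node rather than congesting physical ones) can be sketched with a greedy placement loop; the paper's ILP model is replaced here by a simple heuristic, and the task/capacity values are illustrative:

```python
def schedule(tasks, capacities):
    """Greedy sketch of virtual node-driven scheduling: place each task
    (highest priority, then largest demand, first) on the least-loaded
    physical node with room; overflow goes to a 'virtual' node, i.e. a
    deferral queue for congested workloads."""
    load = [0.0] * len(capacities)
    placement, virtual = {}, []
    for name, prio, demand in sorted(tasks, key=lambda t: (-t[1], -t[2])):
        candidates = [i for i in range(len(capacities))
                      if load[i] + demand <= capacities[i]]
        if candidates:
            best = min(candidates, key=lambda i: load[i])
            load[best] += demand
            placement[name] = best
        else:
            virtual.append(name)  # no physical capacity: virtual node
    return placement, virtual

tasks = [("cam1", 2, 4.0), ("cam2", 1, 3.0), ("cam3", 2, 5.0)]
placement, deferred = schedule(tasks, capacities=[5.0, 5.0])
# cam3 and cam1 fit on the two nodes; low-priority cam2 is deferred
```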

RevDate: 2025-01-30
CmpDate: 2025-01-24

Alsahfi T, Badshah A, Aboulola OI, et al (2025)

Optimizing healthcare big data performance through regional computing.

Scientific reports, 15(1):3129.

The healthcare sector is experiencing a digital transformation propelled by the Internet of Medical Things (IOMT), real-time patient monitoring, robotic surgery, Electronic Health Records (EHR), medical imaging, and wearable technologies. This proliferation of digital tools generates vast quantities of healthcare data. Efficient and timely analysis of this data is critical for enhancing patient outcomes and optimizing care delivery. Real-time processing of Healthcare Big Data (HBD) offers significant potential for improved diagnostics, continuous monitoring, and effective surgical interventions. However, conventional cloud-based processing systems face challenges due to the sheer volume and time-sensitive nature of this data. The migration of large datasets to centralized cloud infrastructures often results in latency, which impedes real-time applications. Furthermore, network congestion exacerbates these challenges, delaying access to vital insights necessary for informed decision-making. Such limitations hinder healthcare professionals from fully leveraging the capabilities of emerging technologies and big data analytics. To mitigate these issues, this paper proposes a Regional Computing (RC) paradigm for the management of HBD. The RC framework establishes strategically positioned regional servers capable of regionally collecting, processing, and storing medical data, thereby reducing dependence on centralized cloud resources, especially during peak usage periods. This innovative approach effectively addresses the constraints of traditional cloud processing, facilitating real-time data analysis at the regional level. Ultimately, it empowers healthcare providers with the timely information required to deliver data-driven, personalized care and optimize treatment strategies.

RevDate: 2025-01-28

Tang Y, Guo M, Li B, et al (2024)

Flexible Threshold Quantum Homomorphic Encryption on Quantum Networks.

Entropy (Basel, Switzerland), 27(1):.

Currently, most quantum homomorphic encryption (QHE) schemes only allow a single evaluator (server) to accomplish computation tasks on encrypted data shared by the data owner (user). In addition, the quantum computing capability of the evaluator and the scope of quantum computation it can perform are usually somewhat limited, which significantly reduces the flexibility of the scheme in quantum network environments. In this paper, we propose a novel (t,n)-threshold QHE (TQHE) network scheme based on the Shamir secret sharing protocol, which allows k (t ≤ k ≤ n) evaluators to collaboratively perform evaluation computation operations on each qubit within the shared encrypted sequence. Moreover, each evaluator, while possessing the ability to perform all single-qubit unitary operations, is able to perform arbitrary single-qubit gate computation tasks assigned by the data owner. We give a specific (3, 5)-threshold example, illustrating the scheme's correctness and feasibility, and simulate it on the IBM quantum computing cloud platform. Finally, the scheme is shown to be secure through analysis of the encryption/decryption private keys, the ciphertext quantum state sequences during transmission, the plaintext quantum state sequence, and the results of computations on the plaintext quantum state sequence.
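
The classical Shamir (t,n) machinery underlying the scheme can be sketched over a prime field; this is the standard secret-sharing protocol in its (3, 5) configuration, not the quantum scheme itself:

```python
import random

P = 2 ** 61 - 1  # prime modulus for the finite field

def share(secret, t=3, n=5):
    """Split a secret into n Shamir shares with threshold t: sample a
    random degree-(t-1) polynomial with constant term = secret and
    evaluate it at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field; any t or
    more shares recover the secret, fewer reveal nothing."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(123456789)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```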

RevDate: 2025-02-10
CmpDate: 2025-02-06

Kwon K, Lee YJ, Chung S, et al (2025)

Full Body-Worn Textile-Integrated Nanomaterials and Soft Electronics for Real-Time Continuous Motion Recognition Using Cloud Computing.

ACS applied materials & interfaces, 17(5):7977-7988.

Recognizing human body motions opens possibilities for real-time observation of users' daily activities, revolutionizing continuous human healthcare and rehabilitation. While some wearable sensors show their capabilities in detecting movements, no prior work could detect full-body motions with wireless devices. Here, we introduce a soft electronic textile-integrated system, including nanomaterials and flexible sensors, which enables real-time detection of various full-body movements using the combination of a wireless sensor suit and deep-learning-based cloud computing. This system includes an array of a nanomembrane, laser-induced graphene strain sensors, and flexible electronics integrated with textiles for wireless detection of different body motions and workouts. With multiple human subjects, we demonstrate the system's performance in real-time prediction of eight different activities, including resting, walking, running, squatting, walking upstairs, walking downstairs, push-ups, and jump roping, with an accuracy of 95.3%. The class of technologies, integrated as full body-worn textile electronics and interactive pairing with smartwatches and portable devices, can be used in real-world applications such as ambulatory health monitoring via conjunction with smartwatches and feedback-enabled customized rehabilitation workouts.

RevDate: 2025-02-12
CmpDate: 2025-02-12

Novais JJM, Melo BMD, Neves Junior AF, et al (2025)

Online analysis of Amazon's soils through reflectance spectroscopy and cloud computing can support policies and the sustainable development.

Journal of environmental management, 375:124155.

Analyzing soil in large and remote areas such as the Amazon River Basin (ARB) is unviable when performed entirely by wet labs using traditional methods, due to the scarcity of labs and the significant workforce requirements, which increase costs, time, and waste. Remote sensing, combined with cloud computing, enhances soil analysis by modeling soil from spectral data and overcoming the limitations of traditional methods. We verified the potential of soil spectroscopy in conjunction with cloud-based computing to predict soil organic carbon (SOC) and particle size (sand, silt, and clay) content in the Amazon region. To this end, we obtained physicochemical attribute values determined by wet-laboratory analyses of 211 soil samples from the ARB. These samples were submitted to Vis-NIR-SWIR spectroscopy in the laboratory. Two approaches modeled the soil attributes: M-I) cloud-computing-based, using the Brazilian Soil Spectral Service (BraSpecS) platform, and M-II) computing-based in an offline environment using the R programming language. Both methods used the Cubist machine learning algorithm for modeling. The coefficient of determination (R[2]), mean absolute error (MAE), and root mean squared error (RMSE) served as criteria for performance assessment. Predictions of the soil attributes were highly consistent between the measured values and those predicted by both approaches M-I and M-II. M-II outperformed M-I in predicting both particle size and SOC. For clay content, the offline model achieved an R[2] of 0.85, with an MAE of 86.16 g kg[-][1] and RMSE of 111.73 g kg[-][1], while the online model had an R[2] of 0.70, MAE of 111.73 g kg[-][1], and RMSE of 144.19 g kg[-][1]. For SOC, the offline model also showed better performance, with an R[2] of 0.81, MAE of 3.42 g kg[-][1], and RMSE of 4.57 g kg[-][1], compared to an R[2] of 0.72, MAE of 3.66 g kg[-][1], and RMSE of 5.53 g kg[-][1] for M-I. Both modeling methods demonstrated the power of reflectance spectroscopy and cloud computing for surveying soils in remote and large areas such as the ARB. The synergetic use of these techniques can support policies and sustainable development.
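
The comparison above rests on three standard goodness-of-fit measures. As an editorial illustration (not the authors' code), a minimal Python sketch of R², MAE, and RMSE:

```python
import math

def regression_metrics(obs, pred):
    """R^2, MAE, and RMSE, the criteria used to compare the M-I and M-II models."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # total sum of squares
    r2 = 1 - ss_res / ss_tot
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse
```

Usage would be `r2, mae, rmse = regression_metrics(measured_clay, predicted_clay)` with measured and predicted attribute values in matching order.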

RevDate: 2025-02-10
CmpDate: 2025-01-23

Seth M, Jalo H, Högstedt Å, et al (2025)

Technologies for Interoperable Internet of Medical Things Platforms to Manage Medical Emergencies in Home and Prehospital Care: Scoping Review.

Journal of medical Internet research, 27:e54470.

BACKGROUND: The aging global population and the rising prevalence of chronic disease and multimorbidity have strained health care systems, driving the need for expanded health care resources. Transitioning to home-based care (HBC) may offer a sustainable solution, supported by technological innovations such as Internet of Medical Things (IoMT) platforms. However, the full potential of IoMT platforms to streamline health care delivery is often limited by interoperability challenges that hinder communication and pose risks to patient safety. Gaining more knowledge about addressing higher levels of interoperability issues is essential to unlock the full potential of IoMT platforms.

OBJECTIVE: This scoping review aims to summarize best practices and technologies to overcome interoperability issues in IoMT platform development for prehospital care and HBC.

METHODS: This review adheres to a protocol published in 2022. Our literature search followed a dual search strategy and was conducted up to August 2023 across 6 electronic databases: IEEE Xplore, PubMed, Scopus, ACM Digital Library, Sage Journals, and ScienceDirect. After the title, abstract, and full-text screening performed by 2 reviewers, 158 articles were selected for inclusion. To answer our 2 research questions, we used 2 models defined in the protocol: a 6-level interoperability model and a 5-level IoMT reference model. Data extraction and synthesis were conducted through thematic analysis using Dedoose. The findings, including commonly used technologies and standards, are presented through narrative descriptions and graphical representations.

RESULTS: The primary technologies and standards reported for interoperable IoMT platforms in prehospital care and HBC included cloud computing (19/30, 63%), representational state transfer application programming interfaces (REST APIs; 17/30, 57%), Wi-Fi (17/30, 57%), gateways (15/30, 50%), and JSON (14/30, 47%). Message queuing telemetry transport (MQTT; 7/30, 23%) and WebSocket (7/30, 23%) were commonly used for real-time emergency alerts, while fog and edge computing were often combined with cloud computing for enhanced processing power and reduced latencies. By contrast, technologies associated with higher interoperability levels, such as blockchain (2/30, 7%), Kubernetes (3/30, 10%), and openEHR (2/30, 7%), were less frequently reported, indicating a focus on lower level of interoperability in most of the included studies (17/30, 57%).

CONCLUSIONS: IoMT platforms that support higher levels of interoperability have the potential to deliver personalized patient care, enhance overall patient experience, enable early disease detection, and minimize time delays. However, our findings highlight a prevailing emphasis on lower levels of interoperability within the IoMT research community. While blockchain, microservices, Docker, and openEHR are described as suitable solutions in the literature, these technologies seem to be seldom used in IoMT platforms for prehospital care and HBC. Recognizing the evident benefit of cross-domain interoperability, we advocate a stronger focus on collaborative initiatives and technologies to achieve higher levels of interoperability.

RR2-10.2196/40243.

RevDate: 2025-01-29

Ali A, Hussain B, Hissan RU, et al (2025)

Examining the landscape transformation and temperature dynamics in Pakistan.

Scientific reports, 15(1):2575.

This study aims to examine landscape transformation and temperature dynamics using multiple spectral indices. Temporal fluctuations in land surface temperature are strongly related to the morphological features of the area in which the temperature is determined, and these factors significantly affect the thermal properties of the surface. This research was conducted in Pakistan to identify vegetation cover, water bodies, impervious surfaces, and land surface temperature using decadal remote sensing data at four intervals during 1993-2023 in the Mardan division, Khyber Pakhtunkhwa. To analyze the landscape transformation and temperature dynamics, the study used spectral indices including Land Surface Temperature, the Normalized Difference Vegetation Index, the Normalized Difference Water Index, the Normalized Difference Built-up Index, and the Normalized Difference Bareness Index, employing the Google Earth Engine cloud computing platform. The results show land surface temperatures ranging from 15.58 °C to 43.71 °C during the study period. The largest fluctuations in land surface temperature were found in the cover and protective forests of the study area, especially in its northwestern and southeastern parts. These results highlight the complexity of the relationship between land surface temperature and the spectral indices, underscoring the need for such indices in landscape analysis.
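
The vegetation and water indices named above are standard band ratios. As an editorial illustration (the band values are hypothetical; NDWI follows the McFeeters green/NIR formulation), the per-pixel formulas are:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir) if (green + nir) else 0.0

# Hypothetical surface reflectances for a vegetated pixel:
v = ndvi(nir=0.5, red=0.1)   # high positive value indicates dense vegetation
w = ndwi(green=0.1, nir=0.5) # negative value indicates a non-water surface
```

The built-up and bareness indices used in the study follow the same normalized-difference pattern with different band pairs.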

RevDate: 2025-01-29

Soman VK, V Natarajan (2025)

Crayfish optimization based pixel selection using block scrambling based encryption for secure cloud computing environment.

Scientific reports, 15(1):2406.

Cloud Computing (CC) is a fast-emerging field that enables consumers to access network resources on demand. However, ensuring a high level of security in CC environments remains a significant challenge. Traditional encryption algorithms are often inadequate for protecting confidential data, especially digital images, from complex cyberattacks. The increasing reliance on cloud storage and transmission of digital images has made it essential to develop strong security measures to prevent unauthorized access and guarantee the integrity of sensitive information. This paper presents a novel Crayfish Optimization based Pixel Selection using Block Scrambling Based Encryption Approach (CFOPS-BSBEA) technique that offers a unique solution to improve security in cloud environments. By integrating steganography and encryption, the CFOPS-BSBEA technique provides a robust approach to securing digital images. Our key contribution lies in the development of a three-stage process that optimally selects pixels for steganography, encodes secret images using Block Scrambling Based Encryption, and embeds them in cover images. The CFOPS-BSBEA technique leverages the strengths of both steganography and encryption to provide a secure and effective approach to digital image protection. The Crayfish Optimization algorithm is used to select the most suitable pixels for steganography, ensuring that the secret image is embedded in a way that minimizes detection. The Block Scrambling Based Encryption algorithm is then used to encode the secret image, providing an additional layer of security. Experimental results show that the CFOPS-BSBEA technique outperforms existing models in terms of security performance. The proposed approach has significant implications for the secure storage and transmission of digital images in cloud environments, and its originality and novelty make it an attractive contribution to the field. Furthermore, the CFOPS-BSBEA technique has the potential to inspire further research into secure cloud computing environments, paving the way for the development of more robust and efficient security measures.
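
As an editorial sketch of the embedding step only (the paper's Crayfish Optimization pixel selection and Block Scrambling encryption are not reproduced; pixels are chosen sequentially here purely for illustration), least-significant-bit embedding looks like this:

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least significant bits of a pixel list.
    The CFOPS-BSBEA scheme would choose the pixel order via Crayfish
    Optimization; sequential order here is an illustrative simplification."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # clear LSB, then set it to the secret bit
    return out

def extract_bits(pixels, n):
    """Read back the first n embedded bits."""
    return ''.join(str(p & 1) for p in pixels[:n])

stego = embed_bits([200, 101, 54, 33], '1010')
assert extract_bits(stego, 4) == '1010'  # pixel values change by at most 1
```

In the full scheme the embedded payload would itself be the block-scrambled ciphertext, so recovery requires both the pixel-selection key and the decryption key.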

RevDate: 2025-01-17

Kari Balakrishnan A, Chellaperumal A, Lakshmanan S, et al (2025)

A novel efficient data storage and data auditing in cloud environment using enhanced child drawing development optimization strategy.

Network (Bristol, England) [Epub ahead of print].

Optimization of the cloud-based data structures is carried out using the Adaptive Level and Skill Rate-based Child Drawing Development Optimization algorithm (ALSR-CDDO). The overall computing and communication cost is also reduced by optimally selecting these data structures with the ALSR-CDDO algorithm. Data is stored on the cloud platform using the Divide and Conquer Table (D&CT). The location table and the information table are generated using the D&CT method. Details such as the file information, file ID, version number, and user ID are all present in the information table. Every time data is deleted or updated, its version number is modified. Whenever an update takes place using D&CT, the location table is also upgraded. Information regarding the location of a file in the Cloud Service Provider (CSP) is given in the location table. Once the data is stored in the CSP, auditing is performed on the stored data. Both dynamic and batch auditing are carried out, even if the data is updated dynamically in the CSP. The security offered by the executed scheme is verified by contrasting it with other existing auditing schemes.
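
A minimal editorial sketch of the D&CT bookkeeping described above (all names are hypothetical, not the paper's implementation): an information table holding file metadata and a version number bumped on every update, and a location table upgraded alongside it:

```python
# Hypothetical D&CT-style bookkeeping: information table (metadata + version)
# and location table (file -> position at the Cloud Service Provider).
information_table = {}
location_table = {}

def store(file_id, user_id, file_info, csp_location):
    """Initial storage: create both table entries at version 1."""
    information_table[file_id] = {"user_id": user_id, "info": file_info, "version": 1}
    location_table[file_id] = csp_location

def update(file_id, file_info, csp_location=None):
    """Dynamic update: modify the version number and, if the file moved,
    upgrade the location table as well."""
    record = information_table[file_id]
    record["info"] = file_info
    record["version"] += 1
    if csp_location is not None:
        location_table[file_id] = csp_location
```

An auditor can then check that the version it last signed matches the current entry before verifying the stored blocks.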

RevDate: 2025-01-29

Yan K, Yu X, Liu J, et al (2025)

HiQ-FPAR: A High-Quality and Value-added MODIS Global FPAR Product from 2000 to 2023.

Scientific data, 12(1):72.

The Fraction of Absorbed Photosynthetically Active Radiation (FPAR) is essential for assessing vegetation's photosynthetic efficiency and ecosystem energy balance. While the MODIS FPAR product provides valuable global data, its reliability is compromised by noise, particularly under poor observation conditions like cloud cover. To solve this problem, we developed the Spatio-Temporal Information Composition Algorithm (STICA), which enhances MODIS FPAR by integrating quality control, spatio-temporal correlations, and original FPAR values, resulting in the High-Quality FPAR (HiQ-FPAR) product. HiQ-FPAR shows superior accuracy compared to MODIS FPAR and Sensor-Independent FPAR (SI-FPAR), with RMSE values of 0.130, 0.154, and 0.146, respectively, and R[2] values of 0.722, 0.630, and 0.717. Additionally, HiQ-FPAR exhibits smoother time series in 52.1% of global areas, compared to 44.2% for MODIS. Available on Google Earth Engine and Zenodo, the HiQ-FPAR dataset offers 500 m and 5 km resolution at an 8-day interval from 2000 to 2023, supporting a wide range of FPAR applications.

RevDate: 2025-01-14

Rushton CE, Tate JE, Å Sjödin (2025)

A modern, flexible cloud-based database and computing service for real-time analysis of vehicle emissions data.

Urban informatics, 4(1):1.

In response to the demand for advanced tools in environmental monitoring and policy formulation, this work leverages modern software and big data technologies to enhance novel road transport emissions research. This is achieved by making data and analysis tools more widely available and customisable so users can tailor outputs to their requirements. Through the novel combination of vehicle emissions remote sensing and cloud computing methodologies, these developments aim to reduce the barriers to understanding real-driving emissions (RDE) across urban environments. The platform demonstrates the practical application of modern cloud-computing resources in overcoming the complex demands of air quality management and policy monitoring. This paper shows the potential of modern technological solutions to improve the accessibility of environmental data for policy-making and the broader pursuit of sustainable urban development. The web-application is publicly and freely available at https://cares-public-app.azurewebsites.net.

RevDate: 2025-01-13

Ahmed AA, Farhan K, Ninggal MIH, et al (2024)

Retrieving and Identifying Remnants of Artefacts on Local Devices Using Sync.com Cloud.

Sensors (Basel, Switzerland), 25(1):.

Most current research in cloud forensics focuses on tackling the challenges forensic investigators encounter in identifying and recovering artefacts from cloud devices. These challenges arise from the diverse array of cloud service providers, as each has its own distinct rules, guidelines, and requirements. This research proposes an investigation technique for identifying and locating data remnants in two main stages: artefact collection and evidence identification. In the artefact collection stage, the proposed technique determines the location of the artefacts in cloud storage and collects them for further investigation in the next stage. In the evidence identification stage, the collected artefacts are examined to identify the evidence relevant to the cybercrime under investigation. These two stages form an integrated process that mitigates the difficulty of locating the artefacts and reduces the time needed to identify the relevant evidence. The proposed technique is implemented and tested by applying a forensic investigation algorithm to Sync.com cloud storage using the Microsoft Windows 10 operating system.

RevDate: 2025-01-29
CmpDate: 2025-01-29

Hoyer I, Utz A, Hoog Antink C, et al (2025)

tinyHLS: a novel open source high level synthesis tool targeting hardware accelerators for artificial neural network inference.

Physiological measurement, 13(1):.

Objective. In recent years, wearable devices such as smartwatches and smart patches have revolutionized biosignal acquisition and analysis, particularly for monitoring electrocardiography (ECG). However, the limited power supply of these devices often precludes real-time data analysis on the patch itself. Approach. This paper introduces a novel Python package, tinyHLS (High Level Synthesis), designed to address these challenges by converting Python-based AI models into platform-independent hardware description language code accelerators. Specifically designed for convolutional neural networks, tinyHLS seamlessly integrates into the AI developer's workflow in Python TensorFlow Keras. Our methodology leverages a template-based hardware compiler that ensures flexibility, efficiency, and ease of use. This work is the first publication of tinyHLS, featuring templates for several neural network layers, such as dense, convolution, max pooling, and global average pooling. In the first version, the rectified linear unit is supported as the activation function. It targets one-dimensional data, with a particular focus on time series. Main results. The generated accelerators are validated by detecting atrial fibrillation in ECG data, demonstrating significant improvements in processing speed (62-fold) and energy efficiency (4.5-fold). Code quality and synthesizability are ensured by validating the outputs with commercial ASIC design tools. Significance. Importantly, tinyHLS is open-source and does not rely on commercial tools, making it a versatile solution for both academic and commercial applications. The paper also discusses integration with an open-source RISC-V core and potential future enhancements of tinyHLS, including its application in edge servers and cloud computing. The source code is available on GitHub: https://github.com/Fraunhofer-IMS/tinyHLS.
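
As an editorial reference (plain Python, not tinyHLS's generated HDL or its actual API), the layer kernels such an accelerator implements for one-dimensional data (valid-mode convolution, ReLU, global average pooling) can be written as:

```python
def conv1d(x, kernel, bias=0.0):
    """Valid-mode 1-D convolution (cross-correlation, as in Keras Conv1D)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(x) - k + 1)]

def relu(x):
    """Rectified linear unit, the activation supported in tinyHLS's first version."""
    return [max(0.0, v) for v in x]

def global_avg_pool(x):
    """Collapse a feature sequence to a single average value."""
    return sum(x) / len(x)

# A toy single-channel pipeline over a 5-sample time series:
y = global_avg_pool(relu(conv1d([1.0, -2.0, 3.0, -4.0, 5.0], [0.5, 0.5])))
```

A hardware template fixes the loop bounds and word widths of exactly these loops, which is what makes the per-layer template approach tractable.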

RevDate: 2025-01-27
CmpDate: 2025-01-27

Scales C, Bai J, Murakami D, et al (2025)

Internal validation of a convolutional neural network pipeline for assessing meibomian gland structure from meibography.

Optometry and vision science : official publication of the American Academy of Optometry, 102(1):28-36.

SIGNIFICANCE: Optimal meibography utilization and interpretation are hindered by poor lid presentation, blurry images, or image artifacts, and by the challenges of applying clinical grading scales. These results, using the largest image dataset analyzed to date, demonstrate the development of algorithms that provide standardized, real-time inference addressing all of these limitations.

PURPOSE: This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.

METHODS: A total of 143,476 images were collected from sites across North America. Ophthalmologist and optometrist experts established ground-truth image quality and quantification (i.e., degree of gland absence). Annotated images were allocated into training, validation, and test sets. Convolutional neural networks within Google Cloud VertexAI trained three locally deployable or edge-based predictive models: image quality detection, over-flip detection, and gland absence detection. The algorithms were combined into an algorithmic pipeline onboard a LipiScan Dynamic Meibomian Imager to provide real-time clinical inference for new images. Performance metrics were generated for each algorithm in the pipeline onboard the LipiScan from naive image test sets.

RESULTS: Individual model performance metrics included the following: weighted average precision (image quality detection: 0.81, over-flip detection: 0.88, gland absence detection: 0.84), weighted average recall (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), weighted average F1 score (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.81), overall accuracy (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), Cohen κ (image quality detection: 0.60, over-flip detection: 0.62, and gland absence detection: 0.71), Kendall τb (image quality detection: 0.61, p<0.001; over-flip detection: 0.63, p<0.001; and gland absence detection: 0.67, p<0.001), and Matthews coefficient (image quality detection: 0.61, over-flip detection: 0.63, and gland absence detection: 0.62). Area under the precision-recall curve (image quality detection: 0.87, over-flip detection: 0.92, gland absence detection: 0.89) and area under the receiver operating characteristic curve (image quality detection: 0.88, over-flip detection: 0.91, gland absence detection: 0.93) were calculated across a common set of thresholds ranging from 0 to 1.
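
For reference, the Cohen κ reported above for each model measures agreement beyond chance and can be computed from a confusion matrix as follows (editorial sketch, not the study's pipeline; the example matrix is hypothetical but chosen to yield κ = 0.6):

```python
def cohen_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows = ground truth,
    columns = predictions): (observed agreement - chance agreement) / (1 - chance)."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n            # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)               # chance agreement from
             for i in range(len(cm))) / n ** 2                # row/column marginals
    return (po - pe) / (1 - pe)

# Hypothetical 2-class matrix: 80% raw agreement, 50% expected by chance.
kappa = cohen_kappa([[45, 5], [15, 35]])
```

This illustrates why κ (0.60 to 0.71 here) sits below raw accuracy (0.80 to 0.87): chance agreement is discounted.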

CONCLUSIONS: Comparison of predictions from each model to expert panel ground-truth demonstrated strong association and moderate to substantial agreement. The findings and performance metrics show that the pipeline of algorithms provides standardized, real-time inference/prediction of meibomian gland absence.

RevDate: 2025-01-13
CmpDate: 2025-01-10

Lu C, Zhou J, Q Zou (2025)

An optimized approach for container deployment driven by a two-stage load balancing mechanism.

PloS one, 20(1):e0317039.

Lightweight container technology has emerged as a fundamental component of cloud-native computing, with the deployment of containers and the balancing of loads on virtual machines representing significant challenges. This paper presents an optimization strategy for container deployment that consists of two stages: coarse-grained and fine-grained load balancing. In the initial stage, a greedy algorithm is employed for coarse-grained deployment, facilitating the distribution of container services across virtual machines in a balanced manner based on resource requests. The subsequent stage utilizes a genetic algorithm for fine-grained resource allocation, ensuring an equitable distribution of resources to each container service on a single virtual machine. This two-stage optimization enhances load balancing and resource utilization throughout the system. Empirical results indicate that this approach is more efficient and adaptable in comparison to the Grey Wolf Optimization (GWO) Algorithm, the Simulated Annealing (SA) Algorithm, and the GWO-SA Algorithm, significantly improving both resource utilization and load balancing performance on virtual machines.
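
As an editorial sketch of the coarse-grained stage only (the paper's exact greedy rule and the fine-grained genetic stage are not reproduced; assigning each container to the currently least-loaded VM is one common greedy choice):

```python
import heapq

def greedy_place(requests, n_vms):
    """Coarse-grained deployment sketch: place containers, largest resource
    request first, on whichever VM currently carries the least load."""
    heap = [(0.0, vm) for vm in range(n_vms)]  # (current load, vm id)
    heapq.heapify(heap)
    placement = {}
    for cid, req in sorted(requests.items(), key=lambda kv: -kv[1]):
        load, vm = heapq.heappop(heap)         # least-loaded VM
        placement[cid] = vm
        heapq.heappush(heap, (load + req, vm))
    return placement
```

A fine-grained stage (a genetic algorithm in the paper) would then rebalance resource shares among the containers placed on each individual VM.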

RevDate: 2025-01-12
CmpDate: 2025-01-09

Kuang Y, Cao D, Jiang D, et al (2024)

CPhaMAS: The first pharmacokinetic analysis cloud platform developed by China.

Zhong nan da xue xue bao. Yi xue ban = Journal of Central South University. Medical sciences, 49(8):1290-1300.

OBJECTIVES: Software for pharmacological modeling and statistical analysis is essential for drug development and individualized treatment modeling. This study aims to develop a pharmacokinetic analysis cloud platform that leverages cloud-based benefits, offering a user-friendly interface with a smoother learning curve.

METHODS: The platform was built using Rails as the framework, developed in Julia language, and employs PostgreSQL 14 database, Redis cache, and Sidekiq for asynchronous task management. Four commonly used modules in clinical pharmacology research were developed: Non-compartmental analysis, bioequivalence/bioavailability analysis, compartment model analysis, and population pharmacokinetics modeling. The platform ensured comprehensive data security and traceability through multiple safeguards, including data encryption, access control, transmission encryption, redundant backups, and log management. The platform underwent basic function, performance, reliability, usability, and scalability testing, along with practical case studies.

RESULTS: The CPhaMAS cloud platform successfully implemented the 4 module functionalities. The platform provides list-based navigation for users, featuring checkbox-style interactions. Through cloud computing, it allows direct online data analysis, saving local storage and minimizing performance requirements. Modeling and visualization do not require programming knowledge. Basic functionality achieved 100% completion, with an average annual uptime of over 99%. Server response time was between 200 and 500 ms, and average CPU usage was maintained below 30%. In a practical case study, cefotaxime sodium/tazobactam sodium injection (6:1 ratio) displayed near-linear pharmacokinetics within a dose range of 1.0 to 4.0 g, with no significant effect of tazobactam on the pharmacokinetic parameters of cefotaxime, validating the platform's usability and reliability.

CONCLUSIONS: CPhaMAS provides an integrated modeling and statistical tool for educators, researchers, and industrial professionals, enabling non-compartmental analysis, bioequivalence/bioavailability analysis, compartmental model building, and population pharmacokinetic modeling and simulation.
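
As an editorial illustration of the simplest quantity in non-compartmental analysis (not CPhaMAS code), the area under the concentration-time curve to the last sampling time by the linear trapezoidal rule:

```python
def auc_trapezoid(times, conc):
    """AUC(0 - t_last) by the linear trapezoidal rule: sum of trapezoid areas
    between successive (time, concentration) samples."""
    return sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Hypothetical sampling: concentrations 0, 10, 5 at 0, 1, 2 hours.
auc = auc_trapezoid([0.0, 1.0, 2.0], [0.0, 10.0, 5.0])
```

Near-linear pharmacokinetics, as reported for the 6:1 case study, means this AUC scales roughly in proportion to dose across the tested range.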

RevDate: 2025-02-19
CmpDate: 2025-02-19

Peng W, Hong Y, Chen Y, et al (2025)

AIScholar: An OpenFaaS-enhanced cloud platform for intelligent medical data analytics.

Computers in biology and medicine, 186:109648.

This paper presents AIScholar, an intelligent research cloud platform developed based on artificial intelligence analysis methods and the OpenFaaS serverless framework, designed for intelligent analysis of clinical medical data with high scalability. AIScholar simplifies the complex analysis process by encapsulating a wide range of medical data analytics methods into a series of customizable cloud tools that emphasize ease of use and expandability, within OpenFaaS's serverless computing framework. As a multifaceted auxiliary tool in medical scientific exploration, AIScholar accelerates the deployment of computational resources, enabling clinicians and scientific personnel to derive new insights from clinical medical data with unprecedented efficiency. A case study focusing on breast cancer clinical data underscores the practicality that AIScholar offers to clinicians for diagnosis and decision-making. Insights generated by the platform have a direct impact on the physicians' ability to identify and address clinical issues, signifying its real-world application significance in clinical practice. Consequently, AIScholar makes a meaningful impact on medical research and clinical practice by providing powerful analytical tools to clinicians and scientific personnel, thereby promoting significant advancements in the analysis of clinical medical data.

RevDate: 2025-02-11
CmpDate: 2025-01-08

Nolasco M, M Balzarini (2025)

Assessment of temporal aggregation of Sentinel-2 images on seasonal land cover mapping and its impact on landscape metrics.

Environmental monitoring and assessment, 197(2):142.

Landscape metrics (LM) play a crucial role in fields such as urban planning, ecology, and environmental research, providing insights into the ecological and functional dynamics of ecosystems. However, in dynamic systems, generating thematic maps for LM analysis poses challenges due to the substantial data volume required and issues such as cloud cover interruptions. The aim of this study was to compare the accuracy of land cover maps produced by three temporal aggregation methods: median reflectance, maximum normalised difference vegetation index (NDVI), and a two-date image stack using Sentinel-2 (S2) and then to analyse their implications for LM calculation. The Google Earth Engine platform facilitated data filtering, image selection, and aggregation. A random forest algorithm was employed to classify five land cover classes across ten sites, with classification accuracy assessed using global measurements and the Kappa index. LM were then quantified. The analysis revealed that S2 data provided a high-quality, cloud-free dataset suitable for analysis, ensuring a minimum of 25 cloud-free pixels over the study period. The two-date and median methods exhibited superior land cover classification accuracy compared to the max NDVI method. In particular, the two-date method resulted in lower fragmentation-heterogeneity and complexity metrics in the resulting maps compared to the median and max NDVI methods. Nevertheless, the median method holds promise for integration into operational land cover mapping programmes, particularly for larger study areas exceeding the width of S2 swath coverage. We find patch density combined with conditional entropy to be particularly useful metrics for assessing fragmentation and configuration complexity.
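
As an editorial sketch of the median temporal aggregation compared above (not the study's Earth Engine code; a "stack" here is simply a list of equal-length pixel lists, one per acquisition date):

```python
import statistics

def median_composite(stack):
    """Per-pixel median across a temporal stack of co-registered images.
    Cloud-contaminated outlier values at a pixel are suppressed because the
    median ignores extremes, which is why median compositing resists clouds."""
    return [statistics.median(pixels) for pixels in zip(*stack)]

# Three dates, two pixels each; the composite takes each pixel's middle value.
composite = median_composite([[1, 5], [3, 7], [2, 9]])
```

A max-NDVI composite would instead keep, per pixel, the band values from whichever date maximized NDVI, which the study found less accurate for classification.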

RevDate: 2025-01-10
CmpDate: 2025-01-08

Saeed A, A Khan M, Akram U, et al (2025)

Deep learning based approaches for intelligent industrial machinery health management and fault diagnosis in resource-constrained environments.

Scientific reports, 15(1):1114.

Industry 4.0 represents the fourth industrial revolution, which is characterized by the incorporation of digital technologies, the Internet of Things (IoT), artificial intelligence, big data, and other advanced technologies into industrial processes. Industrial Machinery Health Management (IMHM) is a crucial element, based on the Industrial Internet of Things (IIoT), which focuses on monitoring the health and condition of industrial machinery. The academic community has focused on various aspects of IMHM, such as prognostic maintenance, condition monitoring, estimation of remaining useful life (RUL), intelligent fault diagnosis (IFD), and architectures based on edge computing. Each of these categories holds its own significance in the context of industrial processes. In this survey, we specifically examine the research on RUL prediction, edge-based architectures, and intelligent fault diagnosis, with a primary focus on the domain of intelligent fault diagnosis. The importance of IFD methods in ensuring the smooth execution of industrial processes has become increasingly evident. However, most methods are formulated under the assumption of complete, balanced, and abundant data, which often does not align with real-world engineering scenarios. The difficulties linked to these classifications of IMHM have received noteworthy attention from the research community, leading to a substantial number of published papers on the topic. While there are existing comprehensive reviews that address major challenges and limitations in this field, there is still a gap in thoroughly investigating research perspectives across RUL prediction, edge-based architectures, and complete intelligent fault diagnosis processes. To fill this gap, we undertake a comprehensive survey that reviews and discusses research achievements in this domain, specifically focusing on IFD. Initially, we classify the existing IFD methods into three distinct perspectives: the method of processing data, which aims to optimize inputs for the intelligent fault diagnosis model and mitigate limitations in the training sample set; the method of constructing the model, which involves designing the structure and features of the model to enhance its resilience to challenges; and the method of optimizing training, which focuses on refining the training process for intelligent fault diagnosis models and emphasizes the importance of ideal data in the training process. Subsequently, the survey covers techniques related to RUL prediction and edge-cloud architectures for resource-constrained environments. Finally, this survey consolidates the outlook on relevant issues in IMHM, explores potential solutions, and offers practical recommendations for further consideration.


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences, at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.


This is a must-read book for anyone with an interest in invasion biology. The full title of the book lays out the author's premise — The New Wild: Why Invasive Species Will Be Nature's Salvation. Not only is species movement not bad for ecosystems, it is the way that ecosystems respond to perturbation — it is the way ecosystems heal. Even if you are one of those who are absolutely convinced that invasive species are actually "a blight, pollution, an epidemic, or a cancer on nature", you should read this book to clarify your own thinking. True scientific understanding never comes from just interacting with those with whom you already agree. R. Robbins

963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)