Bibliography on: Cloud Computing



ESP: PubMed Auto Bibliography. Query run: 25 Apr 2025 at 01:42. Hits: 3983.

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
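
To make the cost-overrun risk concrete, the small Python sketch below shows how a modest baseline bill can be dwarfed by a burst of autoscaled capacity under pay-as-you-go pricing. All rates and usage figures are purely hypothetical, not any provider's actual pricing.

    # Illustrative only: hypothetical rates and usage, not real provider pricing.
    on_demand_rate = 0.10   # $/vCPU-hour (assumed)
    baseline_vcpus = 8
    hours_per_month = 730
    burst_vcpus = 64        # autoscaled peak capacity (assumed)
    burst_hours = 50        # unexpected peak-demand hours in the month

    baseline_cost = baseline_vcpus * hours_per_month * on_demand_rate
    burst_cost = burst_vcpus * burst_hours * on_demand_rate
    print(f"baseline ${baseline_cost:.2f} + burst ${burst_cost:.2f} "
          f"= ${baseline_cost + burst_cost:.2f}")   # $584.00 + $320.00 = $904.00

Fifty hours of burst capacity adds more than half again to the month's bill, which is exactly the kind of surprise a fixed grant budget cannot easily absorb.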

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
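
For readers who want to reproduce or extend this bibliography, a minimal sketch using Biopython's wrapper for NCBI's E-utilities follows. The email address is a placeholder you must replace (NCBI requires one), and retmax is an arbitrary choice.

    from Bio import Entrez

    Entrez.email = "you@example.org"  # placeholder; set your own address
    query = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
             'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
             'NOT pmcbook NOT ispreviousversion')
    handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
    record = Entrez.read(handle)
    print(record["Count"], record["IdList"][:5])  # total hits, first 5 PMIDs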

Citations: The Papers (from PubMed®)


RevDate: 2025-04-23

Xiao J, Wu J, Liu D, et al (2025)

Improved Pine Wood Nematode Disease Diagnosis System Based on Deep Learning.

Plant disease [Epub ahead of print].

Pine wilt disease caused by the pine wood nematode, Bursaphelenchus xylophilus, has profound implications for global forestry ecology. Conventional PCR methods require long operating times and are complicated to perform. The need for rapid and effective detection methodologies to curtail its dissemination and reduce pine felling has become more apparent. This study initially proposed the use of fluorescence recognition for the detection of pine wood nematode disease, accompanied by the development of a dedicated fluorescence detection system based on deep learning. This system possesses the capability to perform excitation, detection, as well as data analysis and transmission of test samples. In exploring fluorescence recognition methodologies, the efficacy of five conventional machine learning algorithms was juxtaposed with that of You Only Look Once version 5 (YOLOv5) and You Only Look Once version 10 (YOLOv10), both in the pre- and post-image processing stages. Moreover, enhancements were introduced to the YOLOv5 model. The network's aptitude for discerning features across varied scales and resolutions was bolstered through the integration of Res2Net. Meanwhile, a SimAM attention mechanism was incorporated into the backbone network, and the original PANet structure was replaced by the Bi-FPN within the Head network to amplify feature fusion capabilities. The enhanced YOLOv5 model demonstrates significant improvements, particularly in the recognition of large-size images, achieving an accuracy improvement of 39.98%. The research presents a novel detection system for pine nematode detection, capable of detecting samples with DNA concentrations as low as 1 fg/μl within 20 min. This system integrates detection instruments, laptops, cloud computing, and smartphones, holding tremendous potential for field application.
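
The SimAM module mentioned above is parameter-free and compact enough to sketch. Below is a minimal PyTorch rendition of the published SimAM formulation; the regularization constant e_lambda and the tensor shapes are illustrative assumptions, and the Res2Net and Bi-FPN changes are not shown.

    import torch
    import torch.nn as nn

    class SimAM(nn.Module):
        """Parameter-free SimAM attention: weights each activation by an
        energy term derived from its squared deviation from the channel mean."""
        def __init__(self, e_lambda: float = 1e-4):  # assumed constant
            super().__init__()
            self.e_lambda = e_lambda

        def forward(self, x):                        # x: (B, C, H, W)
            n = x.shape[2] * x.shape[3] - 1
            d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
            v = d.sum(dim=(2, 3), keepdim=True) / n  # per-channel variance
            e_inv = d / (4 * (v + self.e_lambda)) + 0.5
            return x * torch.sigmoid(e_inv)          # reweighted feature map

    print(SimAM()(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])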

RevDate: 2025-04-23

Yin Y, Liu B, Zhang Y, et al (2025)

Wafer-Scale Nanoprinting of 3D Interconnects beyond Cu.

ACS nano [Epub ahead of print].

Cloud operations and services, as well as many other modern computing tasks, require hardware that is run by very densely packed integrated circuits (ICs) and heterogeneous ICs. The performance of these ICs is determined by the stability and properties of the interconnects between the semiconductor devices and ICs. Although some ICs with 3D interconnects are commercially available, there has been limited progress on 3D printing utilizing emerging nanomaterials. Moreover, laying out reliable 3D metal interconnects in ICs with the appropriate electrical and physical properties remains challenging. Here, we propose high-throughput 3D interconnection with nanoscale precision by leveraging lines of forces. We successfully nanoprinted multiscale and multilevel Au, Ir, and Ru 3D interconnects on the wafer scale in non-vacuum conditions using a pulsed electric field. The ON phase of the pulsed field initiates in situ printing of nanoparticle (NP) deposition into interconnects, whereas the OFF phase allows the gas flow to evenly distribute the NPs over an entire wafer. Characterization of the 3D interconnects confirms their excellent uniformity, electrical properties, and free-form geometries, far exceeding those of any 3D-printed interconnects. Importantly, their measured resistances approach the theoretical values calculated here. The results demonstrate that 3D nanoprinting can be used to fabricate thinner and faster interconnects, which can enhance the performance of dense ICs; therefore, 3D nanoprinting can complement lithography and resolve the challenges encountered in the fabrication of critical device features.

RevDate: 2025-04-23

Pérez-Sanpablo AI, Quinzaños-Fresnedo J, Gutiérrez-Martínez J, et al (2025)

Transforming Medical Imaging: The Role of Artificial Intelligence Integration in PACS for Enhanced Diagnostic Accuracy and Workflow Efficiency.

Current medical imaging pii:CMIR-EPUB-147831 [Epub ahead of print].

INTRODUCTION: To examine the integration of artificial intelligence (AI) into Picture Archiving and Communication Systems (PACS) and assess its impact on medical imaging, diagnostic workflows, and patient outcomes. This review explores the technological evolution, key advancements, and challenges associated with AI-enhanced PACS in healthcare settings.

METHODS: A comprehensive literature search was conducted in PubMed, Scopus, and Web of Science databases, covering articles from January 2000 to October 2024. Search terms included "artificial intelligence," "machine learning," "deep learning," and "PACS," combined with keywords related to diagnostic accuracy and workflow optimization. Articles were selected based on predefined inclusion and exclusion criteria, focusing on peer-reviewed studies that discussed AI applications in PACS, innovations in medical imaging, and workflow improvements. A total of 183 studies met the inclusion criteria, comprising original research, systematic reviews, and meta-analyses.

RESULTS: AI integration in PACS has significantly enhanced diagnostic accuracy, achieving improvements of up to 93.2% in some imaging modalities, such as early tumor detection and anomaly identification. Workflow efficiency has been transformed, with diagnostic times reduced by up to 90% for critical conditions like intracranial hemorrhages. Convolutional neural networks (CNNs) have demonstrated exceptional performance in image segmentation, achieving up to 94% accuracy, and in motion artifact correction, further enhancing diagnostic precision. Natural language processing (NLP) tools have expedited radiology workflows, reducing reporting times by 30-50% and improving consistency in report generation. Cloud-based solutions have also improved accessibility, enabling real-time collaboration and remote diagnostics. However, challenges in data privacy, regulatory compliance, and interoperability persist, emphasizing the need for standardized frameworks and robust security protocols.

CONCLUSIONS: The integration of AI into PACS represents a pivotal transformation in medical imaging, offering improved diagnostic workflows and potential for personalized patient care. Addressing existing challenges and enhancing interoperability will be essential for maximizing the benefits of AI-powered PACS in healthcare.

RevDate: 2025-04-22

Rezaee K, Nazerian A, Ghayoumi Zadeh H, et al (2025)

Smart IoT-driven biosensors for EEG-based driving fatigue detection: A CNN-XGBoost model enhancing healthcare quality.

BioImpacts : BI, 15:30586.

INTRODUCTION: Drowsy driving is a significant contributor to accidents, accounting for 35 to 45% of all crashes. Implementation of an internet of things (IoT) system capable of alerting fatigued drivers has the potential to substantially reduce road fatalities and associated issues. Often referred to as the internet of medical things (IoMT), this system leverages a combination of biosensors, actuators, detectors, cloud-based and edge computing, machine intelligence, and communication networks to deliver reliable performance and enhance quality of life in smart societies.

METHODS: Electroencephalogram (EEG) signals offer potential insights into fatigue detection. However, accurately identifying fatigue from brain signals is challenging due to inter-individual EEG variability and the difficulty of collecting sufficient data during periods of exhaustion. To address these challenges, a novel evolutionary optimization method combining convolutional neural networks (CNNs) and XGBoost, termed CNN-XGBoost Evolutionary Learning, was proposed to improve fatigue identification accuracy. The research explored various subbands of decomposed EEG data and introduced an innovative approach of transforming EEG recordings into RGB scalograms. These scalogram images were processed using a 2D Convolutional Neural Network (2DCNN) to extract essential features, which were subsequently fed into a dense layer for training.
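
As a rough illustration of the scalogram step, the sketch below converts one EEG channel into an RGB image via a continuous wavelet transform. The Morlet wavelet, scale range, sampling rate, and colormap are assumptions, since the paper's exact settings are not given in the abstract.

    import numpy as np
    import pywt
    from matplotlib import cm

    def eeg_to_rgb_scalogram(signal, fs=200, scales=np.arange(1, 65)):
        """CWT of a single EEG channel -> RGB image (H, W, 3) in [0, 1]."""
        coef, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
        mag = np.abs(coef)
        mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)  # normalize
        return cm.viridis(mag)[..., :3]   # colormap to RGB, drop alpha

    rgb = eeg_to_rgb_scalogram(np.random.randn(1000))  # stand-in signal
    print(rgb.shape)  # (64, 1000, 3), ready to feed a 2D CNN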

RESULTS: The resulting model achieved a noteworthy accuracy of 99.80% on a substantial driver fatigue dataset, surpassing existing methods.

CONCLUSION: By integrating this approach into an IoT framework, researchers effectively addressed previous challenges and established an artificial intelligence of things (AIoT) infrastructure for critical driving conditions. This IoT-based system optimizes data processing, reduces computational complexity, and enhances overall system performance, enabling accurate and timely detection of fatigue in extreme driving environments.

RevDate: 2025-04-21
CmpDate: 2025-04-19

Alzakari SA, Alamgeer M, Alashjaee AM, et al (2025)

Heuristically enhanced multi-head attention based recurrent neural network for denial of wallet attacks detection on serverless computing environment.

Scientific reports, 15(1):13538.

Denial of Wallet (DoW) attacks are a cyber threat designed to deplete an organization's financial resources by generating excessive charges in its cloud computing (CC) and serverless computing platforms. These threats are particularly relevant in serverless settings because of features such as auto-scaling, pay-as-you-go pricing, restricted control, and cost growth. Serverless computing, frequently recognized as Function-as-a-Service (FaaS), is a CC method that permits developers to construct and run applications without the requirement to manage typical server infrastructure. Detecting DoW threats involves monitoring and analyzing the system-level resource consumption of specific bare-metal mechanisms. Efficient and precise detection of internal DoW threats remains a crucial challenge. Timely recognition is significant in preventing potential damage, as DoW attacks exploit the financial model of serverless environments, impacting the cost structure and operational integrity of services. In this study, a Multi-Head Attention-based Recurrent Neural Network for Denial of Wallet Attacks Detection (MHARNN-DoWAD) technique is developed. The MHARNN-DoWAD method enables the detection of DoW attacks in serverless computing environments. At first, the presented MHARNN-DoWAD model performs data preprocessing by using min-max normalization to convert input data into a consistent format. Next, the wolf pack predation (WPP) method is employed for feature selection. For the detection and classification of DoW attacks, the multi-head attention-based bi-directional gated recurrent unit (MHA-BiGRU) model is utilized. Eventually, the improved secretary bird optimizer algorithm (ISBOA)-based hyperparameter selection process is accomplished to optimize the detection results of the MHA-BiGRU model. A comprehensive set of simulations was conducted to demonstrate the promising results of the MHARNN-DoWAD method. The experimental validation of the MHARNN-DoWAD technique portrayed a superior accuracy value of 98.30% over existing models.
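
A minimal PyTorch sketch of the min-max scaling and MHA-BiGRU classifier described above follows. Feature counts, layer sizes, and head counts are invented for illustration, and the WPP feature-selection and ISBOA tuning stages are omitted.

    import torch
    import torch.nn as nn

    class MHABiGRU(nn.Module):
        """BiGRU encoder followed by multi-head self-attention and a
        binary head (DoW attack vs. benign traffic)."""
        def __init__(self, n_features, hidden=64, heads=4):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True,
                              bidirectional=True)
            self.attn = nn.MultiheadAttention(2 * hidden, heads,
                                              batch_first=True)
            self.head = nn.Linear(2 * hidden, 2)

        def forward(self, x):                  # x: (B, T, n_features)
            h, _ = self.gru(x)                 # (B, T, 2*hidden)
            a, _ = self.attn(h, h, h)          # self-attention over time
            return self.head(a.mean(dim=1))    # pooled logits

    feats = torch.rand(8, 50, 12)              # toy batch: (batch, time, features)
    mins, maxs = feats.amin(dim=(0, 1)), feats.amax(dim=(0, 1))
    feats = (feats - mins) / (maxs - mins + 1e-9)   # min-max normalization step
    print(MHABiGRU(12)(feats).shape)           # torch.Size([8, 2])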

RevDate: 2025-04-18

Brito CV, Ferreira PG, JT Paulo (2025)

Exploiting Trusted Execution Environments and Distributed Computation for Genomic Association Tests.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

Breakthroughs in sequencing technologies led to an exponential growth of genomic data, providing novel biological insights and therapeutic applications. However, analyzing large amounts of sensitive data raises key data privacy concerns, specifically when the information is outsourced to untrusted third-party infrastructures for data storage and processing (e.g., cloud computing). We introduce Gyosa, a secure and privacy-preserving distributed genomic analysis solution. By leveraging trusted execution environments (TEEs), Gyosa allows users to confidentially delegate their GWAS analysis to untrusted infrastructures. Gyosa implements a computation partitioning scheme that reduces the computation done inside the TEEs while safeguarding the users' genomic data privacy. By integrating this security scheme in Glow, Gyosa provides a secure and distributed environment that facilitates diverse GWAS studies. The experimental evaluation validates the applicability and scalability of Gyosa, reinforcing its ability to provide enhanced security guarantees.

RevDate: 2025-04-17

Kocak B, Ponsiglione A, Romeo V, et al (2025)

Radiology AI and sustainability paradox: environmental, economic, and social dimensions.

Insights into imaging, 16(1):88.

Artificial intelligence (AI) is transforming radiology by improving diagnostic accuracy, streamlining workflows, and enhancing operational efficiency. However, these advancements come with significant sustainability challenges across environmental, economic, and social dimensions. AI systems, particularly deep learning models, require substantial computational resources, leading to high energy consumption, increased carbon emissions, and hardware waste. Data storage and cloud computing further exacerbate the environmental impact. Economically, the high costs of implementing AI tools often outweigh the demonstrated clinical benefits, raising concerns about their long-term viability and equity in healthcare systems. Socially, AI risks perpetuating healthcare disparities through biases in algorithms and unequal access to technology. On the other hand, AI has the potential to improve sustainability in healthcare by reducing low-value imaging, optimizing resource allocation, and improving energy efficiency in radiology departments. This review addresses the sustainability paradox of AI from a radiological perspective, exploring its environmental footprint, economic feasibility, and social implications. Strategies to mitigate these challenges are also discussed, alongside a call for action and directions for future research. CRITICAL RELEVANCE STATEMENT: By adopting an informed and holistic approach, the radiology community can ensure that AI's benefits are realized responsibly, balancing innovation with sustainability. This effort is essential to align technological advancements with environmental preservation, economic sustainability, and social equity. KEY POINTS: AI has an ambivalent potential, capable of both exacerbating global sustainability issues and offering increased productivity and accessibility. Addressing AI sustainability requires a broad perspective accounting for environmental impact, economic feasibility, and social implications. By embracing the duality of AI, the radiology community can adopt informed strategies at individual, institutional, and collective levels to maximize its benefits while minimizing negative impacts.

RevDate: 2025-04-16
CmpDate: 2025-04-16

Ansari N, Kumari P, Kumar R, et al (2025)

Seasonal patterns of air pollution in Delhi: interplay between meteorological conditions and emission sources.

Environmental geochemistry and health, 47(5):175.

Air pollution (AP) poses a significant public health risk, particularly in developing countries, where it contributes to a growing prevalence of health issues. This study investigates seasonal variations in key air pollutants, including particulate matter, nitrogen dioxide (NO2), sulfur dioxide (SO2), carbon monoxide (CO), and ozone (O3), in New Delhi during 2024. Utilizing Sentinel-5 satellite data processed through the Google Earth Engine (GEE), a cloud-based geospatial analysis platform, the study evaluates pollutant dynamics during pre-monsoon and post-monsoon seasons. The methodology involved programming in JavaScript to extract pollution parameters, applying cloud filters to eliminate contaminated data, and generating average pollution maps at monthly, seasonal, and annual intervals. The results revealed distinct seasonal pollution patterns. Pre-monsoon root mean square error (RMSE) values for CO, NO2, SO2, and O3 were 0.13, 2.58, 4.62, and 2.36, respectively, while post-monsoon values were 0.17, 2.41, 4.31, and 4.60. Winter months exhibited the highest pollution levels due to increased emissions from biomass burning, vehicular activity, and industrial operations, coupled with atmospheric inversions. Conversely, monsoon months saw a substantial reduction in pollutant levels due to wet deposition and improved dispersion driven by stronger winds. Additionally, post-monsoon crop residue burning emerged as a major episodic pollution source. This study underscores the utility of Sentinel-5 products in monitoring urban air pollution and provides valuable insights for policymakers to develop targeted mitigation strategies, particularly for urban megacities like Delhi, where seasonal and source-specific interventions are crucial for reducing air pollution and its associated health risks.
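
The study used GEE's JavaScript code editor; the sketch below shows the equivalent pattern in the Earth Engine Python API for a monthly mean of Sentinel-5P NO2 over an approximate Delhi extent. The collection choice, dates, rectangle, and reduction scale are illustrative assumptions, and the script assumes prior authentication.

    import ee
    ee.Initialize()  # assumes ee.Authenticate() has been run once

    delhi = ee.Geometry.Rectangle([76.8, 28.4, 77.4, 28.9])  # approx. extent
    no2 = (ee.ImageCollection("COPERNICUS/S5P/OFFL/L3_NO2")
           .select("tropospheric_NO2_column_number_density")
           .filterDate("2024-01-01", "2024-02-01")
           .filterBounds(delhi))
    monthly_mean = no2.mean().clip(delhi)       # January composite
    stats = monthly_mean.reduceRegion(ee.Reducer.mean(), delhi, scale=7000)
    print(stats.getInfo())  # mean tropospheric NO2 (mol/m^2) over Delhi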

RevDate: 2025-04-16

Zao JK, Wu JT, Kanyimbo K, et al (2024)

Design of a Trustworthy Cloud-Native National Digital Health Information Infrastructure for Secure Data Management and Use.

Oxford open digital health, 2:oqae043.

Since 2022, the Malawi Ministry of Health (MoH) has designated the development of a National Digital Health Information System (NDHIS) as one of the most important pillars of its national health strategy. This system is built upon a distributed computing infrastructure employing the following state-of-the-art technologies: (i) digital healthcare devices to capture medical data; (ii) Kubernetes-based Cloud-Native Computing architecture to simplify system management and service deployment; (iii) Zero-Trust Secure Communication to protect confidentiality, integrity and access rights of medical data transported over the Internet; (iv) Trusted Computing to allow medical data to be processed by certified software without compromising data privacy and sovereignty. Trustworthiness, including reliability, security, privacy and business integrity, of this system was ensured by a peer-to-peer network of trusted medical information guards deployed as the gatekeepers of the computing facility on this system. This NDHIS can help Malawi attain universal health coverage by 2030 through its scalability and operational efficiency. It shall improve medical data quality and security by adopting a paperless approach. It will also enable MoH to offer data rental services to healthcare researchers and AI model developers around the world. This project is spearheaded by the Digital Health Division (DHD) under MoH. The trustworthy computing infrastructure was designed by a taskforce assembled by the DHD in collaboration with Luke International in Norway, and a consortium of hardware and software solution providers in Taiwan. A prototype that can connect community clinics with a district hospital has been tested at Taiwan Pingtung Christian Hospital.

RevDate: 2025-04-12

Dessevres E, Valderrama M, M Le Van Quyen (2025)

Artificial intelligence for the detection of interictal epileptiform discharges in EEG signals.

Revue neurologique pii:S0035-3787(25)00492-8 [Epub ahead of print].

INTRODUCTION: Over the past decades, the integration of modern technologies - such as electronic health records, cloud computing, and artificial intelligence (AI) - has revolutionized the collection, storage, and analysis of medical data in neurology. In epilepsy, Interictal Epileptiform Discharges (IEDs) are the most established biomarker, indicating an increased likelihood of seizures. Their detection traditionally relies on visual EEG assessment, a time-consuming and subjective process contributing to a high misdiagnosis rate. These limitations have spurred the development of automated AI-driven approaches aimed at improving accuracy and efficiency in IED detection.

METHODS: Research on automated IED detection began 45 years ago, spanning from morphological methods to deep learning techniques. In this review, we examine various IED detection approaches, evaluating their performance and limitations.

RESULTS: Traditional machine learning and deep learning methods have produced the most promising results to date, and their application in IED detection continues to grow. Today, AI-driven tools are increasingly integrated into clinical workflows, assisting clinicians in identifying abnormalities while reducing false-positive rates.

DISCUSSION: To optimize the clinical implementation of automated AI-based IED detection, it is essential to render the codes publicly available and to standardize the datasets and metrics. Establishing uniform benchmarks will enable objective model comparisons and help determine which approaches are best suited for clinical use.

RevDate: 2025-04-12
CmpDate: 2025-04-12

Ianculescu M, Constantin VȘ, Gușatu AM, et al (2025)

Enhancing Connected Health Ecosystems Through IoT-Enabled Monitoring Technologies: A Case Study of the Monit4Healthy System.

Sensors (Basel, Switzerland), 25(7): pii:s25072292.

The Monit4Healthy system is an IoT-enabled health monitoring solution designed to address critical challenges in real-time biomedical signal processing, energy efficiency, and data transmission. The system's modular design combines wireless communication components with a number of physiological sensors, including galvanic skin response, electromyography, photoplethysmography, and EKG, to allow for the remote gathering and evaluation of health information. In order to decrease network load and enable the quick identification of abnormalities, edge computing is used for real-time signal filtering and feature extraction. Flexible data transmission based on context and available bandwidth is provided through a hybrid communication approach that includes Bluetooth Low Energy and Wi-Fi. Under typical monitoring scenarios, laboratory testing shows reliable wireless connectivity and ongoing battery-powered operation. The Monit4Healthy system is appropriate for scalable deployment in connected health ecosystems and portable health monitoring due to its responsive power management approaches and structured data transmission, which improve the resiliency of the system. The system ensures the reliability of signals whilst lowering latency and data volume in comparison to conventional cloud-only systems. Limitations include the need for energy profiling, further hardware miniaturization, and sustained real-world validation. By integrating context-aware processing, flexible design, and effective communication, the Monit4Healthy system complements existing IoT health solutions and promotes better integration in clinical and smart city healthcare environments.
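
The abstract does not specify the edge-side filters or features, but the general pattern — filter locally, transmit compact features instead of raw samples — can be sketched as follows. The band limits, filter order, sampling rate, and feature set are all assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def edge_features(ekg, fs=250, band=(0.5, 40.0)):
        """Band-pass filter a raw EKG window and return compact features
        suitable for transmission instead of the raw signal."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        clean = filtfilt(b, a, ekg)              # zero-phase filtering
        return {"mean": float(clean.mean()),
                "std": float(clean.std()),
                "ptp": float(np.ptp(clean)),      # peak-to-peak amplitude
                "rms": float(np.sqrt((clean ** 2).mean()))}

    print(edge_features(np.random.randn(2500)))  # 10 s stand-in window at 250 Hz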

RevDate: 2025-04-12

Almuseelem W (2025)

Deep Reinforcement Learning-Enabled Computation Offloading: A Novel Framework to Energy Optimization and Security-Aware in Vehicular Edge-Cloud Computing Networks.

Sensors (Basel, Switzerland), 25(7): pii:s25072039.

The Vehicular Edge-Cloud Computing (VECC) paradigm has gained traction as a promising solution to mitigate computational constraints by offloading resource-intensive tasks to distributed edge and cloud networks. However, conventional computation offloading mechanisms frequently induce network congestion and service delays, stemming from uneven workload distribution across spatial Roadside Units (RSUs). Moreover, ensuring data security and optimizing energy usage within this framework remain significant challenges. To this end, this study introduces a deep reinforcement learning-enabled computation offloading framework for multi-tier VECC networks. First, a dynamic load-balancing algorithm is developed to optimize the balance among RSUs, incorporating real-time analysis of heterogeneous network parameters, including RSU computational load, channel capacity, and proximity-based latency. Additionally, to alleviate congestion in static RSU deployments, the framework proposes deploying UAVs in high-density zones, dynamically augmenting both storage and processing resources. Moreover, an Advanced Encryption Standard (AES)-based mechanism, secured with dynamic one-time encryption key generation, is implemented to fortify data confidentiality during transmissions. Further, a context-aware edge caching strategy is implemented to preemptively store processed tasks, reducing redundant computations and associated energy overheads. Subsequently, a mixed-integer optimization model is formulated that simultaneously minimizes energy consumption and guarantees latency constraints. Given the combinatorial complexity of large-scale vehicular networks, the problem is recast in an equivalent reinforcement learning form, and a deep learning-based algorithm is designed to learn near-optimal offloading solutions under dynamic conditions. Empirical evaluations demonstrate that the proposed framework significantly outperforms existing benchmark techniques in terms of energy savings. These results underscore the framework's efficacy in advancing sustainable, secure, and scalable intelligent transportation systems.
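
The one-time-key AES idea can be illustrated with the Python cryptography library's AES-GCM primitive. This is a generic sketch of the one-key-per-transmission pattern, not the paper's exact construction, and secure delivery of the key to the receiver is out of scope here.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_task(payload: bytes):
        """Encrypt one offloaded task with a fresh (one-time) AES-256-GCM key."""
        key = AESGCM.generate_key(bit_length=256)  # new key per transmission
        nonce = os.urandom(12)                     # 96-bit random nonce
        ciphertext = AESGCM(key).encrypt(nonce, payload, None)
        return key, nonce, ciphertext

    key, nonce, ct = encrypt_task(b"sensor frame 0x01")
    assert AESGCM(key).decrypt(nonce, ct, None) == b"sensor frame 0x01"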

RevDate: 2025-04-12

Hodkiewicz M, Lukens S, Brundage MP, et al (2021)

Rethinking Maintenance Terminology for an Industry 4.0 Future.

International journal of prognostics and health management, 12(1):.

Sensors and mathematical models have been used since the 1990s to assess the health of systems and diagnose anomalous behavior. The advent of the Internet of Things (IoT) increases the range of assets on which data can be collected cost effectively. Cloud computing and the wider availability of data and models are democratizing the implementation of prognostics and health management (PHM) technologies. Together, these advancements and other Industry 4.0 developments are creating a paradigm shift in how maintenance work is planned and executed. In this new future, maintenance will be initiated once a potential failure has been detected (using PHM) and thus completed before a functional failure has occurred. Such work is nevertheless classified as corrective work, since corrective work is defined as "work done to restore the function of an asset after failure or when failure is imminent." Many metrics for measuring the effectiveness of maintenance work management are grounded in a negative perspective of corrective work and do not clearly capture work arising from condition monitoring and predictive modeling investments. In this paper, we use case studies to demonstrate the need to rethink maintenance terminology. The outcomes of this work include 1) definitions to be used for consistent evaluation of work management performance in an Industry 4.0 future and 2) recommendations to improve detection of work related to PHM activities.

RevDate: 2025-04-10

Khan A, Ullah F, Shah D, et al (2025)

EcoTaskSched: a hybrid machine learning approach for energy-efficient task scheduling in IoT-based fog-cloud environments.

Scientific reports, 15(1):12296.

The widespread adoption of cloud services has posed several challenges, primarily revolving around energy and resource efficiency. Integrating cloud and fog resources can help address these challenges by improving fog-cloud computing environments. Nevertheless, the search for optimal task allocation and energy management in such environments continues. Existing studies have introduced notable solutions; however, it is still a challenging issue to efficiently utilize these heterogeneous cloud resources and achieve energy-efficient task scheduling in fog-cloud-of-things environments. To tackle these challenges, we propose a novel ML-based EcoTaskSched model, which leverages deep learning for energy-efficient task scheduling in fog-cloud networks. The proposed hybrid model integrates Convolutional Neural Networks (CNNs) with Bidirectional Long Short-Term Memory (BiLSTM) to enhance energy-efficient schedulability and reduce energy usage while ensuring QoS provisioning. The CNN model efficiently extracts workload features from tasks and resources, while the BiLSTM captures complex sequential information, predicting optimal task placement sequences. A real fog-cloud environment is implemented using the COSCO framework for the simulation setup together with four physical nodes from the Azure B2s plan to test the proposed model. The DeFog benchmark is used to develop task workloads, and data collection was conducted for both normal and intense workload scenarios. During preprocessing, the data were normalized, subjected to feature engineering and augmentation, and then split into training and test sets. In performance evaluations, the proposed EcoTaskSched model demonstrated superiority by significantly reducing energy consumption and improving job completion rates compared to baseline models. Additionally, the EcoTaskSched model maintained a high job completion rate of 85%, outperforming GGCN and BiGGCN. It also achieved a lower average response time and SLA violation rate, as well as increased throughput and reduced execution cost, compared to other baseline models. In its optimal configuration, the EcoTaskSched model is successfully applied to fog-cloud computing environments, increasing task handling efficiency and reducing energy consumption while maintaining the required QoS parameters. Our future studies will focus on long-term testing of the EcoTaskSched model in real-world IoT environments. We will also assess its applicability by integrating other ML models, which could provide enhanced insights for optimizing scheduling algorithms across diverse fog-cloud settings.
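
A minimal Keras sketch of the CNN-plus-BiLSTM pairing used by EcoTaskSched is shown below. The input shape, layer sizes, and the binary placement output are illustrative assumptions rather than the paper's configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(30, 8)),          # 30 time steps x 8 workload features
        layers.Conv1D(32, 3, padding="same", activation="relu"),  # feature extractor
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(64)),  # sequential placement context
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # e.g., fog-vs-cloud placement score
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.summary()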

RevDate: 2025-04-10

Wang Y, Kong D, Chai H, et al (2025)

D2D assisted cooperative computational offloading strategy in edge cloud computing networks.

Scientific reports, 15(1):12303.

In the computational offloading problem of edge cloud computing (ECC), almost all research develops the offloading strategy by optimizing user cost, but most studies consider only delay and energy consumption and seldom account for task waiting delay. This is very unfavorable for highly latency-sensitive tasks in the current era of intelligence. In this paper, by using D2D (Device-to-Device) technology, we propose a D2D-assisted collaborative computational offloading strategy (D-CCO) based on user cost optimization to obtain the offloading decision and the number of tasks that can be offloaded. Specifically, we first build a task queue system with multiple local devices, peer devices, and edge processors, and compare the execution performance of computing tasks on different devices, taking into account user costs such as task delay, power consumption, and waiting delay. Then, a stochastic optimization algorithm and a back-pressure algorithm are used to develop the offloading strategy, which ensures the stability of the system and reduces the computing cost to the greatest extent, so as to obtain the optimal offloading decision. In addition, the stability of the proposed algorithm is analyzed theoretically; that is, the upper bounds of all queues in the system are derived. The simulation results show the stability of the proposed algorithm and demonstrate that the D-CCO algorithm is superior to other alternatives. Compared with other algorithms, this algorithm can effectively reduce the user cost.
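
The back-pressure idea can be conveyed with a toy decision rule: route the next task to the device minimizing a weighted sum of queue backlog and unit cost, so congested queues shed load. All numbers and the trade-off weight V are assumptions for illustration, not the paper's algorithm.

    # Toy back-pressure-style routing across local, peer (D2D), and edge queues.
    queues = {"local": 5, "peer": 2, "edge": 9}           # tasks currently waiting
    unit_cost = {"local": 1.0, "peer": 0.6, "edge": 0.3}  # energy+delay proxy
    V = 2.0  # Lyapunov-style trade-off between backlog and cost (assumed)

    def offload_target(queues, unit_cost, V):
        return min(queues, key=lambda d: queues[d] + V * unit_cost[d])

    choice = offload_target(queues, unit_cost, V)
    queues[choice] += 1
    print(choice, queues)   # 'peer' wins: short queue, moderate cost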

RevDate: 2025-04-09

Zhong A, Wang Z, Y Gen (2025)

Research on water body information extraction and monitoring in high water table mining areas based on Google Earth Engine.

Scientific reports, 15(1):12133.

The extensive and intensive exploitation of coal resources has led to a particularly prominent issue of water accumulation in high groundwater table mining areas, significantly impacting the surrounding ecological environment and directly threatening the red line of cultivated land and regional food security. To provide a scientific basis for the ecological restoration of water accumulation areas in coal mining subsidence, a study on the extraction of water body information in high groundwater level subsidence areas is conducted. The spectral characteristics of land types within mining subsidence areas were analyzed through the application of the Google Earth Engine (GEE) big data cloud platform and Landsat series imagery. This study addressed technical bottlenecks in applying traditional water indices in mining areas, such as spectral interference from coal slag, under-detection of small water bodies, and misclassification of agricultural fields. An Improved Normalized Difference Water Index (INDWI) was proposed based on the analysis of spectral characteristics of surface objects, in conjunction with the OTSU algorithm. The effectiveness of water body extraction using INDWI was compared with that of the Normalized Difference Water Index (NDWI), Enhanced Water Index (EWI), and Modified Normalized Difference Water Index (MNDWI). The results indicated that: (1) The INDWI demonstrated the highest overall accuracy, surpassing 89%, and a Kappa coefficient exceeding 80%. The extraction of water body information in mining areas was significantly superior to that achieved by the other three prevalent water indices. (2) The extraction results of the MNDWI and INDWI water indices generally aligned with the actual conditions. The boundaries of water bodies extracted using MNDWI in mining subsidence areas were somewhat ambiguous, leading to the misidentification of small water accumulation pits and misclassification of certain agricultural fields. In contrast, the extraction results of INDWI exhibited better alignment with the imagery, with no significant identification errors observed. (3) Through the comparison of three typical areas, it was concluded that the clarity of the water body boundary lines extracted by INDWI was higher, with relatively fewer internal noise points, and the soil ridges and bridges within the water bodies were distinctly visible, aligning with the actual situation. The research findings offer a foundation for the formulation of land reclamation and ecological restoration plans in coal mining subsidence areas.
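
The abstract does not give the INDWI formula, but the baseline it improves on — MNDWI thresholded with OTSU — can be sketched as follows, with random arrays standing in for Landsat green and shortwave-infrared reflectance bands.

    import numpy as np
    from skimage.filters import threshold_otsu

    def water_mask(green: np.ndarray, swir: np.ndarray):
        """MNDWI = (Green - SWIR) / (Green + SWIR), thresholded with OTSU."""
        mndwi = (green - swir) / (green + swir + 1e-12)
        t = threshold_otsu(mndwi)      # data-driven global threshold
        return mndwi > t               # True where water is likely

    g = np.random.rand(100, 100).astype("float32")   # stand-in reflectance
    s = np.random.rand(100, 100).astype("float32")
    print(water_mask(g, s).mean())     # fraction of pixels flagged as water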

RevDate: 2025-04-10

Salman S, Gu Q, Dherin B, et al (2023)

Hemorrhage Evaluation and Detector System for Underserved Populations: HEADS-UP.

Mayo Clinic proceedings. Digital health, 1(4):547-556.

OBJECTIVE: To create a rapid, cloud-based, and deployable machine learning (ML) method named hemorrhage evaluation and detector system for underserved populations, potentially across the Mayo Clinic enterprise, then expand to involve underserved areas and detect the 5 subtypes of intracranial hemorrhage (IH).

METHODS: We used the Radiological Society of North America dataset for IH detection. We made 4 total iterations using Google Cloud Vertex AutoML. We trained an AutoML model with 2000 images, followed by 6000 images from both IH-positive and IH-negative classes. Pixel values were measured in Hounsfield units, with a width of 80 Hounsfield units and a level of 40 Hounsfield units as the bone window. This was followed by a more detailed image preprocessing approach that combined the pixel values from each of the brain, subdural, and soft tissue window-based gray-scale images into R(red)-channel, G(green)-channel, and B(blue)-channel images to boost the binary IH classification performance. Four experiments with AutoML were applied to study the effects of training sample size and image preprocessing on model performance.
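
The windowing-to-RGB preprocessing can be sketched in NumPy as below. The 80/40 window is stated above; the subdural and soft-tissue settings shown are common defaults assumed for illustration, not values from the paper.

    import numpy as np

    def window(hu: np.ndarray, level: float, width: float):
        """Clip Hounsfield units to a window and scale to [0, 1]."""
        lo, hi = level - width / 2, level + width / 2
        return (np.clip(hu, lo, hi) - lo) / (hi - lo)

    def ct_to_rgb(hu):
        # 80/40 window as stated above; the other two settings are
        # common defaults, assumed here for illustration.
        return np.stack([window(hu, 40, 80),
                         window(hu, 80, 200),    # assumed subdural window
                         window(hu, 50, 400)],   # assumed soft-tissue window
                        axis=-1)

    print(ct_to_rgb(np.random.randint(-1000, 1000, (512, 512))).shape)
    # (512, 512, 3): one windowed view per color channel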

RESULTS: Out of the 4 AutoML experiments, the best-performing model was the fourth experiment, where 95.80% average precision, 91.40% precision, and 91.40% recall were achieved. On the basis of this analysis, our binary IH classifier hemorrhage evaluation and detector system for underserved populations appeared both accurate and performed well.

CONCLUSION: Hemorrhage evaluation and detector system for underserved populations is a rapid, cloud-based, deployable ML method to detect IH. This tool can help expedite the care of patients with IH in resource-limited hospitals.

RevDate: 2025-04-09
CmpDate: 2025-04-09

Haddad T, Kumarapeli P, de Lusignan S, et al (2025)

Software Quality Injection (QI): A Quality Driven Holistic Approach for Optimising Big Healthcare Data Processing.

Studies in health technology and informatics, 323:141-145.

The rapid growth of big data is driving innovation in software development, with advanced analytics offering transformative opportunities in applied computing. Big Healthcare Data (BHD), characterised by multi-structured and complex data types, requires resilient and scalable architectures to effectively address critical data quality issues. This paper proposes a holistic framework for adopting advanced cloud-computing strategies to manage and optimise the unique characteristics of BHD processing. It outlines a comprehensive approach for ensuring optimal data handling for critical healthcare workflows by enhancing the system's quality attributes. The proposed framework prioritises and dynamically adjusts software functionalities in real-time, harnessing sophisticated orchestration capabilities to manage complex, multi-dimensional healthcare datasets, streamline operations, and bolster system resilience.

RevDate: 2025-04-08
CmpDate: 2025-04-09

Landais P, Gueguen S, Clement A, et al (2025)

The RaDiCo information system for rare disease cohorts.

Orphanet journal of rare diseases, 20(1):166.

BACKGROUND: Rare diseases (RDs) clinical care and research face several challenges. Patients are dispersed over large geographic areas, their number per disease is limited, just like the number of researchers involved. Current databases as well as biological collections, when existing, are generally local, of modest size, incomplete, of uneven quality, heterogeneous in format and content, and rarely accessible or standardised to support interoperability. Most disease phenotypes are complex corresponding to multi-systemic conditions, with insufficient interdisciplinary cooperation. Thus emerged the need to generate, within a coordinated, mutualised, secure and interoperable framework, high-quality data from national or international RD cohorts, based on deep phenotyping, including molecular analysis data, notably genotypic. The RaDiCo program objective was to create, under the umbrella of Inserm, a national operational platform dedicated to the development of RD e-cohorts. Its Information System (IS) is presented here.

MATERIAL AND METHODS: Constructed on the cloud computing principle, the RaDiCo platform was designed to promote mutualization and factorization of processes and services, for both clinical epidemiology support and IS. RaDiCo IS is based on an interoperability framework combining a unique RD identifier, data standardisation, FAIR principles, data exchange flows/processes and data security principles compliant with the European GDPR.

RESULTS: RaDiCo IS relies on a secure, open-source web application to implement and manage online databases and to give patients themselves the opportunity to collect their data. It ensures continuous monitoring of data quality and consistency over time. RaDiCo IS has proved efficient, currently hosting 13 e-cohorts covering 67 distinct RDs. As of April 2024, 8063 patients had been recruited from 180 specialised RD sites spread across the national territory.

DISCUSSION: The RaDiCo operational platform is equivalent to a national infrastructure. Its IS enables RD e-cohorts to be developed on a shared platform with no limit on size or number. Compliant with the GDPR, it is compatible with the French National Health Data Hub and can be extended to the RDs European Reference Networks (ERNs).

CONCLUSION: RaDiCo provides a robust IS, compatible with the French Data Hub and RDs ERNs, integrated on a RD platform that enables e-cohorts creation, monitoring and analysis.

RevDate: 2025-04-09

Lilhore UK, Simaiya S, Prajapati YN, et al (2025)

A multi-objective approach to load balancing in cloud environments integrating ACO and WWO techniques.

Scientific reports, 15(1):12036.

Effective load balancing and resource allocation are essential in dynamic cloud computing environments, where the demand for rapidity and continuous service is perpetually increasing. This paper introduces an innovative hybrid optimisation method that combines water wave optimization (WWO) and ant colony optimization (ACO) to tackle these challenges effectively. ACO is acknowledged for its proficiency in conducting local searches effectively, facilitating the swift discovery of high-quality solutions. In contrast, WWO specialises in global exploration, guaranteeing extensive coverage of the solution space. Collectively, these methods harness their distinct advantages to enhance various objectives: decreasing response times, maximising resource efficiency, and lowering operational expenses. We assessed the efficacy of our hybrid methodology by conducting extensive simulations using a cloud-sim simulator and a variety of workload trace files. We assessed our methods in comparison to well-established algorithms, such as WWO, genetic algorithm (GA), spider monkey optimization (SMO), and ACO. Key performance indicators, such as task scheduling duration, execution costs, energy consumption, and resource utilisation, were meticulously assessed. The findings demonstrate that the hybrid WWO-ACO approach enhances task scheduling efficiency by 11%, decreases operational expenses by 8%, and lowers energy usage by 12% relative to conventional methods. In addition, the algorithm consistently achieved an impressive equilibrium in resource allocation, with balance values ranging from 0.87 to 0.95. The results emphasise the hybrid WWO-ACO algorithm's substantial impact on improving system performance and customer satisfaction, thereby demonstrating a significant improvement in cloud computing optimisation techniques.
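
To give a flavor of the ACO half of the hybrid, the toy sketch below assigns five tasks to three VMs, reinforcing pheromone on assignments that achieve a low makespan. Runtimes and constants are invented, and the WWO global-exploration component is omitted entirely.

    import random

    # Toy ACO step for mapping 5 tasks onto 3 VMs (runtimes are assumptions).
    runtime = [[3, 5, 4], [2, 6, 3], [4, 4, 5], [6, 2, 3], [5, 3, 2]]
    tau = [[1.0] * 3 for _ in range(5)]   # pheromone per (task, VM) pair
    rho, Q = 0.5, 10.0                    # evaporation rate, deposit factor

    def ant_solution():
        # Assignment probability is proportional to pheromone x heuristic (1/runtime).
        return [random.choices(range(3),
                weights=[tau[t][v] / runtime[t][v] for v in range(3)])[0]
                for t in range(5)]

    for _ in range(50):
        sol = ant_solution()
        makespan = max(sum(runtime[t][vm] for t, vm in enumerate(sol) if vm == m)
                       for m in range(3))
        tau = [[(1 - rho) * p for p in row] for row in tau]   # evaporate all trails
        for t, vm in enumerate(sol):
            tau[t][vm] += Q / makespan                        # reward short makespans

    print(ant_solution())   # assignments now biased toward faster VMs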

RevDate: 2025-04-09

Sebin D, Doda V, S Balamani (2025)

Schema: A Quantified Learning Solution to Augment, Assess, and Analyze Learning in Medicine.

Cureus, 17(4):e81803.

Quantified learning is the use of digital technologies, such as mobile applications, cloud-based analytics, machine learning algorithms, and real-time performance tracking systems, to deliver more granular, personalized, and measurable educational experiences and outcomes. These principles, along with horizontal and vertical integrative learning, form the basis of modern learning methods. As we witness a global shift from traditional learning to competency-based education, educators agree that there is a need to promote quantified learning. The increased accessibility of technology in educational institutions has allowed unprecedented innovation in learning. The convergence of mobile computing, cloud computing, and Web 2.0 tools has made such models more practical. Despite this, little has been achieved in medical education, where quantified learning and technology aids are limited to a few institutions and used mainly in simulated classroom environments. This innovation report describes the development, dynamics, and scope of Schema, an app-based e-learning solution designed for undergraduate medical students to promote quantified, integrative, high-yield, and self-directed learning along with feedback-based self-assessment and progress monitoring. Schema is linked to a database of preclinical, paraclinical, and clinical multiple choice questions (MCQs) that it organizes into granular subtopics independent of the core subject. It also monitors the progress and performance of the learner as they solve these MCQs and converts that information into quantifiable visual feedback for the learners, which is used to target, improve, revise, and assess their competency. This is important considering the new generation of medical students open to introducing themselves to technology, novel study techniques, and resources outside the traditional learning environment of a medical school. Schema was made available to medical students as part of an e-learning platform in 2022 to aid their learning. In addition, we also aim to use Schema and the range of possibilities it offers to gain deeper insights into the way we learn medicine.

RevDate: 2025-04-08

Xu Z, Zhou W, Han H, et al (2025)

A secure and scalable IoT access control framework with dynamic attribute updates and policy hiding.

Scientific reports, 15(1):11913.

With the rapid rise of Internet of Things (IoT) technology, cloud computing and attribute-based encryption (ABE) are often employed to safeguard the privacy and security of IoT data. However, most blockchain-based access control methods are one-way, and user access policies are public, which cannot simultaneously meet the needs of dynamic attribute updates, two-way verification of users and data, and secure data transmission. To handle such challenges, we propose an attribute-based encryption scheme that satisfies real-time and secure sharing requirements through attribute updates and policy hiding. First, we designed a new dynamic update and policy hiding bidirectional attribute access control (DUPH-BAAC) scheme. In addition, a policy-hiding technique was adopted. The data owner sends encrypted addresses with hidden access policies to the blockchain network for verification through transactions. Then, the user locally matches attributes, and the smart contract verifies user permissions and generates access transactions for users who meet access policies. Moreover, the cloud server receives user identity keys and matches the user attribute set with the ciphertext attribute set. Besides, blockchain networks replace traditional IoT centralized servers for identity authentication, authorization, key management, and attribute updates, reducing information leakage risk. Finally, we demonstrate that the DUPH-BAAC scheme can resist chosen-plaintext attacks under selective access structures, achieving IND-sAS-CPA security.

RevDate: 2025-04-07
CmpDate: 2025-04-07

Pan X, Wang Z, Feng G, et al (2025)

Automated mapping of land cover in Google Earth Engine platform using multispectral Sentinel-2 and MODIS image products.

PloS one, 20(4):e0312585.

Land cover mapping often utilizes supervised classification, which can suffer from insufficient sample sizes and sample confusion. This study assessed the accuracy of a fast and reliable method for automatic labeling and collection of training samples. Based on self-programming in the Google Earth Engine (GEE) cloud-based platform, a large and reliable training dataset for multispectral Sentinel-2 imagery was extracted automatically across the study area from the existing MODIS land cover product. To enhance confidence in high-quality training class labels, homogeneous 20 m Sentinel-2 pixels within each 500 m MODIS pixel were selected, and a minority of heterogeneous 20 m pixels were removed based on calculations of spectral centroid and Euclidean distance. Further, quality control and a spatial filter were applied to all land cover classes to generate a reliable and representative training dataset that was subsequently applied to train the Classification and Regression Tree (CART), Random Forest (RF), and Support Vector Machine (SVM) classifiers. The results show that the main land cover types in the study area, as distinguished by the three classifiers, were Evergreen Broadleaf Forests, Mixed Forests, Woody Savannas, and Croplands. In the training and validation samples, the number of correctly classified pixels for the CART classifier, which is not computationally intensive, exceeded those for the RF and SVM classifiers. Moreover, the user's and producer's accuracies, overall accuracy, and kappa coefficient of the CART classifier were the best, indicating that the CART classifier was more suitable for this automatic workflow for land cover mapping. The proposed method can automatically generate a large number of reliable and accurate training samples in a timely manner, which is promising for future land cover mapping over large-scale regions.
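
The training step can be sketched with the Earth Engine Python API as below. The scene ID, sample asset path, band list, and label property are placeholders, since the study's assets are not public; the automatic MODIS-based labeling is assumed to have produced the feature collection.

    import ee
    ee.Initialize()  # assumes prior authentication

    # Hypothetical Sentinel-2 scene and auto-labeled sample asset (placeholders).
    s2 = ee.Image("COPERNICUS/S2_SR/20200715T030549_20200715T031293_T49QGF")
    bands = ["B2", "B3", "B4", "B8", "B11", "B12"]
    samples = ee.FeatureCollection("users/example/auto_labeled_points")

    training = s2.select(bands).sampleRegions(
        collection=samples, properties=["landcover"], scale=20)
    cart = ee.Classifier.smileCart().train(
        features=training, classProperty="landcover", inputProperties=bands)
    classified = s2.select(bands).classify(cart)   # per-pixel land cover map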

RevDate: 2025-04-06

Pantic IV, S Mugosa (2025)

Artificial intelligence strategies based on random forests for detection of AI-generated content in public health.

Public health, 242:382-387 pii:S0033-3506(25)00148-9 [Epub ahead of print].

OBJECTIVES: To train and test a Random Forest machine learning model with the ability to distinguish AI-generated from human-generated textual content in the domain of public health, and public health policy.

STUDY DESIGN: Supervised machine learning study.

METHODS: A dataset comprising 1000 human-generated and 1000 AI-generated paragraphs was created. Textual features were extracted using TF-IDF vectorization, which calculates term frequency (TF) and inverse document frequency (IDF) and combines the two measures to produce a score for individual terms. The Random Forest model was trained and tested using the Scikit-Learn library and the Jupyter Notebook service in the Google Colab cloud-based environment, with Google CPU hardware acceleration.
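
A compact sketch of the described pipeline with the same Scikit-Learn stack follows; the two toy strings (duplicated to fake a corpus) merely stand in for the study's 2000 paragraphs, and the hyperparameters are assumptions.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    texts = ["Vaccination coverage improved in rural districts.",      # toy "human"
             "Public health policy should balance equity and cost."]  # toy "AI"
    labels = [0, 1]  # 0 = human-generated, 1 = AI-generated

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          RandomForestClassifier(n_estimators=200, random_state=0))
    X_tr, X_te, y_tr, y_te = train_test_split(texts * 50, labels * 50,
                                              test_size=0.2, random_state=0)
    model.fit(X_tr, y_tr)
    print(accuracy_score(y_te, model.predict(X_te)))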

RESULTS: The model achieved a classification accuracy of 81.8% and an area under the ROC curve of 0.9. For human-generated content, precision, recall, and F1-score were 0.85, 0.78, and 0.81, respectively. For AI-generated content, these metrics were 0.79, 0.86, and 0.82. The MCC value of 0.64 indicated moderate to strong predictive power. The model demonstrated robust sensitivity (recall for AI-generated class) of 0.86 and specificity (recall for human-generated class) of 0.78.

CONCLUSIONS: The model exhibited acceptable performance, as measured by classification accuracy, area under the receiver operating characteristic curve, and other metrics. This approach can be further improved by incorporating additional supervised machine learning techniques and serves as a foundation for the future development of a sophisticated and innovative AI system. Such a system could play a crucial role in combating misinformation and enhancing public trust across various government platforms, media outlets, and social networks.

RevDate: 2025-04-05

Jin X, Deng A, Fan Y, et al (2025)

Diversity, functionality, and stability: shaping ecosystem multifunctionality in the successional sequences of alpine meadows and alpine steppes on the Qinghai-Tibet Plateau.

Frontiers in plant science, 16:1436439.

Recent investigations on the Tibetan Plateau have harnessed advancements in digital ground vegetation surveys, high temporal resolution remote sensing data, and sophisticated cloud computing technologies to delineate successional dynamics between alpine meadows and alpine steppes. However, these efforts have not thoroughly explored how different successional stages affect key ecological parameters, such as species and functional diversity, stability, and ecosystem multifunctionality, which are fundamental to ecosystem resilience and adaptability. Given this gap, we systematically investigate variations in vegetation diversity, functional diversity, and the often-overlooked dimension of community stability across the successional gradient from alpine meadows to alpine steppes. We further identify the primary environmental drivers of these changes and evaluate their collective impact on ecosystem multifunctionality. Our analysis reveals that, as vegetation communities progress from alpine meadows toward alpine steppes, multi-year average precipitation and temperature decline significantly, accompanied by reductions in soil nutrients. These environmental shifts led to decreased species diversity, driven by lower precipitation and reduced soil nitrate-nitrogen levels, as well as community differentiation influenced by declining soil pH and precipitation. Consequently, as species loss and community differentiation intensified, these changes diminished functional diversity and eroded community resilience and resistance, ultimately reducing grassland ecosystem multifunctionality. Using linear mixed-effects models and structural equation modeling, we found that functional diversity is the foremost determinant of ecosystem multifunctionality, followed by species diversity. Surprisingly, community stability also significantly influences ecosystem multifunctionality, a factor rarely highlighted in previous studies. These findings deepen our understanding of the interplay among diversity, functionality, stability, and ecosystem multifunctionality, and support the development of an integrated feedback model linking environmental drivers with ecological attributes in alpine grassland ecosystems.

RevDate: 2025-04-04

Zonghui W, Veniaminovna KO, Vladimirovna VO, et al (2025)

Sustainability in construction economics as a barrier to cloud computing adoption in small-scale Building projects.

Scientific reports, 15(1):11329.

The application of intelligent technology to enhance decision-making, optimize processes, and boost project economics and sustainability has the potential to significantly revolutionize the construction industry. However, there are several barriers to its use in small-scale construction projects in China. This study aims to identify these challenges and provide solutions. Using a mixed-methods approach that incorporates quantitative analysis, structural equation modeling, and a comprehensive literature review, the study highlights key problems. These include specialized challenges, difficulty with data integration, financial and cultural constraints, privacy and ethical issues, limited data accessibility, and problems with scalability and connectivity. The findings demonstrate how important it is to remove these barriers to fully utilize intelligent computing in the construction sector. Recommendations and practical strategies are provided to help industry participants overcome these challenges. Although the study's geographical emphasis and cross-sectional approach are limitations, they also offer opportunities for further investigation. This study contributes significantly to the growing body of knowledge on intelligent computing in small-scale construction projects and offers practical guidance on how businesses might leverage its transformative potential.

RevDate: 2025-04-02

Fabrizi A, Fiener P, Jagdhuber T, et al (2025)

Plasticulture detection at the country scale by combining multispectral and SAR satellite data.

Scientific reports, 15(1):11339.

The use of plastic films has been growing in agriculture, benefiting consumers and producers. However, concerns have been raised about the environmental impact of plastic film use, with mulching films posing a greater threat than greenhouse films. This calls for large-scale monitoring of different plastic film uses. We used cloud computing, freely available optical and radar satellite images, and machine learning to map plastic-mulched farmland (PMF) and plastic cover above vegetation (PCV) (e.g., greenhouse, tunnel) across Germany. The algorithm detected 103 × 10³ ha of PMF and 37 × 10³ ha of PCV in 2020, while a combination of agricultural statistics and surveys estimated a smaller plasticulture cover of around 100 × 10³ ha in 2019. Based on ground observations, the overall accuracy of the classification is 85.3%. Optical and radar features had similar importance scores, and a distinct backscatter of PCV was related to metal frames underneath the plastic films. Overall, the algorithm achieved great results in the distinction between PCV and PMF. This study maps different plastic film uses at a country scale for the first time and sheds light on the high potential of freely available satellite data for continental monitoring.

RevDate: 2025-04-02

Suveena S, Rekha AA, Rani JR, et al (2025)

The translational impact of bioinformatics on traditional wet lab techniques.

Advances in pharmacology (San Diego, Calif.), 103:287-311.

Bioinformatics has taken a pivotal place in the life sciences. It not only improves but also fine-tunes and complements wet lab experiments, and it has been a driving force in the biological sciences, converting them into hypothesis- and data-driven fields. This study highlights the translational impact of bioinformatics on experimental biology and discusses its evolution and the advantages it has brought to advancing biological research. Computational analyses make labor-intensive wet lab work cost-effective by reducing the use of expensive reagents. Genome- and proteome-wide studies have become feasible due to the efficiency and speed of bioinformatics tools, which can hardly be matched by wet lab experiments. Computational methods provide the scalability essential for manipulating large and complex data of biological origin. AI-integrated bioinformatics studies can unveil important biological patterns that traditional approaches may otherwise overlook. Bioinformatics contributes to hypothesis formation and experiment design, which is pivotal for modern-day multi-omics and systems biology studies. Integrating bioinformatics into experimental procedures increases reproducibility and helps reduce human errors. Although AI-integrated bioinformatics predictions have improved significantly in accuracy over the years, wet lab validation remains unavoidable for confirming these predictions. Challenges persist in multi-omics data integration and analysis, AI model interpretability, and multiscale modeling. Addressing these shortcomings through the latest developments is essential for advancing our knowledge of disease mechanisms, therapeutic strategies, and precision medicine.

RevDate: 2025-04-02

Das IJ, Bhatta K, Sarangi I, et al (2025)

Innovative computational approaches in drug discovery and design.

Advances in pharmacology (San Diego, Calif.), 103:1-22.

In the current scenario of pandemics, drug discovery and design have undergone a significant transformation due to the integration of advanced computational methodologies. These methodologies utilize sophisticated algorithms, machine learning, artificial intelligence, and high-performance computing to expedite the drug development process, enhance accuracy, and reduce costs. Machine learning and AI have revolutionized predictive modeling, virtual screening, and de novo drug design, allowing for the identification and optimization of novel compounds with desirable properties. Molecular dynamics simulations provide a detailed insight into protein-ligand interactions and conformational changes, facilitating an understanding of drug efficacy at the atomic level. Quantum mechanics/molecular mechanics methods offer precise predictions of binding energies and reaction mechanisms, while structure-based drug design employs docking studies and fragment-based design to improve drug-receptor binding affinities. Network pharmacology and systems biology approaches analyze polypharmacology and biological networks to identify novel drug targets and understand complex interactions. Cheminformatics explores vast chemical spaces and employs data mining to find patterns in large datasets. Computational toxicology predicts adverse effects early in development, reducing reliance on animal testing. Bioinformatics integrates genomic, proteomic, and metabolomics data to discover biomarkers and understand genetic variations affecting drug response. Lastly, cloud computing and big data technologies facilitate high-throughput screening and comprehensive data analysis. Collectively, these computational innovations are driving a paradigm shift in drug discovery and design, making it more efficient, accurate, and cost-effective.

RevDate: 2025-04-01

Erukala SB, Tokmakov D, Perumalla A, et al (2025)

A secure end-to-end communication framework for cooperative IoT networks using hybrid blockchain system.

Scientific reports, 15(1):11077.

The Internet of Things (IoT) is a disruptive technology that underpins Industry 5.0 by integrating various service technologies to enable intelligent connectivity among smart objects. These technologies enhance the convergence of Information Technology (IT), Operational Technology (OT), Core Technology (CT), and Data Technology (DT) networks, improving automation and decision-making capabilities. While cloud computing has become a mainstream technology across multiple domains, it struggles to efficiently manage the massive volume of OT data generated by IoT devices due to high latency, data transfer costs, limited resilience, and insufficient context awareness. Fog computing has emerged as a viable solution, extending cloud capabilities to the edge through a distributed peer-to-peer (P2P) network and enabling decentralized data processing and management. However, IoT networks still face critical challenges, including connectivity, heterogeneity, scalability, interoperability, security, and real-time decision-making constraints. Security is a key challenge in IoT implementations, spanning secure data communication, IoT edge and fog device identity, end-to-end authentication, and secure storage. This paper presents an efficient blockchain-based framework that creates a secure, cooperative, end-to-end communication flow for IoT networks. The framework utilizes a hybrid blockchain network to provide end-to-end secure communication from end devices to cloud storage. Fog servers maintain a private blockchain as a next-generation public key infrastructure to identify and authenticate IoT edge devices. A consortium blockchain is maintained in the cloud and integrated with the permissioned blockchain system. This system ensures secure cloud storage, authorization, efficient key exchange, and remote protection (encryption) of all sensitive information. To improve synchronization and block generation, reduce overhead, and ensure scalable IoT network operation, we propose a threshold-signature-based Proof of Stake and Validation (PoSV) consensus. Additionally, lightweight authentication protects resource-constrained IoT nodes using an aggregate signature, ensuring security and performance in real-time scenarios. The proposed system is implemented, and its performance is evaluated using key metrics such as cryptographic processing overhead, consensus efficiency, block acceptance time, and transaction delay. The findings show that the threshold-signature-based PoSV consensus reduces the computational burden of individual signature verification, resulting in an optimized transaction latency of 80-150 ms, compared to 100-200 ms without PoSV. Additionally, aggregating multiple signatures from different authentication events reduces signing time to 1.98 ms from the individual signing time of 2.72 ms, and the 2.87 ms overhead of verifying multiple individual transactions is reduced to 1.46 ms, with authentication delays ranging between 95 and 180 ms. Hence, the proposed framework improves on existing approaches in terms of linear computational complexity, stronger cryptographic methods, and a more efficient consensus process.

RevDate: 2025-04-01

Li X, Shen T, Garcia CL, et al (2025)

A 30-meter resolution global land productivity dynamics dataset from 2013 to 2022.

Scientific data, 12(1):555.

Land degradation is one of the most severe environmental challenges globally. To address its adverse impacts, the United Nations endorsed Land Degradation Neutrality (SDG 15.3) within the Sustainable Development Goals in 2015. The trend in land productivity is a key sub-indicator for reporting progress toward SDG 15.3. Currently, the highest spatial resolution of global land productivity dynamics (LPD) products is 250 m, which seriously hampers SDG 15.3 reporting and intervention at fine scales. Generating a higher-spatial-resolution product faces significant challenges, including massive data processing, cloud contamination of imagery, and incompatible spatiotemporal resolutions. This study, leveraging the Google Earth Engine platform and Landsat-8 and MODIS imagery, employed a gap-filling and Savitzky-Golay filtering algorithm together with an advanced spatiotemporal filtering method to obtain a high-quality 30-meter NDVI dataset, from which the global 30-meter LPD product for 2013 to 2022 was generated using the FAO-WOCAT methodology and compared against multiple datasets. This is the first global-scale 30-meter LPD dataset, providing essential data support for SDG 15.3 monitoring and reporting globally.
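
As a rough illustration of the gap-filling and Savitzky-Golay smoothing step described above, the following minimal Python sketch applies linear interpolation and scipy's savgol_filter to a toy NDVI time series; the series values, window length, and polynomial order are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy NDVI time series with cloud-contaminated gaps (NaN); values are invented.
ndvi = np.array([0.21, 0.25, np.nan, 0.38, 0.52, np.nan, np.nan,
                 0.71, 0.78, 0.74, 0.61, np.nan, 0.35, 0.27])

# Gap-filling: simple linear interpolation over the missing observations.
t = np.arange(ndvi.size)
gaps = np.isnan(ndvi)
filled = ndvi.copy()
filled[gaps] = np.interp(t[gaps], t[~gaps], ndvi[~gaps])

# Savitzky-Golay smoothing: fits a low-order polynomial in a sliding window,
# suppressing residual noise while preserving the phenological curve shape.
smoothed = savgol_filter(filled, window_length=7, polyorder=2)
print(np.round(smoothed, 3))
```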

RevDate: 2025-04-01

Hao R, Zhao Y, Zhang S, et al (2025)

Deep Learning for Ocean Forecasting: A Comprehensive Review of Methods, Applications, and Datasets.

IEEE transactions on cybernetics, PP: [Epub ahead of print].

As a longstanding scientific challenge, accurate and timely ocean forecasting has always been a sought-after goal for ocean scientists. However, traditional theory-driven numerical ocean prediction (NOP) suffers from various challenges, such as the indistinct representation of physical processes, inadequate application of observation assimilation, and inaccurate parameterization of models, which make it difficult to extract effective knowledge from massive observations and impose enormous computational burdens. With the successful evolution of data-driven deep learning in various domains, it has been demonstrated to mine patterns and deep insights from the ever-increasing stream of oceanographic spatiotemporal data, providing novel possibilities for a revolution in ocean forecasting. Deep-learning-based ocean forecasting (DLOF) is anticipated to be a powerful complement to NOP. Nowadays, researchers attempt to introduce deep learning into ocean forecasting and have achieved significant progress that provides novel motivations for ocean science. This article provides a comprehensive review of the state-of-the-art DLOF research regarding model architectures, spatiotemporal multiscales, and interpretability while specifically demonstrating the feasibility of developing hybrid architectures that incorporate theory-driven and data-driven models. Moreover, we comprehensively evaluate DLOF from the perspectives of datasets, benchmarks, and cloud computing. Finally, the limitations of current research and future trends of DLOF are also discussed and prospected.

RevDate: 2025-04-01

Zhu X, Lu Y, Chen Y, et al (2025)

Optical identification of marine floating debris from Sentinel-2 MSI imagery using radiation signal difference.

Optics letters, 50(7):2330-2333.

A spaceborne optical technique for marine floating debris is developed to detect, discriminate, and quantify such debris, especially debris with weak optical signals. The technique uses only the top-of-atmosphere (TOA) signal, based on difference radiative transfer (DRT). DRT unveils diverse optical signals by referencing those within the neighborhood. Using the DRT of either simulated signals or Sentinel-2 Multispectral Instrument (MSI) data, target types can be confirmed between the two and pinpointed on a normalized type line. The line, for the most part, indicates normalized values of <0.2 for waters, 0.2-0.6 for debris, and >0.8 for algae. The classification limit for MSI is a sub-pixel fraction of 3%; above this, the boundary between debris and algae is distinct, separated by more than three standard deviations. This automated methodology unlocks the use of TOA imagery on cloud data platforms such as Google Earth Engine (GEE) and supports monitoring after coastal events such as debris dumping and algal blooms.

RevDate: 2025-03-31

Jia Z, Fan S, Wang Z, et al (2025)

Partial discharge defect recognition method of switchgear based on cloud-edge collaborative deep learning.

Scientific reports, 15(1):10956.

To address the limitations of traditional partial discharge (PD) detection methods for switchgear, which fail to meet the requirements for real-time monitoring, rapid assessment, sample fusion, and joint analysis in practical applications, a joint PD recognition method for switchgear based on edge computing and deep learning is proposed. A cloud-edge collaborative defect identification architecture for switchgear is constructed, comprising the terminal device side, terminal collection side, edge-computing side, and cloud-computing side. The PD signal of the switchgear is extracted on the terminal collection side using a UHF sensor and a broadband pulse current sensor. Multidimensional features are obtained from these signals, and a high-dimensional feature space is constructed through feature extraction and dimensionality reduction on the edge-computing side. On the cloud side, a deep belief network (DBN)-based switchgear PD defect identification method is proposed, and the PD samples acquired on the edge side are transmitted in real time to the cloud for training. Upon completion of training, the resulting model is transmitted back to the edge side for inference, thereby facilitating real-time joint analysis of PD defects across multiple switchgear units. The proposed method is verified using PD samples simulated in the laboratory. The results indicate that the proposed DBN can recognize PDs in switchgear with an accuracy of 88.03% and that, under the edge computing architecture, the training time of the switchgear PD defect type classifier can be reduced by 44.28%, overcoming the challenges of traditional diagnostic models, which are characterized by long training durations, low identification efficiency, and weak collaborative analysis capabilities.

RevDate: 2025-03-31

Yang H, L Jiang (2025)

Regulating neural data processing in the age of BCIs: Ethical concerns and legal approaches.

Digital health, 11:20552076251326123 pii:10.1177_20552076251326123.

Brain-computer interfaces (BCIs) have seen increasingly fast growth with the help of AI, algorithms, and cloud computing. While providing great benefits for both medical and educational purposes, BCIs involve the processing of neural data, which are uniquely sensitive due to their intimate nature, posing unique risks and ethical concerns, especially relating to privacy and the safe control of our neural data. In furtherance of human rights protections such as mental privacy, data laws provide more detailed and enforceable rules for processing neural data, which may balance the tension between privacy protection and the public's need for wellness promotion and scientific progress through data sharing. This article notes that most current data laws, like the GDPR, do not clearly cover neural data and are incapable of providing full protection that responds to its special nature. New legislative reforms in the U.S. states of Colorado and California made pioneering advances by incorporating neural data into data privacy laws. Yet regulatory gaps remain, as these reforms have not provided special additional rules for neural data processing. Potential problems such as static consent, vague research exceptions, and loopholes in regulating non-personal neural data need to be further addressed. We recommend relevant improvement measures taken through amending data laws or enacting special data acts.

RevDate: 2025-03-31

Bai CM, Shu YX, S Zhang (2025)

Authenticable quantum secret sharing based on special entangled state.

Scientific reports, 15(1):10819.

In this paper, a pair of quantum states is constructed based on an orthogonal array and further generalized to multi-body quantum systems. Subsequently, a novel physical process is designed, aimed at effectively masking quantum states within multipartite quantum systems. Based on this masker, a new authenticable quantum secret sharing scheme is proposed that can realize a class of special access structures. In the distribution phase, an unknown quantum state is shared safely among multiple participants, and this secret quantum state is embedded into a multi-particle entangled state using the masking approach. In the reconstruction phase, a series of precisely designed measurements and corresponding unitary operations are performed by the participants in the authorized set to restore the original information quantum state. To ensure the security of the scheme, a security analysis against five major types of quantum attacks is conducted. Finally, compared with other quantum secret sharing schemes based on entangled states, the proposed scheme is found to be not only more flexible but also easier to implement on existing quantum computing cloud platforms.

RevDate: 2025-03-29
CmpDate: 2025-03-28

Davey BC, Billingham W, Davis JA, et al (2023)

Data resource profile: the ORIGINS project databank: a collaborative data resource for investigating the developmental origins of health and disease.

International journal of population data science, 8(1):2388.

INTRODUCTION: The ORIGINS Project ("ORIGINS") is a longitudinal, population-level birth cohort with data and biosample collections that aim to facilitate research to reduce non-communicable diseases (NCDs) and encourage 'a healthy start to life'. ORIGINS has gathered millions of datapoints and over 400,000 biosamples over 15 timepoints, antenatally through to five years of age, from mothers, non-birthing partners and the child, across four health and wellness domains: 'Growth and development', 'Medical, biological and genetic', 'Biopsychosocial and cognitive', 'Lifestyle, environment and nutrition'.

METHODS: Mothers, non-birthing partners and their offspring were recruited antenatally (between 18 and 38 weeks' gestation) from the Joondalup and Wanneroo communities of Perth, Western Australia from 2017 to 2024. Data come from several sources, including routine hospital antenatal and birthing data, ORIGINS clinical appointments, and online self-completed surveys comprising several standardised measures. Data are merged using the Medical Record Number (MRN), the ORIGINS Unique Identifier and the ORIGINS Pregnancy Number, as well as additional demographic data (e.g. name and date of birth) when necessary.

RESULTS: The data are held on an integrated data platform that extracts, links, ingests, integrates and stores ORIGINS' data on an Amazon Web Services (AWS) cloud-based data warehouse. Data are linked, transformed for cleaning and coding, and catalogued, ready to be provided to sub-projects (independent researchers who apply to use ORIGINS data) for their own analyses. ORIGINS maximises data quality by checking and replacing missing and erroneous data across the various data sources.

CONCLUSION: As a wide array of data across several different domains and timepoints has been collected, the options for future research and utilisation of the data and biosamples are broad. As ORIGINS aims to extend into middle childhood, researchers can examine which antenatal and early childhood factors predict middle childhood outcomes. ORIGINS also aims to link to State and Commonwealth data sets (e.g. Medicare, the National Assessment Program - Literacy and Numeracy, the Pharmaceutical Benefits Scheme) which will cater to a wide array of research questions.

RevDate: 2025-03-28
CmpDate: 2025-03-28

Steiner M, F Huettmann (2025)

Moving beyond the physical impervious surface impact and urban habitat fragmentation of Alaska: quantitative human footprint inference from the first large scale 30 m high-resolution Landscape metrics big data quantification in R and the cloud.

PeerJ, 13:e18894.

With increased globalization, man-made climate change, and urbanization, the landscape, embedded within the Anthropocene, becomes increasingly fragmented. With wilderness habitats transitioning and being lost, globally relevant regions considered 'pristine', such as Alaska, are no exception. Alaska holds 60% of the U.S. National Park system's area and is of national and international importance, considering the U.S. is one of the wealthiest nations on earth. These characteristics tie into the densities and quantities of human features, e.g., roads, houses, mines, wind parks, agriculture, trails, etc., that can be summarized as 'impervious surfaces.' These are physical impacts that actively drive urban landscape fragmentation. Using remote sensing data from the National Land Cover Database (NLCD), we attempt here to create the first quantification of this physical human impact on the Alaskan landscape and its fragmentation. We quantified these impacts using the well-established landscape metrics tool 'Fragstats', implemented as the R package "landscapemetrics" in desktop software and through the interface of a Linux cloud-computing environment. This workflow makes it possible, for the first time, to overcome the computational limitations of the conventional Fragstats software within a reasonably quick timeframe. We are thereby able to analyze a land area as large as approximately 1,517,733 km² (the state of Alaska) while maintaining a high assessment resolution of 30 m. Based on this traditional methodology, we found that Alaska has a reported physical human impact of c. 0.067%. We additionally overlaid other features that were not included in the input data to highlight the overall true human impact (e.g., roads, trails, airports, governance boundaries of game management and park units, mines, etc.). We found that, using remote sensing (human impact layers), Alaska's human impact is considerably underestimated, to the point that the estimate is meaningless. The state is more seriously fragmented and affected by humans than commonly assumed. Very few areas are truly untouched, and the study area displays a high patch density with correspondingly low mean patch sizes throughout. Instead, the true human impact is likely close to 100% throughout Alaska for several metrics. With these newly created insights, we provide the first state-wide landscape data and inference, likely of considerable importance for land management entities in the state of Alaska and for the U.S. National Park system overall, especially in a changing climate. Likewise, the methodological framework presented here demonstrates an Open Access workflow and can be used as a reference to be reproduced virtually anywhere else on the planet to assess more realistic large-scale landscape metrics. It can also be used to assess human impacts on the landscape for more sustainable landscape stewardship and mitigation in policy.

RevDate: 2025-03-28

Chaikovsky I, Dziuba D, Kryvova O, et al (2025)

Subtle changes on electrocardiogram in severe patients with COVID-19 may be predictors of treatment outcome.

Frontiers in artificial intelligence, 8:1561079.

BACKGROUND: Two years after the COVID-19 pandemic, it became known that one of the complications of this disease is myocardial injury. Electrocardiography (ECG) and cardiac biomarkers play a vital role in the early detection of cardiovascular complications and risk stratification. The study aimed to investigate the value of a new electrocardiographic metric for detecting minor myocardial injury in patients during COVID-19 treatment.

METHODS: The study was conducted in 2021. A group of 26 patients with a verified COVID-19 diagnosis admitted to the intensive care unit for infectious diseases was examined. The severity of each patient's condition was calculated using the NEWS score. Digital ECGs were recorded repeatedly (at the beginning of treatment and 2-4 times during it). A total of 240 primary and composite ECG parameters were analyzed for each electrocardiogram. Six of these patients died during treatment. Cluster analysis was used to identify subgroups of patients that differed significantly in disease severity (NEWS), SpO2, and an integral ECG index (an indicator of the state of the cardiovascular system).

RESULTS: Using analysis of variance (repeated-measures ANOVA), changes in the indicators within subgroups at the end of treatment were assessed statistically. These subgroup differences persisted at the end of treatment. To identify potential predictors of mortality, critical clinical and ECG parameters of surviving (S) and non-surviving (D) patients were compared using parametric and non-parametric statistical tests. A decision tree model to classify survival in patients with COVID-19 was constructed based on partial ECG parameters and the NEWS score.

CONCLUSION: A comparison of potential mortality predictors showed no significant differences in vital signs between survivors and non-survivors at the beginning of treatment. A set of ECG parameters significantly associated with treatment outcomes was identified; these may serve as predictors of COVID-19 mortality: T-wave morphology (SVD), Q-wave amplitude, and R-wave amplitude (lead I).

RevDate: 2025-03-27

Kodumuru R, Sarkar S, Parepally V, et al (2025)

Artificial Intelligence and Internet of Things Integration in Pharmaceutical Manufacturing: A Smart Synergy.

Pharmaceutics, 17(3): pii:pharmaceutics17030290.

Background: The integration of artificial intelligence (AI) with the internet of things (IoTs) represents a significant advancement in pharmaceutical manufacturing and effectively bridges the gap between digital and physical worlds. With AI algorithms integrated into IoTs sensors, there is an improvement in the production process and quality control for better overall efficiency. This integration facilitates machine learning and deep learning for real-time analysis, predictive maintenance, and automation, continuously monitoring key manufacturing parameters.

Objective: This paper reviews the current applications and potential impacts of integrating AI and the IoTs, in concert with key enabling technologies like cloud computing and data analytics, within the pharmaceutical sector.

Results: Applications discussed herein focus on industrial predictive analytics and quality, underpinned by case studies showing improvements in product quality and reductions in downtime. Yet many challenges remain, including data integration, the ethical implications of AI-driven decisions, and, most of all, regulatory compliance. This review also discusses recent trends, such as AI in drug discovery and blockchain for data traceability, with the intent to outline the future of autonomous pharmaceutical manufacturing.

Conclusions: In the end, this review points to basic frameworks and applications that illustrate ways to overcome existing barriers to production with increased efficiency, personalization, and sustainability.

RevDate: 2025-03-26

Hussain A, Aleem M, Ur Rehman A, et al (2025)

DE-RALBA: dynamic enhanced resource aware load balancing algorithm for cloud computing.

PeerJ. Computer science, 11:e2739.

Cloud computing provides an opportunity to gain access to large-scale, high-speed resources without establishing one's own computing infrastructure for executing high-performance computing (HPC) applications. The cloud offers computing resources (i.e., computational power, storage, operating systems, networks, databases, etc.) as a public utility and provides services to end users on a pay-as-you-go model. For the past several years, the efficient utilization of resources on a compute cloud has been a prime interest of the scientific community. One of the key reasons behind inefficient resource utilization is the imbalanced distribution of workload when executing HPC applications in a heterogeneous computing environment. Static scheduling techniques usually produce lower resource utilization and higher makespan, while dynamic scheduling achieves better resource utilization and load balancing by incorporating a dynamic resource pool. Dynamic techniques, however, incur increased overhead by requiring continuous system monitoring, job requirement assessment, and real-time allocation decisions. This additional load can impact the performance and responsiveness of the computing system. In this article, a dynamic enhanced resource-aware load balancing algorithm (DE-RALBA) is proposed to mitigate load imbalance in job scheduling by considering the computing capabilities of all VMs in cloud computing. Empirical assessments are performed in the CloudSim simulator using instances of two scientific benchmark datasets (i.e., heterogeneous computing scheduling problem (HCSP) instances and the Google Cloud Jobs (GoCJ) dataset). The results reveal that DE-RALBA mitigates load imbalance and provides significant improvements in makespan and resource utilization over existing algorithms, namely PSSLB, PSSELB, Dynamic MaxMin, and DRALBA. On HCSP instances, DE-RALBA achieves up to 52.35% better resource utilization compared to the existing technique, with even greater resource utilization achieved on the GoCJ dataset.
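
The core idea behind resource-aware load balancing of this kind, giving each VM a workload share proportional to its computing capability, can be sketched as follows. This is a minimal illustration under assumed VM speeds and job sizes, not the actual DE-RALBA algorithm.

```python
# Minimal sketch of capability-proportional scheduling (hypothetical
# values; not the exact DE-RALBA algorithm).
vms = {"vm0": 1000, "vm1": 2000, "vm2": 4000}    # MIPS ratings
jobs = [8000, 5000, 3000, 2000, 1500, 900, 400]  # job lengths (MI)

total_power = sum(vms.values())
total_load = sum(jobs)
# Each VM's fair share of the total load, proportional to its speed.
share = {v: total_load * p / total_power for v, p in vms.items()}

assigned = {v: 0 for v in vms}
schedule = {v: [] for v in vms}
for job in sorted(jobs, reverse=True):           # largest jobs first
    # Pick the VM with the largest remaining share of its quota.
    target = max(vms, key=lambda v: share[v] - assigned[v])
    schedule[target].append(job)
    assigned[target] += job

# Makespan: the completion time of the most loaded VM.
makespan = max(assigned[v] / vms[v] for v in vms)
print(schedule, round(makespan, 2))
```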

RevDate: 2025-03-26

Ramezani R, Iranmanesh S, Naeim A, et al (2025)

Editorial: Bench to bedside: AI and remote patient monitoring.

Frontiers in digital health, 7:1584443.

RevDate: 2025-03-26

Evangelista JE, Ali-Nasser T, Malek LE, et al (2025)

lncRNAlyzr: Enrichment Analysis for lncRNA Sets.

Journal of molecular biology pii:S0022-2836(25)00004-X [Epub ahead of print].

lncRNAs make up a large portion of the human genome and affect many biological processes in normal physiology and disease. However, human lncRNAs are understudied compared to protein-coding genes. While there are many tools for performing gene set enrichment analysis for coding genes, few tools exist for lncRNA enrichment analysis. lncRNAlyzr is a webserver application designed for lncRNA enrichment analysis. lncRNAlyzr has a database containing 33 lncRNA set libraries created by computing correlations between lncRNAs and annotated coding gene sets. After users submit a set of lncRNAs to lncRNAlyzr, the enrichment analysis results are visualized as ball-and-stick subnetworks where nodes are lncRNAs connected to enrichment terms from across selected lncRNA set libraries. To demonstrate lncRNAlyzr, it was used to analyze the effects of knocking down the lncRNA CYTOR in K562 cells. Overall, lncRNAlyzr is an enrichment analysis tool for lncRNAs that aims to further our understanding of lncRNA functional modules. lncRNAlyzr is available from: https://lncrnalyzr.maayanlab.cloud.
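
Enrichment analysis of a submitted set against a library term commonly reduces to an overlap test such as the hypergeometric test; the sketch below illustrates that generic computation with invented counts, and does not reproduce lncRNAlyzr's actual statistics.

```python
from scipy.stats import hypergeom

# Hypothetical numbers: N annotated lncRNAs in the background,
# K members of a library term, n lncRNAs in the user's input set,
# k of which overlap the term.
N, K, n, k = 20000, 300, 150, 12

# P(overlap >= k) under random sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3e}")
```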

RevDate: 2025-03-27
CmpDate: 2025-03-26

Sng LMF, Kaphle A, O'Brien MJ, et al (2025)

Optimizing UK biobank cloud-based research analysis platform to fine-map coronary artery disease loci in whole genome sequencing data.

Scientific reports, 15(1):10335.

We conducted the first comprehensive association analysis of a coronary artery disease (CAD) cohort within the recently released UK Biobank (UKB) whole genome sequencing dataset. We employed the fine-mapping tool PolyFun and pinpointed rs10757274 as the most likely causal SNV within the 9p21.3 CAD risk locus. Notably, we show that machine-learning (ML) approaches, REGENIE and VariantSpark, exhibited greater sensitivity than traditional single-SNV logistic regression, uncovering rs28451064, a known risk locus in 21q22.11. Our findings underscore the utility of leveraging advanced computational techniques and cloud-based resources for mega-biobank analyses. Aligning with the paradigm shift of bringing compute to data, we demonstrate a 44% cost reduction and 94% speedup through compute architecture optimisation on UK Biobank's Research Analysis Platform using our RAPpoet approach. We discuss three considerations for researchers implementing novel workflows for datasets hosted on cloud platforms, paving the way for harnessing mega-biobank-sized data through scalable, cost-effective cloud computing solutions.

RevDate: 2025-03-20

Madan B, Nair S, Katariya N, et al (2025)

Smart waste management and air pollution forecasting: Harnessing Internet of things and fully Elman neural network.

Waste management & research : the journal of the International Solid Wastes and Public Cleansing Association, ISWA [Epub ahead of print].

As the Internet of things (IoT) continues to transform modern technologies, innovative applications in waste management and air pollution monitoring are becoming critical for sustainable development. In this manuscript, a novel smart waste management (SWM) and air pollution forecasting (APF) system is proposed by leveraging IoT sensors and the fully Elman neural network (FENN) model, termed SWM-APF-IoT-FENN. The system integrates real-time data from waste and air quality sensors, including weight, trash level, odour, and carbon monoxide (CO), collected from smart bins connected to a Google Cloud server. Here, the MaxAbsScaler is employed for data normalization, ensuring consistent feature representation. Subsequently, the atmospheric contaminants surrounding the waste receptacles are observed using the FENN model, which predicts the atmospheric concentration of CO and categorizes bin status as filled, half-filled, or unfilled. Moreover, the weight parameters of the FENN model are tuned using the secretary bird optimization algorithm for better prediction results. The proposed methodology is implemented in Python, and the performance metrics are analysed. Experimental results demonstrate significant improvements in performance: 15.65%, 18.45% and 21.09% higher accuracy; 18.14%, 20.14% and 24.01% higher F-measure; 23.64%, 24.29% and 29.34% higher False Acceptance Rate (FAR); 25.00%, 27.09% and 31.74% higher precision; 20.64%, 22.45% and 28.64% higher sensitivity; 26.04%, 28.65% and 32.74% higher specificity; and 9.45%, 7.38% and 4.05% reduced computational time relative to conventional approaches, namely the Elman neural network, recurrent artificial neural network, and long short-term memory with gated recurrent unit, respectively. Thus, the proposed method offers a streamlined, efficient framework for real-time waste management and pollution forecasting, addressing critical environmental challenges.
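
The MaxAbsScaler normalization step mentioned above can be illustrated directly with scikit-learn; the sensor readings here are invented placeholders.

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

# Hypothetical smart-bin readings: [weight (kg), fill level (%), CO (ppm)].
readings = np.array([
    [12.0, 80.0, 9.0],
    [ 3.5, 25.0, 2.0],
    [ 7.2, 55.0, 4.5],
])

# MaxAbsScaler divides each feature by its maximum absolute value,
# mapping every column into [-1, 1] without shifting or centering the data.
scaled = MaxAbsScaler().fit_transform(readings)
print(np.round(scaled, 3))
```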

RevDate: 2025-03-20

Isaac RA, Sundaravadivel P, Marx VSN, et al (2025)

Enhanced novelty approaches for resource allocation model for multi-cloud environment in vehicular Ad-Hoc networks.

Scientific reports, 15(1):9472.

As the number of service requests for applications continues to increase under various conditions, limitations on the number of resources pose a barrier to providing applications with appropriate Quality of Service (QoS) assurances. As a result, an efficient scheduling mechanism is required to determine the order in which application requests are handled, as well as the appropriate use of the broadcast medium and data transfer. In this paper, an innovative approach incorporating the Crossover and Mutation (CM)-centered Marine Predator Algorithm (MPA) is introduced for effective resource allocation. This strategic resource allocation optimally schedules resources within the Vehicular Edge Computing (VEC) network, ensuring the most efficient utilization. The proposed method begins with meticulous feature extraction from the vehicular network model, covering attributes such as mobility patterns, transmission medium, bandwidth, storage capacity, and packet delivery ratio. For further analysis, the Elephant Herding Lion Optimizer (EHLO) algorithm is employed to pinpoint the most critical attributes. Subsequently, the Modified Fuzzy C-Means (MFCM) algorithm is used for efficient vehicle clustering centred on the selected attributes. These clustered vehicle characteristics are then transferred to and stored within the cloud server infrastructure. The performance of the proposed methodology is evaluated through simulation in MATLAB. This study offers a comprehensive solution to the resource allocation challenge in vehicular cloud networks, addresses the burgeoning demands of modern applications while ensuring QoS assurances, and signifies a significant advancement in the field of VEC.

RevDate: 2025-03-20

Rajavel R, Krishnasamy L, Nagappan P, et al (2025)

Cloud-enabled e-commerce negotiation framework using bayesian-based adaptive probabilistic trust management model.

Scientific reports, 15(1):9457.

Enforcing a trust management model in a broker-based negotiation context is identified as a foremost challenge. Creating such a trust model is not a purely technical issue; rather, the technology should enhance the cloud service negotiation framework to improve the utility value and success rate of the bargaining participants (consumer, broker, and service provider) during negotiation. In existing negotiation frameworks, trust was established using reputation-, self-assessment-, identity-, evidence-, and policy-based evaluation techniques to maximize the negotiators' (cloud participants') utility value and success rate. For further improvement, a Bayesian-based adaptive probabilistic trust management model is enforced in the proposed broker-based trusted cloud service negotiation framework. This adaptive model dynamically ranks service provider agents by estimating success rate, cooperation rate, and honesty rate factors to effectively measure trustworthiness among the participants. The measured trustworthiness value is used by broker agents to prioritize trusted provider agents over non-trusted ones, which minimizes bargaining conflict between the participants and enhances future bargaining progression. In addition, the proposed adaptive probabilistic trust management model formulates the sequence of bilateral negotiations among the participants as a Bayesian learning process. Finally, the performance of the proposed cloud-enabled e-commerce negotiation framework with the Bayesian-based adaptive probabilistic trust management model is compared with existing frameworks under different numbers of negotiation rounds.
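
A common way to realize Bayesian trust estimation is a Beta-Bernoulli update over observed interaction outcomes. The sketch below is a generic illustration under that assumption, with invented outcome histories, and is not the paper's exact formulation.

```python
# Generic Beta-Bernoulli trust update (illustrative, not the
# paper's exact model).
class BayesianTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # prior pseudo-count of honest outcomes
        self.beta = beta    # prior pseudo-count of dishonest outcomes

    def observe(self, honest: bool) -> None:
        if honest:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Posterior mean probability that the provider behaves honestly.
        return self.alpha / (self.alpha + self.beta)

providers = {"p1": BayesianTrust(), "p2": BayesianTrust()}
for outcome in [True, True, False, True]:
    providers["p1"].observe(outcome)
for outcome in [True, False, False]:
    providers["p2"].observe(outcome)

# A broker would rank providers by posterior trust before negotiation.
ranking = sorted(providers, key=lambda p: providers[p].trust, reverse=True)
print(ranking, [round(providers[p].trust, 3) for p in ranking])
```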

RevDate: 2025-03-20
CmpDate: 2025-03-20

Savitha C, R Talari (2025)

Evaluating the performance of random forest, support vector machine, gradient tree boost, and CART for improved crop-type monitoring using greenest pixel composite in Google Earth Engine.

Environmental monitoring and assessment, 197(4):437.

The development of machine learning algorithms, along with high-resolution satellite datasets, supports improved agricultural monitoring and mapping. Nevertheless, the use of high-resolution optical satellite datasets is usually constrained by clouds and shadows, which prevent capturing complete crop phenology and thus limit map accuracy. Moreover, identifying a suitable classification algorithm is essential, as the performance of each machine learning algorithm depends on input datasets, hyperparameter tuning, and training and testing samples, among other factors. To overcome the limitations of clouds and shadows in optical data, this study employs a Sentinel-2 greenest pixel composite to generate a nearly accurate crop-type map for an agricultural watershed in Tadepalligudem, India. To identify a suitable machine learning model, the study also evaluates and compares the performance of four machine learning algorithms: gradient tree boost, classification and regression tree, support vector machine, and random forest (RF). Crop-type maps are generated for two cropping seasons, Kharif and Rabi, in Google Earth Engine (GEE), a robust cloud computing platform. To train and test these algorithms, ground truth data were collected and divided in a 70:30 ratio for training and testing, respectively. The results demonstrate the ability of the greenest pixel composite method to identify and map crop types in small watersheds, even during the Kharif season. Further, among the four machine learning algorithms employed, RF outperforms the other classifiers in both Kharif and Rabi seasons, with an average overall accuracy of 93.21% and a kappa coefficient of 0.89. Furthermore, the study showcases the potential of the GEE cloud computing platform to enhance automatic agricultural monitoring from satellite datasets while requiring minimal computational storage and processing.
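
The greenest-pixel-composite workflow can be sketched with the Earth Engine Python API roughly as follows; the date range, region, asset name, and tree count are placeholders, and `samples` is assumed to be a labeled ground-truth FeatureCollection.

```python
import ee
ee.Initialize()

# Placeholder region near Tadepalligudem and a hypothetical labeled asset.
roi = ee.Geometry.Point(81.52, 16.81).buffer(5000)
samples = ee.FeatureCollection('users/example/crop_points')  # hypothetical

# Sentinel-2 surface reflectance over an illustrative Kharif window.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterDate('2021-06-01', '2021-10-31')
      .filterBounds(roi))

def add_ndvi(img):
    return img.addBands(img.normalizedDifference(['B8', 'B4']).rename('NDVI'))

# Greenest-pixel composite: per pixel, keep the observation with max NDVI.
composite = s2.map(add_ndvi).qualityMosaic('NDVI')

# Train a random forest on ground-truth points carrying a 'class' property.
training = composite.sampleRegions(collection=samples,
                                   properties=['class'], scale=10)
classifier = ee.Classifier.smileRandomForest(100).train(training, 'class')
crop_map = composite.classify(classifier)
```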

RevDate: 2025-03-19

Ding X, Liu Y, Ning J, et al (2025)

Blockchain-Enhanced Anonymous Data Sharing Scheme for 6G-Enabled Smart Healthcare With Distributed Key Generation and Policy Hiding.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

In recent years, cloud computing has seen widespread application in 6G-enabled smart healthcare, facilitating the sharing of medical data. Before uploading medical data to the cloud server, numerous data sharing schemes employ attribute-based encryption (ABE) to encrypt the sensitive medical data of the data owner (DO) and only provide access to data users (DU) who meet certain conditions, which can lead to privacy leakage, single points of failure, and other problems. This paper proposes a blockchain-enhanced anonymous data sharing scheme for 6G-enabled smart healthcare with distributed key generation and policy hiding, termed BADS-ABE, which achieves secure and efficient sharing of sensitive medical data. BADS-ABE designs an anonymous authentication scheme based on the Groth signature, which ensures the integrity of medical data and protects the identity privacy of the DO. Meanwhile, BADS-ABE employs smart contracts and Newton interpolation to achieve distributed key generation, eliminating the single point of failure caused by reliance on a trusted authority (TA). Moreover, BADS-ABE achieves policy hiding and matching, which avoids wasting decryption resources and protects the attribute privacy of the DO. Finally, security analysis demonstrates that BADS-ABE meets the security requirements of a data sharing scheme for smart healthcare, and performance analysis indicates that BADS-ABE is more efficient than similar data sharing schemes.
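
Interpolation-based distributed key generation follows the same algebra as Shamir secret sharing: the key is the constant term of a polynomial that a threshold of parties can reconstruct. The toy sketch below uses the Lagrange form (equivalent to Newton's form, which yields the same polynomial) over a prime field; the prime and shares are illustrative.

```python
# Toy interpolation-based key reconstruction over GF(p).
# Same algebra as Shamir secret sharing: the secret key is f(0).
P = 2**61 - 1  # a Mersenne prime as the field modulus (toy choice)

def reconstruct(shares):
    """shares: list of (x, f(x)) points; returns f(0) mod P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P      # product of (0 - xj)
                den = den * (xi - xj) % P  # product of (xi - xj)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# f(x) = 123456789 + 17x + 5x^2, threshold t = 3.
pts = [(1, (123456789 + 17*1 + 5*1) % P),
       (2, (123456789 + 17*2 + 5*4) % P),
       (3, (123456789 + 17*3 + 5*9) % P)]
print(reconstruct(pts))  # -> 123456789
```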

RevDate: 2025-03-19

Han X, Wang J, Wu J, et al (2025)

Energy-efficient cloud systems: Virtual machine consolidation with Γ-robustness optimization.

iScience, 28(3):111897.

This study addresses the challenge of virtual machine (VM) placement in cloud computing to improve resource utilization and energy efficiency. We propose a mixed integer linear programming (MILP) model incorporating Γ-robustness theory to handle uncertainties in VM usage, optimizing both performance and energy consumption. A heuristic algorithm is developed for large-scale VM allocation. Experiments with Huawei Cloud data demonstrate significant improvements in resource utilization and energy efficiency.
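
A minimal (non-robust) MILP for VM placement can be written with PuLP as below; the capacities, demands, and energy coefficients are toy values, and the Γ-robustness terms of the paper's model are omitted.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

vms = {"v1": 4, "v2": 2, "v3": 3}    # CPU demand per VM (toy values)
hosts = {"h1": 8, "h2": 8}           # CPU capacity per host
power = {"h1": 1.0, "h2": 1.2}       # relative energy cost per active host

prob = LpProblem("vm_placement", LpMinimize)
x = {(v, h): LpVariable(f"x_{v}_{h}", cat=LpBinary) for v in vms for h in hosts}
y = {h: LpVariable(f"y_{h}", cat=LpBinary) for h in hosts}  # host active?

prob += lpSum(power[h] * y[h] for h in hosts)               # minimize energy
for v in vms:                                               # place each VM once
    prob += lpSum(x[v, h] for h in hosts) == 1
for h in hosts:                                             # respect capacity
    prob += lpSum(vms[v] * x[v, h] for v in vms) <= hosts[h] * y[h]

prob.solve()
print({(v, h): 1 for v in vms for h in hosts if x[v, h].value()})
```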

RevDate: 2025-03-19

Sarkar C, Das A, RK Jain (2025)

Development of CoAP protocol for communication in mobile robotic systems using IoT technique.

Scientific reports, 15(1):9269.

This paper proposes a novel design methodology for the Constrained Application Protocol (CoAP) in an IoT-enabled mobile robot system that can be operated remotely and accessed wirelessly. Such devices can be used in different applications, including monitoring, inspection, robotics, healthcare, etc. For communicating with such devices, different IoT frameworks can be deployed to attain secure transmission using protocols such as HTTP, MQTT, and CoAP. In this paper, novel IoT-enabled communication using the CoAP protocol in mobile robotic systems is attempted. A mathematical analysis of the CoAP model is carried out, showing that the protocol provides a faster response with lower power consumption compared to other protocols. The main advantage of CoAP is that it facilitates Machine-to-Machine (M2M) communication with features such as small packet overhead and low power consumption. An experimental prototype has been developed, and several trials have been conducted to evaluate the CoAP protocol's performance for rapid communication within the mobile robotic system. Signal strength analysis was also carried out, revealing that the reliability of signal delivery is up to 99%. Thus, the CoAP protocol shows strong potential for developing IoT-enabled mobile robotic systems and allied applications.
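
For reference, a minimal CoAP GET request might look like the following sketch using the aiocoap Python library (an assumption; the paper does not name its implementation), against a placeholder robot endpoint.

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_robot_status():
    # The client context handles the UDP transport and CoAP retransmissions.
    protocol = await Context.create_client_context()
    # Placeholder URI for an IoT-enabled robot's status resource.
    request = Message(code=GET, uri='coap://192.168.1.50/robot/status')
    response = await protocol.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(read_robot_status())
```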

RevDate: 2025-03-17

Liu G, Lei J, Guo Z, et al (2025)

Lightweight obstacle detection for unmanned mining trucks in open-pit mines.

Scientific reports, 15(1):9028.

This paper addresses the difficulty of balancing model size and detection accuracy in detection networks for unmanned mining trucks in open-pit mines, as well as the poor suitability of existing models for mining truck equipment. To this end, we propose a lightweight vehicle detection model based on an improved YOLOv8. Through a series of innovative structural adjustments and optimization strategies, the model achieves high accuracy and low complexity. We replace the backbone network of YOLOv8s with the FasterNet_t0 (FN) network, whose simple, highly lightweight structure effectively reduces the model's computation and parameter count. The feature extraction structure of the YOLOv8 neck is then replaced with a BiFPN (Bi-directional Feature Pyramid Network). By adding cross-layer connections and removing nodes that contribute little to feature fusion, this optimizes the fusion and utilization of features at different scales, further improves model performance, and reduces the number of parameters and computations. To compensate for the possible loss of accuracy caused by these lightweight improvements, the detection head is replaced with Dynamic Head, which introduces a self-attention mechanism across the three dimensions of scale, space, and task, significantly improving detection accuracy while avoiding additional computational burden. For the loss function, we introduce a combination of SIoU loss and NWD (normalized Gaussian Wasserstein distance) loss. These two adjustments enable the model to cope with different scenarios more accurately; in particular, detection of small-target mining trucks is significantly improved. In addition, we adopt layer-adaptive magnitude-based pruning (LAMP) to further compress the model size while maintaining efficient detection performance. Through this pruning strategy, the model further reduces its dependence on computing resources while preserving key performance. In the experimental part, a dataset of 3,000 images was constructed and preprocessed, including image enhancement, denoising, cropping, and scaling. The experimental environment was set up on an Autodl cloud server using the PyTorch 2.5.1 framework and a Python 3.10 environment. Four sets of ablation experiments verified the specific impact of each improvement on model performance. The experimental results show that the lightweight improvement strategy significantly improves the detection accuracy of the model while greatly reducing its parameter and computation counts. Finally, we conducted a comprehensive comparison of the improved YOLOv8s model against other popular algorithms and models. The results show that our model leads in detection accuracy at 76.9%, more than 10% higher than similar models, while being only about 20% of the size of other models that achieve similar accuracy. These results demonstrate that the adopted improvement strategy is feasible and offers clear advantages in model efficiency.

RevDate: 2025-03-15

Lee H, K Jun (2025)

Range dependent Hamiltonian algorithms for numerical QUBO formulation.

Scientific reports, 15(1):8819.

With the advent and development of quantum computers, various quantum algorithms have been developed that can solve linear equations and eigenvalue problems faster than classical computers. In particular, the hybrid solver provided by D-Wave's Leap quantum cloud service can utilize up to two million variables. Using this technology, quadratic unconstrained binary optimization (QUBO) models have been proposed for linear systems, eigenvalue problems, RSA cryptosystems, and computed tomography (CT) image reconstruction. Generally, a QUBO formulation is obtained through simple arithmetic operations, which offers great potential for future development as quantum computers progress. A common method is to binarize the variables and map them to multiple qubits: to achieve 64-bit accuracy per variable, 64 logical qubits must be used. Finding the global minimum energy in quantum optimization becomes more difficult as more logical qubits are used; thus, a quantum parallel computing algorithm that can create and compute multiple QUBO models is introduced here. This new algorithm divides the full range each variable can take into multiple subranges and generates a QUBO model for each. This paper demonstrates the superior performance of the new algorithm, particularly when combined with an algorithm for binary variables.
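
The binarization idea can be made concrete with a tiny example: encode an integer variable in bits and expand a squared objective into QUBO coefficients. The bit width and target below are toy values, not the paper's formulation.

```python
import itertools
import numpy as np

# Encode x = sum_i 2^i * b_i with 3 bits and minimize (x - 5)^2.
# Expanding gives a QUBO: sum_i Q_ii b_i + sum_{i<j} Q_ij b_i b_j (+ const).
bits, target = 3, 5
w = [2**i for i in range(bits)]          # bit weights

Q = np.zeros((bits, bits))
for i in range(bits):
    # b_i^2 = b_i for binaries: the diagonal gets w_i^2 - 2*target*w_i.
    Q[i, i] = w[i] * w[i] - 2 * target * w[i]
    for j in range(i + 1, bits):
        Q[i, j] = 2 * w[i] * w[j]        # cross terms

# Brute-force check of the QUBO minimum (normally the annealer's job).
best = min(itertools.product([0, 1], repeat=bits),
           key=lambda b: np.array(b) @ Q @ np.array(b))
print(best, sum(wi * bi for wi, bi in zip(w, best)))  # -> x = 5
```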

RevDate: 2025-03-14

Weicken E, Mittermaier M, Hoeren T, et al (2025)

[Focus: artificial intelligence in medicine-Legal aspects of using large language models in clinical practice].

Innere Medizin (Heidelberg, Germany) [Epub ahead of print].

BACKGROUND: The use of artificial intelligence (AI) and natural language processing (NLP) methods in medicine, particularly large language models (LLMs), offers opportunities to advance the healthcare system and patient care in Germany. LLMs have recently gained importance, but their practical application in hospitals and practices has so far been limited. Research and implementation are hampered by a complex legal situation. It is essential to research LLMs in clinical studies in Germany and to develop guidelines for users.

OBJECTIVE: How can foundations for the data protection-compliant use of LLMs, particularly cloud-based LLMs, be established in the German healthcare system? The aim of this work is to present the data protection aspects of using cloud-based LLMs in clinical research and patient care in Germany and the European Union (EU); to this end, key statements of a legal opinion on this matter are considered. Insofar as the requirements for use are regulated by state laws (vs. federal laws), the legal situation in Berlin is used as a basis.

MATERIALS AND METHODS: As part of a research project, a legal opinion was commissioned to clarify the data protection aspects of the use of LLMs with cloud-based solutions at the Charité - University Hospital Berlin, Germany. Specific questions regarding the processing of personal data were examined.

RESULTS: The legal framework varies depending on the type of data processing and the relevant federal state (Bundesland). For anonymous data, data protection requirements need not apply. Where personal data is processed, it should be pseudonymized if possible. In the research context, patient consent is usually required to process their personal data, and data processing agreements must be concluded with the providers. Recommendations originating from LLMs must always be reviewed by medical doctors.

CONCLUSIONS: The use of cloud-based LLMs is possible as long as data protection requirements are observed. The legal framework is complex and requires transparency from providers. Future developments could increase the potential of AI and particularly LLMs in everyday clinical practice; however, clear legal and ethical guidelines are necessary.

RevDate: 2025-03-14

Lv F (2025)

Research on optimization strategies of university ideological and political parenting models under the empowerment of digital intelligence.

Scientific reports, 15(1):8680.

The development of big data, artificial intelligence, cloud computing, and other new generations of intelligent technologies has triggered digital changes in the resources, forms, and modes of university civic education and has become a new engine promoting the innovation and development of the civic education model. A digital- and intelligent-technology-enabled model of university civic and political education can carry the concept of innovation through the subjects, content, processes, and scenes of education and promote the development of the ideological and political parenting model in the direction of refinement, specialization, and conscientization. Based on a differential game model, this paper comprehensively considers the characteristics of universities, enterprises, and governments and the intertemporal characteristics of their collaborative parenting and innovation behaviors. It constructs no-incentive, cost-sharing, and collaborative cooperation models, respectively, and obtains the optimal trajectories for the degree of effort, the subsidy coefficient, the optimal benefit function, and the stock of digital and intelligent technology. The conclusions are as follows: (1) resource input costs and technological innovation costs are the key driving variables of university ideological and political parenting; (2) government cost subsidies improve the degree of innovation effort of universities and enterprises, thereby achieving Pareto optimality for the three parties; (3) the degree of innovation effort, overall benefit, and technology level of the three parties under the collaborative cooperation model are better than under the other two models. Finally, the validity of the model is verified through numerical simulation analysis. An in-depth discussion of the digital-intelligence-enabled ideological and political parenting model is necessary for the high-quality development of education, helping to make ideological and political parenting more scientific and practical in the digital age.

RevDate: 2025-03-13

Alsharabi N, Alayba A, Alshammari G, et al (2025)

An end-to-end four tier remote healthcare monitoring framework using edge-cloud computing and redactable blockchain.

Computers in biology and medicine, 189:109987 pii:S0010-4825(25)00338-5 [Epub ahead of print].

The Medical Internet of Things (MIoTs) encompasses compact, energy-efficient wireless sensor devices designed to monitor patients' body outcomes. Healthcare networks provide constant data monitoring, enabling patients to live independently. Despite advancements in MIoTs, critical issues persist that can affect the Quality of Service (QoS) of the network. The wearable IoT module collects data and stores it on cloud servers, making it vulnerable to privacy breaches and attacks by unauthorized users. To address these challenges, we propose an end-to-end secure remote healthcare framework called the Four Tier Remote Healthcare Monitoring Framework (FTRHMF). This framework comprises multiple entities, including Wireless Body Sensors (WBS), Distributed Gateways (DGW), Distributed Edge Servers (DES), Blockchain Servers (BS), and Cloud Servers (CS). The framework operates in four tiers. In the first tier, WBS and DGW are authenticated to the BS using secret credentials, ensuring privacy and security for all entities. In the second tier, authenticated WBS transmit data to the DGW via a two-level Hybridized Metaheuristic Secure Federated Clustered Routing Protocol (HyMSFCRP), which leverages the Mountaineering Team-Based Optimization (MTBO) and Sea Horse Optimization (SHO) algorithms. In the third tier, sensor reports are prioritized and analyzed using Multi-Agent Deep Reinforcement Learning (MA-DRL), with the results fed into a Hybrid-Transformer Deep Learning (HTDL) model that combines a lite convolutional neural network with Swin Transformer networks to detect patient outcomes accurately. Finally, in the fourth tier, patients' outcomes are securely stored in a cloud-assisted redactable blockchain layer, allowing modifications without compromising the integrity of the original data. This research enhances network lifetime by 18.3%, reduces transmission delays by 15.6%, and improves classification accuracy by 7.4%, with a PSNR of 46.12 dB, SSIM of 0.8894, and MAE of 22.51, compared to existing works.

RevDate: 2025-03-13

Alsaleh A (2025)

Toward a conceptual model to improve the user experience of a sustainable and secure intelligent transport system.

Acta psychologica, 255:104892 pii:S0001-6918(25)00205-7 [Epub ahead of print].

The rapid advancement of automotive technologies has spurred the development of innovative applications within intelligent transportation systems (ITS), aimed at enhancing safety, efficiency and sustainability. These applications, such as advanced driver assistance systems (ADAS), vehicle-to-everything (V2X) communication and autonomous driving, are transforming transportation by enabling adaptive cruise control, lane-keeping assistance, real-time traffic management and predictive maintenance. By leveraging cloud computing and vehicular networks, intelligent transportation solutions optimize traffic flow, improve emergency response systems, and forecast potential collisions, contributing to safer and more efficient roads. This study proposes a Vehicular Cloud-based Intelligent Transportation System (VCITS) model, integrating vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication through roadside units (RSUs) and cloudlets to provide real-time access to cloud resources. A novel search and management protocol, supported by a tailored algorithm, was developed to enhance resource allocation success rates for vehicles within a defined area of interest. The study also identifies critical security vulnerabilities in smart vehicle networks, emphasizing the need for robust solutions to protect data integrity and privacy. The simulation experiments evaluated the VCITS model under various traffic densities and resource request scenarios. Results demonstrated that the proposed model effectively maintained service availability rates exceeding 85 % even under high demand. Furthermore, the system exhibited scalability and stability, with minimal service loss and efficient handling of control messages. These findings highlight the potential of the VCITS model to advance smart traffic management while addressing computational efficiency and security challenges. Future research directions include integrating cybersecurity measures and leveraging emerging technologies like 5G and 6G to further enhance system performance and safety.

RevDate: 2025-03-13

Zinchenko A, Fernandez-Gamiz U, Redchyts D, et al (2025)

An efficient parallelization technique for the coupled problems of fluid, gas and plasma mechanics in the grid environment.

Scientific reports, 15(1):8629.

The development of efficient parallelization strategies for numerical simulation methods in fluid, gas and plasma mechanics remains one of the key technological challenges in modern scientific computing. Numerical models of gas and plasma dynamics based on the Navier-Stokes and electrodynamics equations require enormous computational effort. For such cases, the use of parallel and distributed computing has proved effective. The Grid computing environment can provide virtually unlimited computational resources and data storage, convenient task launch and monitoring tools, and graphical user interfaces such as web portals and visualization systems. However, the deployment of traditional CFD solvers in the Grid environment remains very limited because it generally requires a cluster computing architecture. This study explores the applicability of distributed computing and Grid technologies for solving weak-coupled problems of fluid, gas and plasma mechanics, including flow separation control techniques such as plasma actuators that influence the boundary layer structure. Adaptation techniques for the algorithms of coupled computational fluid dynamics and electrodynamics problems for distributed computation on grid and cloud infrastructure are presented. A parallel solver suitable for the Grid infrastructure has been developed, and test calculations in the distributed computing environment have been performed. The simulation results for partially ionized separated flow behind a circular cylinder are analysed. The discussion includes performance metrics and an estimate of parallelization effectiveness. The potential of the Grid infrastructure to provide a powerful and flexible computing environment for the fast and efficient solution of weak-coupled problems of fluid, gas and plasma mechanics has been shown.

RevDate: 2025-03-13
CmpDate: 2025-03-13

Puchala S, Muchnik E, Ralescu A, et al (2025)

Automated detection of spreading depolarizations in electrocorticography.

Scientific reports, 15(1):8556.

Spreading depolarizations (SD) in the cerebral cortex are a novel mechanism of lesion development and worse outcomes after acute brain injury, but accurate diagnosis by neurophysiology is a barrier to more widespread application in neurocritical care. Here we developed an automated method for SD detection by training machine-learning models on electrocorticography data from a 14-patient cohort that included 1,548 examples of SD direct-current waveforms as identified in expert manual scoring. As determined by leave-one-patient-out cross-validation, optimal performance was achieved with a gradient-boosting model using 30 features computed from 400-s electrocorticography segments sampled at 0.1 Hz. This model was applied to continuous electrocorticography data by generating a time series of SD probability [PSD(t)], and threshold PSD(t) values to trigger SD predictions were determined empirically. The developed algorithm was then tested on a novel dataset of 10 patients, resulting in 1,252 true positive detections (/1,953; 64% sensitivity) and 323 false positives (6.5/day). Secondary manual review of false positives showed that a majority (224, or 69%) were likely real SDs, highlighting the conservative nature of expert scoring and the utility of automation. SD detection using sparse sampling (0.1 Hz) is optimal for streaming and use in cloud computing applications for neurocritical care.
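As a rough sketch of this pipeline, the following Python code trains a gradient-boosting classifier on fixed-length segments and slides a 400-s window over a recording to produce a probability time series PSD(t). The synthetic data, the stand-in summary-statistic features, and the 0.5 threshold are assumptions for illustration, not the paper's 30 features or its empirically tuned thresholds.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

FS = 0.1   # sampling rate (Hz), as in the paper
WIN = 40   # 400 s x 0.1 Hz = 40 samples per segment

def segment_features(x):
    """Stand-in for the paper's 30 features: a few simple summary statistics."""
    return [x.mean(), x.std(), x.min(), x.max(), np.ptp(x)]

rng = np.random.default_rng(0)
# Synthetic training data: label 1 marks segments containing an SD-like DC shift
X, y = [], []
for _ in range(500):
    seg = rng.normal(size=WIN)
    label = int(rng.integers(2))
    if label:
        seg[10:30] -= 5.0  # crude surrogate for a DC-shift waveform
    X.append(segment_features(seg))
    y.append(label)

model = GradientBoostingClassifier().fit(X, y)

# Continuous monitoring: slide a 400-s window over a recording to get PSD(t)
record = rng.normal(size=2000)
record[800:820] -= 5.0
feats = [segment_features(record[i:i + WIN]) for i in range(len(record) - WIN)]
psd_t = model.predict_proba(feats)[:, 1]
alarms = np.flatnonzero(psd_t > 0.5)  # threshold would be chosen empirically
print(f"first alarm at window {alarms[0] if alarms.size else None}")
```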

RevDate: 2025-03-12

Krishna K (2025)

Advancements in cache management: a review of machine learning innovations for enhanced performance and security.

Frontiers in artificial intelligence, 8:1441250.

Machine learning techniques have emerged as a promising tool for efficient cache management, helping optimize cache performance and fortify against security threats. The range of machine learning is vast, from reinforcement learning-based cache replacement policies to Long Short-Term Memory (LSTM) models predicting content characteristics for caching decisions. Diverse techniques such as imitation learning, reinforcement learning, and neural networks are extensively useful in cache-based attack detection, dynamic cache management, and content caching in edge networks. The versatility of machine learning techniques enables them to tackle various cache management challenges, from adapting to workload characteristics to improving cache hit rates in content delivery networks. A comprehensive review of various machine learning approaches for cache management is presented, which helps the community learn how machine learning is used to solve practical challenges in cache management. It includes reinforcement learning, deep learning, and imitation learning-driven cache replacement in hardware caches. Information on content caching strategies and dynamic cache management using various machine learning techniques in cloud and edge computing environments is also presented. Machine learning-driven methods to mitigate security threats in cache management have also been discussed.

RevDate: 2025-03-12

Alyas T, Abbas Q, Niazi S, et al (2025)

Multi blockchain architecture for judicial case management using smart contracts.

Scientific reports, 15(1):8471.

The infusion of technology across various domains, particularly in process-centric and multi-stakeholder sectors, demands transparency, accuracy, and scalability. This paper introduces a blockchain and smart-contract-based framework for judicial case management, proposing a private-to-public blockchain approach to establish a transparent, decentralized, and robust system. The multi-blockchain structure renders cases more transparent, distributed, and tenacious. The solution is innovative because it leverages both private and public blockchains to satisfy the unique requirements of judicial processes: transparent public access for authorized digital events and transactions occurring on the freely available blockchain, and a three-tiered private blockchain structure to address private stakeholder interactions while ensuring that operational consistency, security, and data privacy requirements are met. Leveraging the decentralized and tamper-proof approach of blockchain and cloud computing, the framework aims to increase data security and cut down on administrative burdens. This framework offers a scalable and secure solution for modernizing judicial systems, supporting smart governance's shift toward digital transparency and accountability.
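The paper's multi-blockchain design is not reproduced here, but the tamper-evidence property that motivates it can be illustrated with a minimal hash chain in Python. All names and events are hypothetical; a real deployment would use a consensus-backed ledger and smart contracts rather than a local list.

```python
import hashlib, json, time

def add_block(chain, case_event):
    """Append a judicial case event, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"event": case_event, "ts": time.time(), "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    """Recompute every hash; any edit to a stored event breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "case 42: filing registered")
add_block(chain, "case 42: hearing scheduled")
print(verify(chain))            # True
chain[0]["event"] = "tampered"
print(verify(chain))            # False: the record is tamper-evident
```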

RevDate: 2025-03-11

Bedia SV, Shapurwala MA, Kharge BP, et al (2025)

A Comprehensive Guide to Implement Artificial Intelligence Cloud Solutions in a Dental Clinic: A Review.

Cureus, 17(2):e78718.

Integrating the artificial intelligence (AI) cloud into dental clinics can enhance diagnostics, streamline operations, and improve patient care. This article explores the adoption of AI-powered cloud solutions in dental clinics, focusing on infrastructure requirements, software licensing, staff training, system optimization, and the challenges faced during implementation. It provides a detailed guide for dental practices to transition to AI cloud systems. We reviewed existing literature, technological guidelines, and practical implementation strategies for integrating AI cloud in dental practices. The methodology includes a step-by-step approach to understanding clinic needs, selecting appropriate software, training staff, and ensuring system optimization and maintenance. Integrating AI cloud solutions can drastically improve clinical outcomes and operational efficiency. Despite the challenges, proper planning, infrastructure investment, and continuous training can ensure a smooth transition and maximize the benefits of AI technologies in dental care.

RevDate: 2025-03-10

Alshardan A, Mahgoub H, Alahmari S, et al (2025)

Cloud-to-Thing continuum-based sports monitoring system using machine learning and deep learning model.

PeerJ. Computer science, 11:e2539.

Sports monitoring and analysis have seen significant advancements by integrating cloud computing and continuum paradigms facilitated by machine learning and deep learning techniques. This study presents a novel approach for sports monitoring, specifically focusing on basketball, that seamlessly transitions from traditional cloud-based architectures to a continuum paradigm, enabling real-time analysis and insights into player performance and team dynamics. Leveraging machine learning and deep learning algorithms, our framework offers enhanced capabilities for player tracking, action recognition, and performance evaluation in various sports scenarios. The proposed Cloud-to-Thing continuum-based sports monitoring system utilizes advanced techniques such as Improved Mask R-CNN for pose estimation and a hybrid metaheuristic algorithm combined with a generative adversarial network (GAN) for classification. Our system significantly improves latency and accuracy, reducing latency to 5.1 ms and achieving an accuracy of 94.25%, which outperforms existing methods in the literature. These results highlight the system's ability to provide real-time, precise, and scalable sports monitoring, enabling immediate feedback for time-sensitive applications. This research has significantly improved real-time sports event analysis, contributing to improved player performance evaluation, enhanced team strategies, and informed tactical adjustments.

RevDate: 2025-03-10

Rajagopal D, PKT Subramanian (2025)

AI augmented edge and fog computing for Internet of Health Things (IoHT).

PeerJ. Computer science, 11:e2431.

Patients today seek a more advanced and personalized health-care system that keeps up with the pace of modern living. Cloud computing delivers resources over the Internet and enables the deployment of an infinite number of applications to provide services to many sectors. The primary limitation of these cloud frameworks is their limited scalability, which results in their inability to meet these needs. An edge/fog computing environment, paired with current computing techniques, is the answer to fulfill the energy efficiency and latency requirements for the real-time collection and analysis of health data. Additionally, the Internet of Things (IoT) revolution has been essential in changing contemporary healthcare systems by integrating social, economic, and technological perspectives. This requires transitioning from conventional healthcare systems to more adaptive healthcare systems that allow patients to be identified, managed, and evaluated more easily. These techniques allow data from many sources to be integrated to effectively assess patient health status and predict potential preventive actions. A subset of the Internet of Things, the Internet of Health Things (IoHT) enables the remote exchange of data for physical processes like patient monitoring, treatment progress, observation, and consultation. Previous surveys related to healthcare mainly focused on architecture and networking, which left untouched important aspects of smart systems like optimal computing techniques such as artificial intelligence, deep learning, advanced technologies, and services that include 5G and unified communication as a service (UCaaS). This study aims to examine future and existing fog and edge computing architectures and methods that have been augmented with artificial intelligence (AI) for use in healthcare applications, as well as to define the demands and challenges of incorporating fog and edge computing technology in IoHT, thereby helping healthcare professionals and technicians identify the relevant technologies required based on their need for developing IoHT frameworks for remote healthcare. Among the crucial elements to take into account in an IoHT framework are efficient resource management, low latency, and strong security. This review addresses several machine learning techniques for efficient resource management in the IoT, where machine learning (ML) and AI are crucial. It has been noted how the use of modern technologies, such as narrow band-IoT (NB-IoT) for wider coverage and Blockchain technology for security, is transforming IoHT. The last part of the review focuses on the future challenges posed by advanced technologies and services. This study provides prospective research suggestions for enhancing edge and fog computing services for healthcare with modern technologies in order to provide patients with an improved quality of life.

RevDate: 2025-03-07
CmpDate: 2025-03-07

Parciak M, Pierlet N, LM Peeters (2025)

Empowering Health Care Actors to Contribute to the Implementation of Health Data Integration Platforms: Retrospective of the medEmotion Project.

Journal of medical Internet research, 27:e68083 pii:v27i1e68083.

Health data integration platforms are vital to drive collaborative, interdisciplinary medical research projects. Developing such a platform requires input from different stakeholders. Managing these stakeholders and steering platform development is challenging, and misaligning the platform to the partners' strategies might lead to a low acceptance of the final platform. We present the medEmotion project, a collaborative effort among 7 partners from health care, academia, and industry to develop a health data integration platform for the region of Limburg in Belgium. We focus on the development process and stakeholder engagement, aiming to give practical advice for similar future efforts based on our reflections on medEmotion. We introduce Personas to paraphrase different roles that stakeholders take and Demonstrators that summarize personas' requirements with respect to the platform. Both the personas and the demonstrators serve 2 purposes. First, they are used to define technical requirements for the medEmotion platform. Second, they represent a communication vehicle that simplifies discussions among all stakeholders. Based on the personas and demonstrators, we present the medEmotion platform based on components from the Microsoft Azure cloud. The demonstrators are based on real-world use cases and showcase the utility of the platform. We reflect on the development process of medEmotion and distill takeaway messages that will be helpful for future projects. Investing in community building, stakeholder engagement, and education is vital to building an ecosystem for a health data integration platform. Instead of academic-led projects, the health care providers themselves ideally drive collaboration among health care providers. The providers are best positioned to address hospital-specific requirements, while academics take a neutral mediator role. This also includes the ideation phase, where it is vital to ensure the involvement of all stakeholders. Finally, balancing innovation with implementation is key to developing an innovative yet sustainable health data integration platform.

RevDate: 2025-03-06

Lee H, Kim W, Kwon N, et al (2025)

Lessons from national biobank projects utilizing whole-genome sequencing for population-scale genomics.

Genomics & informatics, 23(1):8.

Large-scale national biobank projects utilizing whole-genome sequencing have emerged as transformative resources for understanding human genetic variation and its relationship to health and disease. These initiatives, which include the UK Biobank, All of Us Research Program, Singapore's PRECISE, Biobank Japan, and the National Project of Bio-Big Data of Korea, are generating unprecedented volumes of high-resolution genomic data integrated with comprehensive phenotypic, environmental, and clinical information. This review examines the methodologies, contributions, and challenges of major WGS-based national genome projects worldwide. We first discuss the landscape of national biobank initiatives, highlighting their distinct approaches to data collection, participant recruitment, and phenotype characterization. We then introduce recent technological advances that enable efficient processing and analysis of large-scale WGS data, including improvements in variant calling algorithms, innovative methods for creating multi-sample VCFs, optimized data storage formats, and cloud-based computing solutions. The review synthesizes key discoveries from these projects, particularly in identifying expression quantitative trait loci and rare variants associated with complex diseases. Our review introduces the latest findings from the National Project of Bio-Big Data of Korea, which has advanced our understanding of population-specific genetic variation and rare diseases in Korean and East Asian populations. Finally, we discuss future directions and challenges in maximizing the impact of these resources on precision medicine and global health equity. This comprehensive examination demonstrates how large-scale national genome projects are revolutionizing genetic research and healthcare delivery while highlighting the importance of continued investment in diverse, population-specific genomic resources.

RevDate: 2025-03-06

Zhang G (2025)

Cloud computing convergence: integrating computer applications and information management for enhanced efficiency.

Frontiers in big data, 8:1508087.

This study examines the transformative impact of cloud computing on the integration of computer applications and information management systems to improve operational efficiency. Grounded in a robust methodological framework, the research employs experimental testing and comparative data analysis to assess the performance of an information management system within a cloud computing environment. Data was meticulously collected and analyzed, highlighting a threshold where user demand surpasses 400, leading to a stabilization in CPU utilization at an optimal level and maintaining subsystem response times consistently below 5 s. This comprehensive evaluation underscores the significant advantages of cloud computing, demonstrating its capacity to optimize the synergy between computer applications and information management. The findings not only contribute to theoretical advancements in the field but also offer actionable insights for organizations seeking to enhance efficiency through effective cloud-based solutions.

RevDate: 2025-03-06

Saeedbakhsh S, Mohammadi M, Younesi S, et al (2025)

Using Internet of Things for Child Care: A Systematic Review.

International journal of preventive medicine, 16:3.

BACKGROUND: In smart cities, prioritizing child safety through affordable technology like the Internet of Things (IoT) is crucial for parents. This study seeks to investigate different IoT tools that can prevent and address accidents involving children. The goal is to alleviate the emotional and financial toll of such incidents due to their high mortality rates.

METHODS: This study considers articles published in English that use IoT for children's healthcare. PubMed, Science Direct, and Web of Science were used as the searchable databases. 273 studies were retrieved after the initial search. After eliminating duplicate records, studies were assessed against the inclusion and exclusion criteria. Titles and abstracts were reviewed for relevance, and articles not meeting the criteria were excluded. Finally, 29 studies met the criteria for inclusion in this review.

RESULTS: The study reveals that India is at the forefront of IoT research for children, followed by Italy and China. Studies mainly occur indoors, utilizing wearable sensors like heart rate, motion, and tracking sensors. Biosignal sensors and technologies such as Zigbee and image recognition are commonly used for data collection and analysis. Diverse approaches, including cloud computing and machine vision, are applied in this innovative field.

CONCLUSIONS: In conclusion, IoT for children is mainly seen in developed countries like India, Italy, and China. Studies focus on indoor use, using wearable sensors for heart rate monitoring. Biosignal sensors and various technologies like Zigbee, Kinect, image recognition, RFID, and robots contribute to enhancing children's well-being.

RevDate: 2025-03-05

Efendi A, Ammarullah MI, Isa IGT, et al (2025)

IoT-Based Elderly Health Monitoring System Using Firebase Cloud Computing.

Health science reports, 8(3):e70498 pii:HSR270498.

BACKGROUND AND AIMS: The increasing elderly population presents significant challenges for healthcare systems, necessitating innovative solutions for continuous health monitoring. This study develops and validates an IoT-based elderly monitoring system designed to enhance the quality of life of elderly people. The system features a robust Android-based user interface integrated with the Firebase cloud platform, ensuring real-time data collection and analysis. In addition, supervised machine learning is implemented to predict whether the observed user is in a "stable" or "not stable" condition based on real-time parameters.

METHODS: The system architecture adopts the IoT layering, comprising a physical layer, a network layer, and an application layer. Device validation involved six participants whose real-time heart rate, oxygen saturation, and body temperature were measured; the readings were then analysed using the mean absolute percentage error (MAPE) to quantify the error rate. A comparative experiment was conducted to identify the optimal supervised machine learning model to deploy into the system by analysing evaluation metrics. User satisfaction was evaluated in terms of usability, comfort, security, and effectiveness.

RESULTS: The IoT-based elderly health monitoring system achieved an overall MAPE of 0.90% across the measured parameters: heart rate (1.68%), oxygen saturation (0.57%), and body temperature (0.44%). The machine learning experiment indicates that the XGBoost model performs best, with an accuracy of 0.973 and an F1 score of 0.970. User satisfaction, assessed in terms of usability, comfort, security, and effectiveness, reached a high rating of 86.55%.

CONCLUSION: This system offers practical applications for both elderly users and caregivers, enabling real-time monitoring of health conditions. Future enhancements may include integration with artificial intelligence technologies such as machine learning and deep learning to predict health conditions from data patterns, further improving the system's capabilities and effectiveness in elderly care.
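For readers reproducing the validation, the MAPE metric used above is a one-liner. The sketch below assumes made-up paired readings from a clinical reference and the wearable device; it is not the authors' code.

```python
import numpy as np

def mape(reference, measured):
    """Mean absolute percentage error, in percent."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs((reference - measured) / reference)) * 100)

# Hypothetical paired readings: clinical reference vs. wearable device
heart_rate_ref = [72, 80, 65, 90]
heart_rate_dev = [73, 79, 66, 92]
print(f"heart-rate MAPE: {mape(heart_rate_ref, heart_rate_dev):.2f}%")
```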

RevDate: 2025-03-05
CmpDate: 2025-03-05

Duan S, Yong R, Yuan H, et al (2024)

Automated Offline Smartphone-Assisted Microfluidic Paper-Based Analytical Device for Biomarker Detection of Alzheimer's Disease.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2024:1-5.

This paper presents a smartphone-assisted microfluidic paper-based analytical device (μPAD), which was applied to detect Alzheimer's disease biomarkers, especially in resource-limited regions. This device implements deep learning (DL)-assisted offline smartphone detection, eliminating the requirement for large computing devices and cloud computing power. In addition, a smartphone-controlled rotary valve enables a fully automated colorimetric enzyme-linked immunosorbent assay (c-ELISA) on μPADs. It reduces detection errors caused by human operation and further increases the accuracy of μPAD c-ELISA. We realized a sandwich c-ELISA targeting β-amyloid peptide 1-42 (Aβ 1-42) in artificial plasma, and our device provided a detection limit of 15.07 pg/mL. We collected 750 images for the training of the DL YOLOv5 model. The training accuracy is 88.5%, which is 11.83% higher than the traditional curve-fitting result analysis method. Utilizing the YOLOv5 model with the NCNN framework facilitated offline detection directly on the smartphone. Furthermore, we developed a smartphone application to operate the experimental process, realizing user-friendly rapid sample detection.

RevDate: 2025-03-05
CmpDate: 2025-03-05

Delannes-Molka D, Jackson KL, King E, et al (2024)

Towards Markerless Motion Estimation of Human Functional Upper Extremity Movement.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2024:1-7.

Markerless motion capture of human movement is a potentially useful approach for providing movement scientists and rehabilitation specialists with a portable and low-cost method for measuring functional upper extremity movement. This is in contrast with optical and inertial motion capture systems, which often require specialized equipment and expertise to use. Existing methods for markerless motion capture have focused on inferring 2D or 3D keypoints on the body and estimating volumetric representations, both using RGB-D. The keypoints and volumes are then used to compute quantities like joint angles and velocity magnitude over time. However, these methods do not have sufficient accuracy to capture fine human motions and, as a result, have largely been restricted to capturing gross movements and rehabilitation games. Furthermore, most of these methods have not used depth images to estimate motion directly. This work proposes using the depth images from an RGB-D camera to compute the upper extremity motion directly by segmenting the upper extremity into components of a kinematic chain, estimating the motion of the rigid portions (i.e., the upper and lower arm) using ICP or Distance Transform across sequential frames, and computing the motion of the end-effector (e.g., wrist) relative to the torso. Methods with data from both the Microsoft Azure Kinect Camera and 9-camera OptiTrack Motive motion capture system (Mocap) were compared. Point Cloud methods performed comparably to Mocap on tracking rotation and velocity of a human arm and could be an affordable alternative to Mocap in the future. While the methods were tested on gross motions, future works would include refining and evaluating these methods for fine motion.

RevDate: 2025-03-04
CmpDate: 2025-03-05

Alshemaimri B, Badshah A, Daud A, et al (2025)

Regional computing approach for educational big data.

Scientific reports, 15(1):7619.

The educational landscape is witnessing a transformation with the integration of Educational Technology (Edutech). As educational institutions adopt digital platforms and tools, the generation of Educational Big Data (EBD) has significantly increased. Research indicates that educational institutions produce massive volumes of data, including student enrollment records, academic performance metrics, attendance records, learning activities, and interactions within digital learning environments. This influx of data needs efficient processing to derive actionable insights and enhance the learning experience. Real-time data processing plays a critical role in educational environments, supporting functions such as personalized learning, adaptive assessment, and administrative decision-making. However, sending large amounts of educational data to cloud servers raises challenges such as latency, cost, and network congestion. These challenges make it more difficult to provide educators and students with timely insights and services, which reduces the efficiency of educational activities. This paper proposes a Regional Computing (RC) paradigm designed specifically for big data management in education to address these issues. In this paradigm, RC is established within educational regions and is intended to decentralize data processing. To reduce dependency on cloud infrastructure, these regional servers are strategically located to collect, process, and store big data related to education regionally. Our investigation results show that RC significantly reduces latency to 203.11 ms for 2,000 devices, compared to 707.1 ms in Cloud Computing (CC). It is also more cost-efficient, with a total cost of just 1.14 USD versus 5.36 USD in the cloud. Furthermore, it avoids the 600% congestion surges seen in cloud setups and maintains consistent throughput under high workloads, establishing RC as the optimal solution for managing EBD.

RevDate: 2025-03-03

Verdet A, Hamdaqa M, Silva LD, et al (2025)

Assessing the adoption of security policies by developers in terraform across different cloud providers.

Empirical software engineering, 30(3):74.

Cloud computing has become popular thanks to the widespread use of Infrastructure as Code (IaC) tools, allowing the community to manage and configure cloud infrastructure using scripts. However, the scripting process does not automatically prevent practitioners from introducing misconfigurations, vulnerabilities, or privacy risks. As a result, ensuring security relies on practitioners' understanding and the adoption of explicit policies. To understand how practitioners deal with this problem, we perform an empirical study analyzing the adoption of scripted security best practices present in Terraform files, applied on AWS, Azure, and Google Cloud. We assess the adoption of these practices by analyzing a sample of 812 open-source GitHub projects. We scan each project's configuration files, looking for policy implementation through static analysis (Checkov and Tfsec). The category Access policy emerges as the most widely adopted in all providers, while Encryption at rest presents the most neglected policies. Regarding the cloud providers, we observe that AWS and Azure present similar behavior regarding attended and neglected policies. Finally, we provide guidelines for cloud practitioners to limit infrastructure vulnerability and discuss further aspects associated with policies that have yet to be extensively embraced within the industry.

RevDate: 2025-03-02

Zhang A, Tariq A, Quddoos A, et al (2025)

Spatio-temporal analysis of urban expansion and land use dynamics using google earth engine and predictive models.

Scientific reports, 15(1):6993.

Urban expansion and changes in land use/land cover (LULC) have intensified in recent decades due to human activity, influencing ecological and developmental landscapes. This study investigated historical and projected LULC changes and urban growth patterns in the districts of Multan and Sargodha, Pakistan, using Landsat satellite imagery, cloud computing, and predictive modelling from 1990 to 2030. The analysis of satellite images was grouped into four time periods (1990-2000, 2000-2010, 2010-2020, and 2020-2030). The Google Earth Engine cloud-based platform facilitated the classification of Landsat 5 ETM (1990, 2000, and 2010) and Landsat 8 OLI (2020) images using the Random Forest model. A simulation model integrating Cellular Automata and an Artificial Neural Network Multilayer Perceptron in the MOLUSCE plugin of QGIS was employed to forecast urban growth to 2030. The resulting maps showed consistently high accuracy levels exceeding 92% for both districts across all time periods. The analysis revealed that Multan's built-up area increased from 240.56 km[2] (6.58%) in 1990 to 440.30 km[2] (12.04%) in 2020, while Sargodha experienced more dramatic growth from 730.91 km[2] (12.69%) to 1,029.07 km[2] (17.83%). Vegetation cover remained dominant but showed significant variations, particularly in peri-urban areas. By 2030, Multan's urban area is projected to stabilize at 433.22 km[2], primarily expanding in the southeastern direction. Sargodha is expected to reach 1,404.97 km[2], showing more balanced multi-directional growth toward the northeast and north. The study presents an effective analytical method integrating cloud processing, GIS, and change simulation modeling to evaluate urban growth spatiotemporal patterns and LULC changes. This approach successfully identified the main LULC transformations and trends in the study areas while highlighting potential urbanization zones where opportunities exist for developing planned and managed urban settlements.
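The classification step maps onto the Earth Engine Python API roughly as follows. This is a sketch only: it assumes a hypothetical labeled training asset with a `landcover` property and a Landsat 8 Collection 2 surface-reflectance composite; the paper's exact band set, sample design, and Landsat 5 processing are not reproduced.

```python
import ee

ee.Initialize()

# Median Landsat 8 surface-reflectance composite for 2020 over an area of interest
aoi = ee.Geometry.Rectangle([71.3, 30.0, 71.7, 30.4])  # hypothetical extent near Multan
image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(aoi)
         .filterDate("2020-01-01", "2020-12-31")
         .median()
         .clip(aoi))

bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]

# Assumed: a FeatureCollection of labeled points with a 'landcover' property
training = ee.FeatureCollection("users/example/training_points")  # hypothetical asset
samples = image.select(bands).sampleRegions(
    collection=training, properties=["landcover"], scale=30)

# Random Forest classification, as in the paper's workflow
classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty="landcover", inputProperties=bands)
classified = image.select(bands).classify(classifier)
```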

RevDate: 2025-02-27

Xiang Z, Ying F, Xue X, et al (2025)

Unmanned-Aerial-Vehicle Trajectory Planning for Reliable Edge Data Collection in Complex Environments.

Biomimetics (Basel, Switzerland), 10(2):.

With the rapid advancement of edge-computing technology, more computing tasks are moving from traditional cloud platforms to edge nodes. This shift imposes challenges on efficiently handling the substantial data generated at the edge, especially in extreme scenarios where conventional data collection methods face limitations. UAVs have emerged as a promising solution for overcoming these challenges by facilitating data collection and transmission in various environments. However, existing UAV trajectory optimization algorithms often overlook the critical factor of battery capacity, leading to potential mission failures or safety risks. In this paper, we propose a trajectory planning approach, Hyperion, that incorporates charging considerations and employs a greedy decision-making strategy to optimize trajectory length and energy consumption. By ensuring the UAV's ability to return to the charging station after data collection, our method enhances task reliability and UAV adaptability in complex environments.
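Hyperion's decision rule is described only at a high level in the abstract; the following sketch shows the general shape of a greedy collector that refuses any hop that would leave too little battery to reach the charging station. The distance-proportional energy model and all values are assumptions, not the paper's model.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan(start, nodes, battery, cost_per_unit=1.0):
    """Greedy tour: visit the nearest unvisited edge node whenever the
    remaining battery still covers that hop plus the return to 'start'."""
    pos, path, remaining = start, [], set(nodes)
    while remaining:
        nxt = min(remaining, key=lambda n: dist(pos, n))
        hop = dist(pos, nxt) * cost_per_unit
        home = dist(nxt, start) * cost_per_unit
        if battery < hop + home:   # hop would strand the UAV: stop collecting
            break
        battery -= hop
        pos = nxt
        path.append(nxt)
        remaining.discard(nxt)
    return path, battery - dist(pos, start) * cost_per_unit  # spare after return leg

path, spare = plan((0, 0), [(3, 4), (6, 8), (1, 1)], battery=25.0)
print(path, round(spare, 2))  # all three nodes visited, ~4.98 units to spare
```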

RevDate: 2025-02-27

Huba M, Bistak P, Skrinarova J, et al (2025)

Performance Portrait Method: Robust Design of Predictive Integral Controller.

Biomimetics (Basel, Switzerland), 10(2):.

The performance portrait method (PPM) can be characterized as a systematized, digitalized version of the trial-and-error method, probably the most popular and most frequently used method of engineering work. Its digitization required the expansion of the performance measures used to evaluate the step responses of dynamic systems. Based on process modeling, PPM has also contributed to the classification of models describing linear and non-linear dynamic processes so that they approximate their dynamics using the smallest possible number of numerical parameters. PPM is distinguished from most bio-inspired artificial-intelligence and optimization procedures used for the design of automatic controllers by the possibility of repeatedly applying once-generated performance portraits (PPs). These represent information about the process obtained by evaluating the performance of setpoint and disturbance step responses for all relevant values of the determining loop parameters organized into a grid. The method can be supported by the implementation of parallel calculations with optimized decomposition in a high-performance computing (HPC) cloud. The wide applicability of PPM ranges from the verification of analytically calculated optimal settings achieved by various approaches to controller design, to the analysis and the optimal, robust tuning of controllers for processes where other known control design methods fail. One such situation is illustrated by an example of predictive integrating (PrI) controller design for processes with dominant time-delayed sensor dynamics, a counterpart of proportional-integrating (PI) controllers, the most frequently used solutions in practice. PrI controllers can be considered a generalization of disturbance-response feedback, the oldest known method for the design of dead-time compensators, due to Reswick. In applications with dominant dead time and loop time constants located in the feedback (sensors), such as those met in magnetoencephalography (MEG), it makes it possible to significantly improve control performance. PPM shows that, despite the absence of effective analytical control design methods for such situations, it is possible to obtain high-quality optimal solutions for processes that require working with uncertain models specified by interval parameters, while achieving invariance to changes in uncertain parameters.
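In spirit, a performance portrait is a grid of closed-loop performance measures evaluated over the determining loop parameters. A minimal sketch for a PI loop on a first-order plant, using forward-Euler simulation and the IAE criterion; the plant, grid ranges, and criterion are invented for illustration and do not reflect the paper's PrI design.

```python
import numpy as np

def iae(kp, ki, tau=1.0, dt=0.01, t_end=20.0):
    """Integral of absolute error for a unit setpoint step on a first-order
    plant 1/(tau*s + 1) under PI control, simulated by forward Euler."""
    y, integ, total = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u) / tau         # plant state update
        total += abs(e) * dt
    return total

# Performance portrait: the measure evaluated over a grid of controller gains
kps = np.linspace(0.5, 5.0, 10)
kis = np.linspace(0.1, 2.0, 10)
portrait = np.array([[iae(kp, ki) for ki in kis] for kp in kps])
best = np.unravel_index(portrait.argmin(), portrait.shape)
print(f"best IAE {portrait[best]:.3f} at Kp={kps[best[0]]:.2f}, Ki={kis[best[1]]:.2f}")
```

Once such a portrait has been computed, it can be re-queried for different performance requirements without re-running any simulations, which is the reuse property the abstract emphasizes.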

RevDate: 2025-02-26

He J, Sui D, Li L, et al (2025)

Fueling the development of elderly care services in China with digital technology: A provincial panel data analysis.

Heliyon, 11(3):e41490.

BACKGROUND: The global demographic shift towards an aging population presents significant challenges to elderly care services, which encompass the range of services designed to meet the health and social needs of older adults. Particularly in China, the aging society's diverse needs are often met with service inadequacies and inefficient resource allocation within the elderly care services framework.

OBJECTIVE: This study aims to investigate the transformative potential of digital technology, which includes innovations such as e-commerce, cloud computing, and artificial intelligence, on elderly care services in China. The objective is to assess the impact of digital technology on service quality, resource allocation, and operational efficiency within the elderly care services domain.

METHODS: Employing Stata software, the study conducts an analysis of panel data from 30 Chinese provinces over the period from 2014 to 2021, examining the integration and application of digital technology within elderly care services to identify trends and correlations.

RESULTS: The findings reveal that the integration of digital technology significantly enhances elderly care services, improving resource allocation and personalizing care, which in turn boosts the quality of life for the elderly. Specifically, a one-percentage-point increase in the development and adoption of digital technology within elderly care services is associated with a 21.5-percentage-point increase in care quality.

CONCLUSION: This research underscores the pivotal role of digital technology in revolutionizing elderly care services. The findings offer a strategic guide for policymakers and stakeholders to effectively harness digital technology, addressing the challenges posed by an aging society and enhancing the efficiency and accessibility of elderly care services in China. The application of digital technology in elderly care services is set to become a cornerstone of the future of elderly care, ensuring that the needs of the aging population are met with innovative and compassionate solutions.

RevDate: 2025-02-26

Awasthi C, Awasthi SP, PK Mishra (2024)

Secure and Reliable Fog-Enabled Architecture Using Blockchain With Functional Biased Elliptic Curve Cryptography Algorithm for Healthcare Services.

Blockchain in healthcare today, 7:.

Fog computing (FC) is an emerging technology that extends the capability and efficiency of cloud computing networks by acting as a bridge between the cloud and the devices. Fog devices can process an enormous volume of information locally, are transportable, and can be deployed on a variety of systems. Because of its real-time processing and event reactions, FC is ideal for healthcare. With such a wide range of characteristics, new security and privacy concerns arise. Security raises new issues in healthcare concerning the safe transmission, arrival, and access of data, as well as the availability of medical devices. As a result, FC necessitates a unique approach to security and privacy metrics, as opposed to standard cloud computing methods. Hence, this article proposes an effective blockchain-based secure healthcare service in FC. Here, the fog nodes gather information from the medical sensor devices, and the data are validated using smart contracts in the blockchain network. We propose a functional biased elliptic curve cryptography algorithm to encrypt the data. The optimization is performed using the galactic bee colony optimization algorithm to enhance the encryption procedure. The performance of the suggested methodology is assessed and contrasted with traditional techniques. It is shown that combining FC with blockchain increases the security of data transmission in healthcare services.

RevDate: 2025-03-05

Jin J, Li B, Wang X, et al (2025)

PennPRS: a centralized cloud computing platform for efficient polygenic risk score training in precision medicine.

medRxiv : the preprint server for health sciences.

Polygenic risk scores (PRS) are becoming increasingly vital for risk prediction and stratification in precision medicine. However, PRS model training presents significant challenges for broader adoption of PRS, including limited access to computational resources, difficulties in implementing advanced PRS methods, and availability and privacy concerns over individual-level genetic data. Cloud computing provides a promising solution with centralized computing and data resources. Here we introduce PennPRS (https://pennprs.org), a scalable cloud computing platform for online PRS model training in precision medicine. We developed novel pseudo-training algorithms for multiple PRS methods and ensemble approaches, enabling model training without requiring individual-level data. These methods were rigorously validated through extensive simulations and large-scale real data analyses involving over 6,000 phenotypes across various data sources. PennPRS supports online single- and multi-ancestry PRS training with seven methods, allowing users to upload their own data or query from more than 27,000 datasets in the GWAS Catalog, submit jobs, and download trained PRS models. Additionally, we applied our pseudo-training pipeline to train PRS models for over 8,000 phenotypes and made their PRS weights publicly accessible. In summary, PennPRS provides a novel cloud computing solution to improve the accessibility of PRS applications and reduce disparities in computational resources for the global PRS research community.
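Once a trained weights file is downloaded from such a platform, scoring an individual reduces to a weighted sum of risk-allele dosages. A minimal sketch, assuming a simple `variant_id`/`effect_weight` weights table rather than PennPRS's actual output format:

```python
import numpy as np
import pandas as pd

def polygenic_score(dosages: pd.DataFrame, weights: pd.DataFrame) -> pd.Series:
    """PRS per sample: sum over variants of effect_weight * dosage (0, 1, or 2).
    'dosages' is samples x variant IDs; 'weights' has variant_id / effect_weight."""
    w = weights.set_index("variant_id")["effect_weight"]
    shared = dosages.columns.intersection(w.index)  # score only variants in both
    scores = dosages[shared].to_numpy() @ w.loc[shared].to_numpy()
    return pd.Series(scores, index=dosages.index)

dosages = pd.DataFrame({"rs1": [0, 1, 2], "rs2": [2, 1, 0]}, index=["s1", "s2", "s3"])
weights = pd.DataFrame({"variant_id": ["rs1", "rs2"], "effect_weight": [0.3, -0.1]})
print(polygenic_score(dosages, weights))  # s1: -0.2, s2: 0.2, s3: 0.6
```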

RevDate: 2025-02-21

Wolski M, Woloszynski T, Stachowiak G, et al (2025)

Bone Data Lake: A storage platform for bone texture analysis.

Proceedings of the Institution of Mechanical Engineers. Part H, Journal of engineering in medicine [Epub ahead of print].

Trabecular bone (TB) texture regions selected on hand and knee X-ray images can be used to detect and predict osteoarthritis (OA). However, the analysis has been impeded by increasing data volume and diversification of data formats. To address this problem, a novel storage platform, called Bone Data Lake (BDL), is proposed for the collection and retention of large numbers of images, TB texture regions and parameters, regardless of their structure, size and source. BDL consists of three components: a raw data storage, a processed data storage, and a data reference system. The performance of the BDL was evaluated using 20,000 knee and hand X-ray images of various formats (DICOM, PNG, JPEG, BMP, and compressed TIFF) and sizes (from 0.3 to 66.7 MB). The images were uploaded into BDL and automatically converted into a standardized 8-bit grayscale uncompressed TIFF format. TB regions of interest were then selected on the standardized images, and a data catalog containing metadata information about the regions was constructed. Next, TB texture parameters were calculated for the regions using the Variance Orientation Transform (VOT) and Augmented VOT (AVOT) methods and stored in XLSX files. The files were uploaded into BDL, transformed into CSV files, and cataloged. Results showed that the BDL efficiently transforms images and catalogs bone regions and texture parameters. BDL can serve as the foundation of a reliable, secure and collaborative system for OA detection and prediction based on radiographs and TB texture.
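The ingestion step described here (conversion of heterogeneous inputs to a standardized 8-bit grayscale uncompressed TIFF) can be approximated with Pillow. The sketch below assumes Pillow-readable inputs; clinical DICOM files would need a reader such as pydicom instead.

```python
from pathlib import Path
from PIL import Image

def standardize(src: Path, dst_dir: Path) -> Path:
    """Convert a radiograph to a canonical form: 8-bit grayscale ('L' mode),
    uncompressed TIFF (Pillow writes TIFF uncompressed by default)."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / (src.stem + ".tiff")
    with Image.open(src) as im:
        im.convert("L").save(dst, format="TIFF")
    return dst

# Example (hypothetical file names):
# standardize(Path("knee_0001.png"), Path("processed/"))
```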

RevDate: 2025-02-23

Shahid U, Kanwal S, Bano M, et al (2025)

Blockchain driven medical image encryption employing chaotic tent map in cloud computing.

Scientific reports, 15(1):6236.

Data security during transmission over public networks has become a key concern in an era of rapid digitization. Image data is especially vulnerable since it can be stored or transferred using public cloud services, making it open to illegal access, breaches, and eavesdropping. This work suggests a novel way to integrate blockchain technology with a Chaotic Tent map encryption scheme in order to overcome these issues. The outcome is a Blockchain driven Chaotic Tent Map Encryption Scheme (BCTMES) for secure image transactions. The idea behind this strategy is to ensure an extra degree of security by fusing the distributed and immutable properties of blockchain technology with the intricate encryption offered by chaotic maps. To ensure that the image is transformed into a cipher form that is resistant to several types of attacks, the proposed BCTMES first encrypts it using the Chaotic Tent map encryption technique. The accompanying signed document is safely kept on the blockchain, and this encrypted image is subsequently uploaded to the cloud. The integrity and authenticity of the image are confirmed upon retrieval by utilizing blockchain's consensus mechanism, adding another layer of security against manipulation. Comprehensive performance evaluations show that BCTMES provides notable enhancements in important security parameters, such as entropy, correlation coefficient, key sensitivity, peak signal-to-noise ratio (PSNR), unified average changing intensity (UACI), and number of pixels change rate (NPCR). In addition to providing good defense against brute-force attacks, the high key size of [Formula: see text] further strengthens the system's resilience. To sum up, the BCTMES effectively addresses a number of prevalent risks to image security and offers a complete solution that may be implemented in cloud-based settings where data integrity and privacy are crucial. This work suggests a promising path for further investigation and practical uses in secure image transmission.
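The tent map underlying such schemes is simple to state: x_{n+1} = mu*x_n for x_n < 0.5 and mu*(1 - x_n) otherwise, with mu close to 2. The toy keystream cipher below illustrates the idea only; it is not the paper's BCTMES construction and must not be used where real security matters.

```python
import numpy as np

def tent_keystream(x0: float, mu: float, n: int) -> np.ndarray:
    """Iterate the tent map and quantize each state to a byte."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def tent_xor(data: bytes, x0=0.3141592, mu=1.9999) -> bytes:
    """XOR stream cipher: encryption and decryption are the same operation;
    (x0, mu) act as the secret key."""
    ks = tent_keystream(x0, mu, len(data))
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)

pixels = bytes(range(16))              # stand-in for flattened image bytes
cipher = tent_xor(pixels)
assert tent_xor(cipher) == pixels      # round trip restores the image
```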

RevDate: 2025-02-22

Quevedo D, Do K, Delic G, et al (2025)

GPU Implementation of a Gas-Phase Chemistry Solver in the CMAQ Chemical Transport Model.

ACS ES&T air, 2(2):226-235.

The Community Multiscale Air Quality (CMAQ) model simulates atmospheric phenomena, including advection, diffusion, gas-phase chemistry, aerosol physics and chemistry, and cloud processes. Gas-phase chemistry is often a major computational bottleneck due to its representation as large systems of coupled nonlinear stiff differential equations. We leverage the parallel computational performance of graphics processing unit (GPU) hardware to accelerate the numerical integration of these systems in CMAQ's CHEM module. Our implementation, dubbed CMAQ-CUDA, in reference to its use in the Compute Unified Device Architecture (CUDA) general purpose GPU (GPGPU) computing solution, migrates CMAQ's Rosenbrock solver from Fortran to CUDA Fortran. CMAQ-CUDA accelerates the Rosenbrock solver such that simulations using the chemical mechanisms RACM2, CB6R5, and SAPRC07 require only 51%, 50%, or 35% as much time, respectively, as CMAQv5.4 to complete a chemistry time step. Our results demonstrate that CMAQ is amenable to GPU acceleration and highlight a novel Rosenbrock solver implementation for reducing the computational burden imposed by the CHEM module.

RevDate: 2025-02-20

Wu S, Bin G, Shi W, et al (2024)

Empowering diabetic foot ulcer prevention: A novel cloud-based plantar pressure monitoring system for enhanced self-care.

Technology and health care : official journal of the European Society for Engineering and Medicine [Epub ahead of print].

BACKGROUND: This study was prompted by the crucial impact of abnormal plantar pressure on diabetic foot ulcer development and the notable lack of its monitoring in daily life. Our research introduces a cloud-based, user-friendly plantar pressure monitoring system designed for seamless integration into daily routines.

OBJECTIVE: This innovative system aims to enable early ulcer prediction and proactive prevention, thereby substantially improving diabetic foot care through enhanced self-care and timely intervention.

METHODS: A novel, user-centric plantar pressure monitoring system was developed, integrating a wearable device, mobile application, and cloud computing for instantaneous diabetic foot care. This configuration facilitates comprehensive monitoring at 64 underfoot points. It encourages user engagement in health management. The system wirelessly transmits data to the cloud, where insights are processed and made available on the app, fostering proactive self-care through immediate feedback. Tailored for daily use, our system streamlines home monitoring, enhancing early ulcer detection and preventative measures.

RESULTS: A feasibility study validated our system's accuracy, demonstrating a relative error of approximately 4% compared to a commercial pressure sensing walkway. This precision affirms the system's efficacy for home-based monitoring and its potential in diabetic foot ulcer prevention, positioning it as a viable instrument for self-managed care.

CONCLUSIONS: The system dynamically captures and analyzes plantar pressure distribution and gait cycle details, highlighting its utility in early diabetic foot ulcer detection and management. Offering real-time, actionable data, it stands as a critical tool for individuals to actively participate in their foot health care, epitomizing the essence of self-managed healthcare practices.

RevDate: 2025-02-20

Balamurugan M, Narayanan K, Raghu N, et al (2025)

Role of artificial intelligence in smart grid - a mini review.

Frontiers in artificial intelligence, 8:1551661.

A smart grid is a structure that regulates, operates, and utilizes energy sources incorporated into it using smart communication and computerized techniques. The operation and maintenance of smart grids now depend quite extensively on artificial intelligence methods. Artificial intelligence is enabling more dependable, efficient, and sustainable energy systems, from improving load-forecasting accuracy to optimizing power distribution and guaranteeing fault identification. An intelligent smart grid will be created by substituting artificial intelligence for manual tasks, achieving high efficiency, dependability, and affordability across the energy supply chain from production to consumption. The collection of a large diversity of data is vital for making effective decisions. Artificial intelligence applications operate by processing abundant data samples with advanced computing and strong communication collaboration. The development of appropriate infrastructure resources, including big data, cloud computing, and other collaboration platforms, must be enhanced to support this type of operation. In this paper, an attempt has been made to summarize the artificial intelligence techniques used in various aspects of smart grid systems.

RevDate: 2025-02-22

Zan T, Jia X, Guo X, et al (2025)

Research on variable-length control chart pattern recognition based on sliding window method and SECNN-BiLSTM.

Scientific reports, 15(1):5921.

Control charts, as essential tools in Statistical Process Control (SPC), are frequently used to analyze whether production processes are under control. Most existing control chart recognition methods target fixed-length data and fail to meet the need to recognize variable-length control charts in production. This paper proposes a variable-length control chart pattern recognition method based on the Sliding Window Method and an SE-attention CNN combined with Bi-LSTM (SECNN-BiLSTM). A cloud-edge integrated recognition system was developed using wireless digital calipers, embedded devices, and cloud computing. Control chart data of different lengths are transformed from one-dimensional series into two-dimensional matrices using a sliding window approach and then fed into a deep learning network combining an SE-attention CNN and a Bi-LSTM. This network, inspired by residual structures, extracts multiple features to build a control chart recognition model. Simulations, the cloud-edge recognition system, and engineering applications demonstrate that this method efficiently and accurately recognizes variable-length control charts, establishing a foundation for more efficient pattern recognition.
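The sliding-window reshaping of a variable-length series into a fixed-width two-dimensional input can be sketched with numpy. The window width and step below are invented, and the SECNN-BiLSTM network itself is not reproduced.

```python
import numpy as np

def to_windows(series, width=25, step=5):
    """Slice a 1-D control-chart series into overlapping fixed-width rows,
    giving a 2-D matrix a CNN/BiLSTM model can consume regardless of the
    original series length."""
    series = np.asarray(series, dtype=float)
    n_rows = 1 + max(0, len(series) - width) // step
    return np.stack([series[i * step: i * step + width] for i in range(n_rows)])

chart = np.random.default_rng(1).normal(size=83)  # variable-length sample
matrix = to_windows(chart)
print(matrix.shape)  # (12, 25): twelve overlapping 25-point windows
```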

RevDate: 2025-02-26
CmpDate: 2025-02-26

Pricope NG, EG Dalton (2025)

Mapping coastal resilience: Precision insights for green infrastructure suitability.

Journal of environmental management, 376:124511.

Addressing the need for effective flood risk mitigation strategies and enhanced urban resilience to climate change, we introduce a cloud-computed Green Infrastructure Suitability Index (GISI) methodology. This approach combines remote sensing and geospatial modeling to create a cloud-computed blend that synthesizes land cover classifications, biophysical variables, and flood exposure data to map suitability for green infrastructure (GI) implementation at both street and landscape levels. The GISI methodology provides a flexible and robust tool for urban planning, capable of accommodating diverse data inputs and adjustments, making it suitable for various geographic contexts. Applied within the Wilmington Urban Area Metropolitan Planning Organization (WMPO) in North Carolina, USA, our findings show that residential parcels, constituting approximately 91% of the total identified suitable areas, are optimally positioned for GI integration. This underscores the potential for embedding GI within developed residential urban landscapes to bolster ecosystem and community resilience. Our analysis indicates that 7.19% of the WMPO area is highly suitable for street-level GI applications, while 1.88% is ideal for landscape GI interventions, offering opportunities to enhance stormwater management and biodiversity at larger and more connected spatial scales. By identifying specific parcels with high suitability for GI, this research provides a comprehensive and transferable, data-driven foundation for local and regional planning efforts. The scalability and adaptability of the proposed modeling approach make it a powerful tool for informing sustainable urban development practices. Future work will focus on more spatially-resolved models of these areas and the exploration of GI's multifaceted benefits at the local level, aiming to guide the deployment of GI projects that align with broader environmental and social objectives.
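Suitability indices of this kind are commonly computed as weighted overlays of normalized raster layers. A schematic numpy version follows, with placeholder layer names and weights rather than the paper's calibration or its cloud implementation.

```python
import numpy as np

def normalize(layer):
    """Rescale a raster layer to [0, 1]."""
    lo, hi = np.nanmin(layer), np.nanmax(layer)
    return (layer - lo) / (hi - lo)

def suitability(layers: dict, weights: dict) -> np.ndarray:
    """Weighted overlay: weighted mean of normalized criterion layers."""
    total = sum(weights.values())
    return sum(weights[k] * normalize(layers[k]) for k in layers) / total

rng = np.random.default_rng(7)
shape = (100, 100)
layers = {                      # placeholder biophysical inputs
    "imperviousness": rng.random(shape),
    "flood_exposure": rng.random(shape),
    "canopy_gap": rng.random(shape),
}
weights = {"imperviousness": 0.4, "flood_exposure": 0.4, "canopy_gap": 0.2}
gisi = suitability(layers, weights)
print(f"share highly suitable (>0.7): {(gisi > 0.7).mean():.2%}")
```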

RevDate: 2025-02-22
CmpDate: 2025-02-18

Bathelt F, Lorenz S, Weidner J, et al (2025)

Application of Modular Architectures in the Medical Domain - a Scoping Review.

Journal of medical systems, 49(1):27.

The healthcare sector is notable for its reliance on discrete, self-contained information systems, which are often characterised by the presence of disparate data silos. The growing demands for documentation, quality assurance, and secondary use of medical data for research purposes have underscored the necessity for solutions that are more flexible, straightforward to maintain, and interoperable. In this context, modular systems have the potential to act as a catalyst for change, offering the capacity to encapsulate and combine functionalities in an adaptable manner. The objective of this scoping review is to determine the extent to which modular systems are employed in the medical field. The review provides a detailed overview of the effectiveness of service-oriented or microservice architectures, the challenges that should be addressed during implementation, and the lessons that can be learned from countries with productive use of such modular architectures. The review shows a rise in the use of microservices, indicating a shift towards encapsulated autonomous functions. The implementation should use HL7 FHIR as the communication standard, deploy RESTful interfaces and standard protocols for technical data exchange, and apply the HIPAA Security Rule for security purposes. User involvement is essential, as is integrating services into existing workflows. Modular architectures can facilitate flexibility and scalability. However, there are well-documented performance issues associated with microservice architectures, namely a high communication demand. One potential solution to this problem may be to integrate modular architectures into a cloud computing environment, which would require further investigation.

RevDate: 2025-02-19

Kelliher JM, Xu Y, Flynn MC, et al (2024)

Standardized and accessible multi-omics bioinformatics workflows through the NMDC EDGE resource.

Computational and structural biotechnology journal, 23:3575-3583.

Accessible and easy-to-use standardized bioinformatics workflows are necessary to advance microbiome research from observational studies to large-scale, data-driven approaches. Standardized multi-omics data enables comparative studies, data reuse, and applications of machine learning to model biological processes. To advance broad accessibility of standardized multi-omics bioinformatics workflows, the National Microbiome Data Collaborative (NMDC) has developed the Empowering the Development of Genomics Expertise (NMDC EDGE) resource, a user-friendly, open-source web application (https://nmdc-edge.org). Here, we describe the design and main functionality of the NMDC EDGE resource for processing metagenome, metatranscriptome, natural organic matter, and metaproteome data. The architecture relies on three main layers (web application, orchestration, and execution) to ensure flexibility and expansion to future workflows. The orchestration and execution layers leverage best practices in software containers and accommodate high-performance computing and cloud computing services. Further, we have adopted a robust user research process to collect feedback for continuous improvement of the resource. NMDC EDGE provides an accessible interface for researchers to process multi-omics microbiome data using production-quality workflows to facilitate improved data standardization and interoperability.

RevDate: 2025-02-17

Dinpajooh M, Hightower GL, Overstreet RE, et al (2025)

On the stability constants of metal-nitrate complexes in aqueous solutions.

Physical chemistry chemical physics : PCCP [Epub ahead of print].

Stability constants of simple reactions involving addition of the NO3[-] ion to hydrated metal complexes, [M(H2O)x][n+], are calculated with a computational workflow developed using cloud computing resources. The workflow performs conformational searches for metal complexes at both low and high levels of theory in conjunction with a continuum solvation model (CSM). The low-level theory is mainly used for the initial conformational searches, which are complemented with high-level density functional theory conformational searches in the CSM framework to determine the coordination chemistry relevant for stability constant calculations. The lowest-energy conformations are then used to obtain the reaction free energies for the addition of one NO3[-] to [M(H2O)x][n+] complexes, where M represents Fe(II), Fe(III), Sr(II), Ce(III), Ce(IV), and U(VI). Structural analysis of hundreds of geometries optimized at the high level of theory reveals that NO3[-] coordinates with Fe(II) and Fe(III) in either a monodentate or bidentate manner. Interestingly, the lowest-energy conformations of Fe(II) metal-nitrate complexes exhibit monodentate or bidentate coordination with a coordination number of 6, while the bidentate seven-coordinated Fe(II) metal-nitrate complexes are approximately 2 kcal mol[-1] higher in energy. Notably, for Fe(III) metal-nitrate complexes, the bidentate seven-coordinated configuration is more stable than the six-coordinated Fe(III) complexes (monodentate or bidentate) by a few thermal energy units. In contrast, Sr(II), Ce(III), Ce(IV), and U(VI) metal ions predominantly coordinate with NO3[-] in a bidentate manner, exhibiting typical coordination numbers of 7, 9, 9, and 5, respectively. Stability constants are then calculated using linear free energy approaches to account for systematic errors, and good agreement is obtained between the calculated stability constants and the available experimental data.
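
The abstract does not reproduce the authors' equations, but the linear free energy step can be sketched in a generic form: the computed reaction free energy gives a raw equilibrium constant, which is then corrected by a linear fit against available experimental data to absorb systematic CSM errors. The coefficients a and b below are assumed fit parameters, not values from the paper.

\[
\log_{10} K_{\mathrm{raw}} = -\frac{\Delta G_{\mathrm{rxn}}}{2.303\,RT},
\qquad
\log_{10} K_{\mathrm{calc}} = a\,\log_{10} K_{\mathrm{raw}} + b
\]

where ΔG_rxn is the computed free energy for adding one NO3[-] to [M(H2O)x][n+], R is the gas constant, and T is the temperature.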

RevDate: 2025-02-18

Thilakarathne NN, Abu Bakar MS, Abas PE, et al (2025)

Internet of things enabled smart agriculture: Current status, latest advancements, challenges and countermeasures.

Heliyon, 11(3):e42136.

Agriculture plays a vital role in the development of many countries whose economies rely on agricultural activities and on producing the food needed for human survival. Owing to the ever-increasing world population, estimated at 7.9 billion in 2022, feeding this many people has become a concern, as the current rate of agricultural food production is constrained for various reasons. The advent of Internet of Things (IoT)-based technologies in the 21st century has reshaped every industry, including agriculture, and has paved the way for smart agriculture, with technology used to automate and control most aspects of traditional agriculture. Smart agriculture, interchangeably known as smart farming, utilizes IoT and related enabling technologies such as cloud computing, artificial intelligence, and big data, and offers the potential to enhance agricultural operations through automation and intelligent decision-making, resulting in increased efficiency and better yields with minimum waste. Consequently, many governments are spending more money and offering incentives to encourage the switch from traditional to smart agriculture. The COVID-19 global pandemic also served as a catalyst for change in the agriculture industry, driving a shift toward greater reliance on technology over traditional labor for agricultural tasks. In this regard, this research synthesizes the current knowledge of smart agriculture, highlighting its current status, main components, latest application areas, advanced agricultural practices, hardware and software used, success stories, potential challenges and countermeasures, and future trends, serving both the growth of the industry and as a reference for future research.

RevDate: 2025-02-14

Wyman A, Z Zhang (2025)

A Tutorial on the Use of Artificial Intelligence Tools for Facial Emotion Recognition in R.

Multivariate behavioral research [Epub ahead of print].

Automated detection of facial emotions has been a topic of interest in social and behavioral research for decades but has become practical only recently. In this tutorial, we review three popular artificial-intelligence-based emotion detection programs that are accessible to R programmers: Google Cloud Vision, Amazon Rekognition, and Py-Feat. We present their advantages and disadvantages and provide sample code so that researchers can immediately begin designing, collecting, and analyzing emotion data. Furthermore, we provide an introductory-level explanation of the machine learning, deep learning, and computer vision algorithms that underlie most emotion detection programs, in order to improve literacy in explainable artificial intelligence in the social and behavioral science literature.
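
As a flavor of what such services return (illustrative Python rather than the tutorial's own R code), the sketch below queries one of the reviewed services, Google Cloud Vision, for per-face emotion likelihoods. It assumes the official google-cloud-vision client is installed and credentials are configured; the image filename is hypothetical.

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open("face.jpg", "rb") as f:  # hypothetical local image file
        image = vision.Image(content=f.read())

    # Face detection returns per-face emotion likelihood enums
    # (VERY_UNLIKELY ... VERY_LIKELY) rather than raw probabilities.
    response = client.face_detection(image=image)
    for face in response.face_annotations:
        print("joy:", face.joy_likelihood, "sorrow:", face.sorrow_likelihood)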

RevDate: 2025-02-13

Guturu H, Nichols A, Cantrell LS, et al (2025)

Cloud-Enabled Scalable Analysis of Large Proteomics Cohorts.

Journal of proteome research [Epub ahead of print].

Rapid advances in the depth and throughput of untargeted mass-spectrometry-based proteomic technologies enable large-scale cohort proteomic and proteogenomic analyses. As such, the data infrastructure and search engines required to process the data must also scale. This challenge is amplified in search engines that rely on library-free match-between-runs (MBR) search, which enables enhanced depth per sample and data completeness. To date, however, no MBR-based search has scaled to cohorts of thousands or more individuals. Here, we present a strategy to deploy search engines in a distributed cloud environment without source code modification, thereby enhancing resource scalability and throughput. Additionally, we present an algorithm, Scalable MBR, that replicates the MBR procedure of the popular DIA-NN software at a scale of thousands of samples. We demonstrate that Scalable MBR can search thousands of MS raw files in a few hours, compared with the days required for the original DIA-NN MBR procedure, and that the results are almost indistinguishable from those of DIA-NN native MBR. We additionally show that empirical spectra generated by Scalable MBR better approximate DIA-NN native MBR than semiempirical alternatives such as ID-RT-IM MBR, preserving the user's choice to use empirical libraries in large cohort analyses. The method has been tested to scale to over 15,000 injections and is available for use in the Proteograph Analysis Suite.
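
The Scalable MBR algorithm itself is not given in the abstract; the sketch below only illustrates the general fan-out/merge pattern such a distributed deployment implies, with run_first_pass_search and apply_mbr as hypothetical stand-ins for the containerized search engine invocation and the cross-run MBR step.

    # Conceptual sketch of distributing per-file searches, then merging.
    from concurrent.futures import ProcessPoolExecutor

    def run_first_pass_search(raw_file: str) -> dict:
        # Hypothetical stand-in for invoking a containerized search engine
        # on a single raw file (e.g., one first-pass DIA search).
        return {"file": raw_file, "ids": []}

    def apply_mbr(per_file_results: list) -> list:
        # Hypothetical stand-in for the cross-run match-between-runs step.
        return per_file_results

    if __name__ == "__main__":
        raw_files = [f"run_{i:05d}.raw" for i in range(16)]  # stands in for thousands
        with ProcessPoolExecutor() as pool:
            first_pass = list(pool.map(run_first_pass_search, raw_files))
        cohort_results = apply_mbr(first_pass)
        print(len(cohort_results), "runs processed")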

RevDate: 2025-02-15

Li H, H Chung (2025)

Prediction of Member Forces of Steel Tubes on the Basis of a Sensor System with the Use of AI.

Sensors (Basel, Switzerland), 25(3):.

The rapid development of AI (artificial intelligence), sensor technology, high-speed Internet, and cloud computing has demonstrated the potential of data-driven approaches to structural health monitoring (SHM) in structural engineering. Algorithms based on machine learning (ML) models can discern intricate structural behavioral patterns from real-time sensor data, offering solutions to engineering problems in structural mechanics and SHM. This study presents an approach based on AI and a fiber-reinforced polymer (FRP) double-helix sensor system for predicting the forces acting on steel tube members in offshore wind turbine support systems, enabling structural health monitoring of the support system. The steel tube serving as the transitional member and the FRP double-helix sensor system were first modeled in three dimensions using the ABAQUS finite element software. The data obtained from the finite element analysis (FEA) were then input into a fully connected neural network (FCNN) model to establish a nonlinear mapping between the inputs (strain) and the outputs (reaction force). In the FCNN model, the impact of the number of input variables on predictive performance is examined through cross-comparison of different combinations and positions of the six sets of input variables, and, based on an evaluation of engineering costs and the number of strain sensors, a series of candidate variable combinations is identified for further optimization. These candidate combinations were then optimized using a convolutional neural network (CNN) model, yielding input-variable combinations that matched the accuracy of larger combinations while using fewer sensors. This improves the predictive performance of the model while effectively controlling engineering cost. Model performance was evaluated using several metrics, including R[2], MSE, MAE, and SMAPE. The results demonstrated that the CNN model offers notable advantages in fitting accuracy and computational efficiency when confronted with a limited data set. To support practical applications, an interactive graphical user interface (GUI)-based sensor-coupled mechanical prediction system for steel tubes was developed, enabling engineers to predict the member forces of steel tubes in real time and thereby enhancing the efficiency and accuracy of SHM for offshore wind turbine support systems.
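
A minimal sketch, under assumptions, of the strain-to-force mapping described above: a small fully connected network in PyTorch with six strain inputs and one reaction-force output. The layer widths, synthetic training data, and optimizer settings are illustrative, not the study's.

    import torch
    import torch.nn as nn

    # Six strain inputs (assumed from the paper's six sets of input variables)
    # mapped to a single predicted member reaction force.
    model = nn.Sequential(
        nn.Linear(6, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Synthetic stand-in for FEA-generated training pairs (strain -> force).
    X = torch.randn(256, 6)
    y = X.sum(dim=1, keepdim=True) + 0.01 * torch.randn(256, 1)

    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print("final MSE:", float(loss))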

RevDate: 2025-02-15

Alboqmi R, RF Gamble (2025)

Enhancing Microservice Security Through Vulnerability-Driven Trust in the Service Mesh Architecture.

Sensors (Basel, Switzerland), 25(3):.

Cloud-native computing enhances the deployment of microservice architecture (MSA) applications by improving scalability and resilience, particularly in Beyond 5G (B5G) environments such as Sixth-Generation (6G) networks, through the ability to replace traditional hardware dependencies with software-defined solutions. While service meshes enable secure communication for deployed MSAs, they struggle to identify vulnerabilities inherent to microservices. The reliance on third-party libraries and modules, essential for MSAs, introduces significant supply chain security risks. Implementing a zero-trust approach for MSAs requires robust mechanisms to continuously verify and monitor the software supply chain of deployed microservices. However, existing service mesh solutions lack runtime trust evaluation capabilities for continuous vulnerability assessment of third-party libraries and modules. This paper introduces a mechanism for continuous runtime trust evaluation of microservices, integrating vulnerability assessments within a service mesh to enhance the security of the deployed MSA application. The proposed approach dynamically assigns trust scores to deployed microservices, rewarding secure practices such as timely vulnerability patching. It also enables the sharing of assessment results, enhancing mitigation strategies across the deployed MSA application. The mechanism is evaluated using Train Ticket, a complex open-source benchmark MSA application deployed with Docker containers, orchestrated with Kubernetes, and integrated with the Istio service mesh. Results demonstrate that the enhanced service mesh effectively supports dynamic trust evaluation based on the vulnerability posture of deployed microservices, significantly improving MSA security and paving the way for future self-adaptive solutions.
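
The abstract does not specify the trust-scoring formula, so the following is only a hypothetical illustration of a vulnerability-driven trust score of the kind described: known CVSS-style severities lower trust, and recent patching keeps the staleness penalty small.

    # Hypothetical trust-score sketch; the paper's actual model may differ.
    def trust_score(cvss_scores: list, days_since_patch: int) -> float:
        """Return a score in [0, 1]; 1.0 means fully trusted."""
        # Average severity, normalized by the CVSS maximum of 10.
        vuln_penalty = sum(s / 10.0 for s in cvss_scores) / max(len(cvss_scores), 1)
        # Penalize stale patching, capped at 30 days.
        staleness = min(days_since_patch / 30.0, 1.0)
        return max(0.0, 1.0 - 0.7 * vuln_penalty - 0.3 * staleness)

    # A service with one medium CVE, patched recently, stays fairly trusted.
    print(trust_score([5.0], days_since_patch=2))        # ~0.63
    # Two severe CVEs left unpatched for 45 days are heavily penalized.
    print(trust_score([9.8, 7.5], days_since_patch=45))  # ~0.09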

RevDate: 2025-02-15
CmpDate: 2025-02-13

Abushark YB, Hassan S, AI Khan (2025)

Optimized Adaboost Support Vector Machine-Based Encryption for Securing IoT-Cloud Healthcare Data.

Sensors (Basel, Switzerland), 25(3):.

The Internet of Things (IoT) connects various medical devices that enable remote monitoring, which can improve patient outcomes and help healthcare providers deliver precise diagnoses and better service to patients. However, IoT-based healthcare management systems face significant challenges in data security, such as maintaining the confidentiality, integrity, and availability (CIA) triad and securing data transmission. This paper proposes a novel AdaBoost support vector machine (ASVM) based on the grey wolf optimization and international data encryption algorithm (ASVM-based GWO-IDEA) to secure medical data in an IoT-enabled healthcare system. The primary objective of this work was to prevent cyberattacks, unauthorized access, and tampering with such healthcare systems. The proposed scheme encodes healthcare data before transmission, protecting them from unauthorized access and other network vulnerabilities. The scheme was implemented in Python, and its efficiency was evaluated using a public Kaggle healthcare dataset. The model was compared with existing strategies in terms of effective security parameters, such as confidentiality rate and throughput. The suggested methodology improved the data transmission process, achieving a throughput of 97.86%, a resource utilization degree of 98.45%, and an efficiency of 93.45% during data transmission.
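
Only the AdaBoost-with-SVM component lends itself to a short sketch; the grey wolf optimization and IDEA encryption stages are omitted. The example below uses scikit-learn (the 1.2+ "estimator" parameter name is assumed) on a synthetic dataset standing in for the Kaggle healthcare data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for the healthcare dataset.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Boosting over probabilistic SVM base learners (the "ASVM" idea).
    clf = AdaBoostClassifier(
        estimator=SVC(kernel="rbf", probability=True),
        n_estimators=10,
        random_state=0,
    )
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))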

RevDate: 2025-02-15

Mahedero Biot F, Fornes-Leal A, Vaño R, et al (2025)

A Novel Orchestrator Architecture for Deploying Virtualized Services in Next-Generation IoT Computing Ecosystems.

Sensors (Basel, Switzerland), 25(3):.

The Next-Generation IoT integrates diverse technological enablers, allowing the creation of advanced systems with increasingly complex requirements and maximizing the use of available IoT-edge-cloud resources. This paper introduces an orchestrator architecture for dynamic IoT scenarios, inspired by ETSI NFV MANO and Cloud Native principles, in which distributed computing nodes often have unfixed and changing networking configurations. Unlike traditional approaches, this architecture also manages services across massively distributed mobile nodes, as demonstrated in the automotive use case presented. Apart from working as a MANO framework, the proposed solution efficiently handles service lifecycle management in large fleets of vehicles without relying on public or static IP addresses for connectivity. Its modular, microservices-based approach ensures adaptability to emerging trends like Edge Native, WebAssembly, and RISC-V, positioning it as a forward-looking innovation for IoT ecosystems.

RevDate: 2025-02-15
CmpDate: 2025-02-13

Khan FU, Shah IA, Jan S, et al (2025)

Machine Learning-Based Resource Management in Fog Computing: A Systematic Literature Review.

Sensors (Basel, Switzerland), 25(3):.

This systematic literature review analyzes machine learning (ML)-based techniques for resource management in fog computing. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, the paper focuses on ML and deep learning (DL) solutions. Resource management in the fog computing domain was thoroughly analyzed by identifying the key factors and constraints, and a total of 68 research papers were ultimately selected and included in this study. The findings highlight a strong preference for DL in addressing resource management challenges within the fog computing paradigm: 66% of the reviewed articles leveraged DL techniques, while 34% utilized ML. Key factors such as latency, energy consumption, task scheduling, and QoS are interconnected and critical for resource management optimization. The analysis reveals that latency, energy consumption, and QoS are the prime factors addressed in the literature on ML-based fog computing resource management. Latency is the most frequently addressed parameter, investigated in 77% of the articles, followed by energy consumption and task scheduling at 44% and 33%, respectively. Furthermore, an extensive range of challenges (computational resources and latency; scalability and management; data availability and quality; and model complexity and interpretability) is addressed by employing 73, 53, 45, and 46 ML/DL techniques, respectively.

RevDate: 2025-02-15

Ogwara NO, Petrova K, Yang MLB, et al (2025)

MINDPRES: A Hybrid Prototype System for Comprehensive Data Protection in the User Layer of the Mobile Cloud.

Sensors (Basel, Switzerland), 25(3):.

Mobile cloud computing (MCC) is a technological paradigm for providing services to mobile device (MD) users. A compromised MD may cause harm both to its user and to other MCC customers. This study explores the use of machine learning (ML) models and stochastic methods for the protection of Android MDs connected to the mobile cloud. To test the validity and feasibility of the proposed models and methods, the study adopted a proof-of-concept approach and developed a prototype system named MINDPRES. The static component of MINDPRES assesses the risk of the apps installed on the MD, using a device-based ML model for static feature analysis and a cloud-based stochastic risk evaluator. The device-based hybrid component of MINDPRES monitors app behavior in real time, deploying two ML models and functioning as an intrusion detection and prevention system (IDPS). Performance evaluation showed that the accuracy achieved by the methods for static and hybrid risk evaluation compared well with results reported in recent work, and power consumption data indicated that MINDPRES did not impose a significant overhead on the device. This study contributes a feasible and scalable framework for building distributed systems for the protection of the data and devices of MCC customers.

RevDate: 2025-02-15

Cabrera VE, Bewley J, Breunig M, et al (2025)

Data Integration and Analytics in the Dairy Industry: Challenges and Pathways Forward.

Animals : an open access journal from MDPI, 15(3):.

The dairy industry faces significant challenges in data integration and analysis, which are critical for informed decision-making, operational optimization, and sustainability. Data integration (combining data from diverse sources such as herd management systems, sensors, and diagnostics) remains difficult due to the lack of standardization, infrastructure barriers, and proprietary concerns. This commentary explores these issues based on insights from a multidisciplinary group of stakeholders, including industry experts, researchers, and practitioners. Key challenges discussed include the absence of a national animal identification system in the US, high IT resource costs, reluctance to share data due to competitive disadvantages, and differences in global data handling practices. Proposed pathways forward include developing comprehensive data integration guidelines, enhancing farmer awareness through training programs, and fostering collaboration across industry, academia, and technology providers. Additional recommendations involve improving data exchange standards, addressing interoperability issues, and leveraging advanced technologies, such as artificial intelligence and cloud computing. Emphasis is placed on localized data integration solutions for farm-level benefits and broader research applications to advance sustainability, traceability, and profitability within the dairy supply chain. These outcomes provide a foundation for achieving streamlined data systems, enabling actionable insights, and fostering innovation in the dairy industry.

RevDate: 2025-02-11

Bhat SN, Jindal GD, GD Nagare (2024)

Development and Validation of Cloud-based Heart Rate Variability Monitor.

Journal of medical physics, 49(4):654-660.

CONTEXT: This article introduces a new cloud-based point-of-care system to monitor heart rate variability (HRV).

AIMS: Medical investigations carried out at dispensaries or hospitals impose substantial physiological and psychological stress (the white coat effect), disrupting cardiovascular homeostasis; a point-of-care cloud computing system that facilitates secure patient monitoring can mitigate this.

SETTINGS AND DESIGN: The device employs a MAX30102 sensor to collect the peripheral pulse signal using the photoplethysmography technique. The non-invasive design ensures patient compliance while delivering critical insights into autonomic nervous system activity. Preliminary validations indicate the system's potential to enhance clinical outcomes by supporting timely, data-driven therapeutic adjustments based on HRV metrics.

SUBJECTS AND METHODS: This article describes the system's development, functionality, and reliability. The designed system is validated against a peripheral pulse analyzer (PPA), a research product of the Electronics Division, Bhabha Atomic Research Centre.

STATISTICAL ANALYSIS USED: The output of the developed HRV monitor (HRVM) is compared with that of the PPA using Pearson's correlation and the Mann-Whitney U-test. Peak positions and spectrum values are validated using Pearson's correlation, mean error, standard deviation (SD) of error, and range of error. HRV parameters such as total power, mean, peak amplitude, and power in the very low frequency, low frequency, and high frequency bands are validated using the Mann-Whitney U-test.

RESULTS: Pearson's correlation for spectrum values exceeded 0.97 in all subjects. Mean error, SD of error, and range of error were within acceptable ranges.

CONCLUSIONS: Statistical results validate the new HRVM system against the PPA for use in cloud computing and point-of-care testing.
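
As an illustration of the frequency-domain HRV quantities being validated (not the device's actual processing), the sketch below computes the VLF, LF, and HF band powers from a synthetic RR-interval series using Welch's method in SciPy; the band edges follow conventional HRV definitions.

    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import welch

    rr_s = 0.8 + 0.05 * np.random.randn(300)     # synthetic RR intervals (s)
    t = np.cumsum(rr_s)                          # beat times (s)

    # Resample the irregularly spaced RR series onto a uniform 4 Hz grid.
    fs = 4.0
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = interp1d(t, rr_s, kind="cubic")(t_uniform)

    # Power spectral density of the detrended RR series.
    f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

    def band_power(lo, hi):
        # Integrate the PSD over [lo, hi) Hz.
        mask = (f >= lo) & (f < hi)
        return np.trapz(psd[mask], f[mask])

    print("VLF:", band_power(0.003, 0.04))
    print("LF: ", band_power(0.04, 0.15))
    print("HF: ", band_power(0.15, 0.40))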
