Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 25 Jan 2025 at 01:40

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2025-01-24

Tang Y, Guo M, Li B, et al (2024)

Flexible Threshold Quantum Homomorphic Encryption on Quantum Networks.

Entropy (Basel, Switzerland), 27(1): pii:e27010007.

Currently, most quantum homomorphic encryption (QHE) schemes only allow a single evaluator (server) to accomplish computation tasks on encrypted data shared by the data owner (user). In addition, the quantum computing capability of the evaluator and the scope of quantum computation it can perform are usually somewhat limited, which significantly reduces the flexibility of the scheme in quantum network environments. In this paper, we propose a novel (t,n)-threshold QHE (TQHE) network scheme based on the Shamir secret sharing protocol, which allows k (t≤k≤n) evaluators to collaboratively perform evaluation computation operations on each qubit within the shared encrypted sequence. Moreover, each evaluator, while possessing the ability to perform all single-qubit unitary operations, is able to perform any single-qubit gate computation task assigned by the data owner. We give a specific (3, 5)-threshold example, illustrating the scheme's correctness and feasibility, and simulate it on the IBM quantum computing cloud platform. Finally, the scheme is shown to be secure by analyzing the encryption/decryption private keys, the ciphertext quantum state sequences during transmission, the plaintext quantum state sequence, and the result after computations on the plaintext quantum state sequence.
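The (t,n)-threshold construction above builds on Shamir secret sharing. As a purely classical illustration of that building block (not the quantum scheme itself), the following hedged Python sketch shares an integer secret among n parties and reconstructs it from any t shares over a prime field; the prime and parameter values are arbitrary choices for the example.

```python
# Classical Shamir (t, n)-threshold secret sharing over GF(p).
# Illustrative only: the TQHE scheme applies this idea to quantum computation, not shown here.
import random

P = 2_147_483_647  # a Mersenne prime, large enough for this toy example

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456, t=3, n=5)  # (3, 5)-threshold, mirroring the paper's example
print(reconstruct(shares[:3]))                 # any 3 shares recover 123456
```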

RevDate: 2025-01-24

Kwon K, Lee YJ, Chung S, et al (2025)

Full Body-Worn Textile-Integrated Nanomaterials and Soft Electronics for Real-Time Continuous Motion Recognition Using Cloud Computing.

ACS applied materials & interfaces [Epub ahead of print].

Recognizing human body motions opens possibilities for real-time observation of users' daily activities, revolutionizing continuous human healthcare and rehabilitation. While some wearable sensors show their capabilities in detecting movements, no prior work could detect full-body motions with wireless devices. Here, we introduce a soft electronic textile-integrated system, including nanomaterials and flexible sensors, which enables real-time detection of various full-body movements using the combination of a wireless sensor suit and deep-learning-based cloud computing. This system includes an array of nanomembrane, laser-induced graphene strain sensors and flexible electronics integrated with textiles for wireless detection of different body motions and workouts. With multiple human subjects, we demonstrate the system's performance in real-time prediction of eight different activities, including resting, walking, running, squatting, walking upstairs, walking downstairs, push-ups, and jump roping, with an accuracy of 95.3%. This class of technologies, integrated as full body-worn textile electronics and paired interactively with smartwatches and portable devices, can be used in real-world applications such as ambulatory health monitoring in conjunction with smartwatches and feedback-enabled customized rehabilitation workouts.

RevDate: 2025-01-23

Novais JJM, Melo BMD, Neves Junior AF, et al (2025)

Online analysis of Amazon's soils through reflectance spectroscopy and cloud computing can support policies and the sustainable development.

Journal of environmental management, 375:124155 pii:S0301-4797(25)00131-8 [Epub ahead of print].

Analyzing soil in large and remote areas such as the Amazon River Basin (ARB) is unviable when it is entirely performed by wet labs using traditional methods, due to the scarcity of labs and the significant workforce requirements, increasing costs, time, and waste. Remote sensing, combined with cloud computing, enhances soil analysis by modeling soil from spectral data and overcoming the limitations of traditional methods. We verified the potential of soil spectroscopy in conjunction with cloud-based computing to predict soil organic carbon (SOC) and particle size (sand, silt, and clay) content from the Amazon region. To this end, we requested physicochemical attribute values determined by wet laboratory analyses of 211 soil samples from the ARB. These samples were subjected to Vis-NIR-SWIR spectroscopy in the laboratory. Two approaches modeled the soil attributes: M-I) a cloud-computing-based approach using the Brazilian Soil Spectral Service (BraSpecS) platform, and M-II) an offline computing approach using the R programming language. Both methods used the Cubist machine learning algorithm for modeling. The coefficient of determination (R²), mean absolute error (MAE), and root mean squared error (RMSE) served as criteria for performance assessment. Predictions of soil attributes were highly consistent with measured values for both approaches, M-I and M-II. M-II outperformed M-I in predicting both particle size and SOC. For clay content, the offline model achieved an R² of 0.85, with an MAE of 86.16 g kg⁻¹ and RMSE of 111.73 g kg⁻¹, while the online model had an R² of 0.70, MAE of 111.73 g kg⁻¹, and RMSE of 144.19 g kg⁻¹. For SOC, the offline model also showed better performance, with an R² of 0.81, MAE of 3.42 g kg⁻¹, and RMSE of 4.57 g kg⁻¹, compared to an R² of 0.72, MAE of 3.66 g kg⁻¹, and RMSE of 5.53 g kg⁻¹ for M-I. Both modeling methods demonstrated the power of reflectance spectroscopy and cloud computing to survey soils in remote and large areas such as the ARB. The synergetic use of these techniques can support policies and sustainable development.
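The offline M-II approach fits a Cubist model in R and scores it with R², MAE, and RMSE. Cubist has no standard scikit-learn implementation, so the sketch below is only an assumption-laden Python stand-in (gradient boosting in place of Cubist, synthetic spectra in place of the 211 ARB samples) illustrating how such spectral regression models are typically trained and evaluated.

```python
# Hypothetical stand-in for the paper's workflow: predict clay content from
# Vis-NIR-SWIR spectra and report R2 / MAE / RMSE. Gradient boosting replaces Cubist.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((211, 350))                                       # 211 samples x 350 synthetic bands
y = 400 * X[:, 50] + 100 * X[:, 200] + rng.normal(0, 20, 211)    # synthetic clay content, g/kg

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("R2  ", r2_score(y_te, pred))
print("MAE ", mean_absolute_error(y_te, pred))
print("RMSE", mean_squared_error(y_te, pred) ** 0.5)
```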

RevDate: 2025-01-23
CmpDate: 2025-01-23

Seth M, Jalo H, Högstedt Å, et al (2025)

Technologies for Interoperable Internet of Medical Things Platforms to Manage Medical Emergencies in Home and Prehospital Care: Scoping Review.

Journal of medical Internet research, 27:e54470 pii:v27i1e54470.

BACKGROUND: The aging global population and the rising prevalence of chronic disease and multimorbidity have strained health care systems, driving the need for expanded health care resources. Transitioning to home-based care (HBC) may offer a sustainable solution, supported by technological innovations such as Internet of Medical Things (IoMT) platforms. However, the full potential of IoMT platforms to streamline health care delivery is often limited by interoperability challenges that hinder communication and pose risks to patient safety. Gaining more knowledge about addressing higher levels of interoperability issues is essential to unlock the full potential of IoMT platforms.

OBJECTIVE: This scoping review aims to summarize best practices and technologies to overcome interoperability issues in IoMT platform development for prehospital care and HBC.

METHODS: This review adheres to a protocol published in 2022. Our literature search followed a dual search strategy and was conducted up to August 2023 across 6 electronic databases: IEEE Xplore, PubMed, Scopus, ACM Digital Library, Sage Journals, and ScienceDirect. After the title, abstract, and full-text screening performed by 2 reviewers, 158 articles were selected for inclusion. To answer our 2 research questions, we used 2 models defined in the protocol: a 6-level interoperability model and a 5-level IoMT reference model. Data extraction and synthesis were conducted through thematic analysis using Dedoose. The findings, including commonly used technologies and standards, are presented through narrative descriptions and graphical representations.

RESULTS: The primary technologies and standards reported for interoperable IoMT platforms in prehospital care and HBC included cloud computing (19/30, 63%), representational state transfer application programming interfaces (REST APIs; 17/30, 57%), Wi-Fi (17/30, 57%), gateways (15/30, 50%), and JSON (14/30, 47%). Message queuing telemetry transport (MQTT; 7/30, 23%) and WebSocket (7/30, 23%) were commonly used for real-time emergency alerts, while fog and edge computing were often combined with cloud computing for enhanced processing power and reduced latencies. By contrast, technologies associated with higher interoperability levels, such as blockchain (2/30, 7%), Kubernetes (3/30, 10%), and openEHR (2/30, 7%), were less frequently reported, indicating a focus on lower levels of interoperability in most of the included studies (17/30, 57%).
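MQTT is singled out above as a common transport for real-time emergency alerts. A minimal hedged sketch using the paho-mqtt client is shown below; the broker address, topic name, and payload fields are illustrative assumptions and are not taken from any of the reviewed platforms.

```python
# Minimal MQTT alert publisher (illustrative; broker, topic, and payload are hypothetical).
import json
import paho.mqtt.client as mqtt

# paho-mqtt 1.x style constructor; version 2.x additionally expects a CallbackAPIVersion argument.
client = mqtt.Client()
client.connect("broker.example.org", 1883, keepalive=60)  # assumed broker address

alert = {"patient_id": "demo-001", "event": "fall_detected", "spo2": 88}
client.publish("iomt/alerts/prehospital", json.dumps(alert), qos=1)
client.disconnect()
```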

CONCLUSIONS: IoMT platforms that support higher levels of interoperability have the potential to deliver personalized patient care, enhance overall patient experience, enable early disease detection, and minimize time delays. However, our findings highlight a prevailing emphasis on lower levels of interoperability within the IoMT research community. While blockchain, microservices, Docker, and openEHR are described as suitable solutions in the literature, these technologies seem to be seldom used in IoMT platforms for prehospital care and HBC. Recognizing the evident benefit of cross-domain interoperability, we advocate a stronger focus on collaborative initiatives and technologies to achieve higher levels of interoperability.

RR2-10.2196/40243.

RevDate: 2025-01-20

Ali A, Hussain B, Hissan RU, et al (2025)

Examining the landscape transformation and temperature dynamics in Pakistan.

Scientific reports, 15(1):2575.

This study examines landscape transformation and temperature dynamics using multiple spectral indices. Temporal fluctuation in land surface temperature is strongly related to the morphological features of the area in which the temperature is determined, and these factors significantly affect the thermal properties of the surface. The research was conducted in Pakistan to identify vegetation cover, water bodies, impervious surfaces, and land surface temperature using decadal remote sensing data at four intervals during 1993-2023 in the Mardan division, Khyber Pakhtunkhwa. To analyze landscape transformation and temperature dynamics, the study used spectral indices including Land Surface Temperature, the Normalized Difference Vegetation Index, the Normalized Difference Water Index, the Normalized Difference Built-up Index, and the Normalized Difference Bareness Index, employing the Google Earth Engine cloud computing platform. The results show differences in land surface temperature ranging from 15.58 °C to 43.71 °C during the study period. Larger fluctuations in land surface temperature were found in the cover and protective forests of the study area, especially in the northwestern and southeastern parts. These results highlight the complexity of the relationship between land surface temperature and spectral indices and underscore the need for such indices.
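The indices listed above are typically computed in Google Earth Engine as normalized band differences. A hedged sketch using the GEE Python API and Landsat 8 Collection 2 Level-2 bands is given below; the area of interest and date range are placeholders, and the study itself may have used different sensors or band combinations.

```python
# Hedged Google Earth Engine sketch: NDVI and land surface temperature
# from Landsat 8 Collection 2 Level-2 (illustrative AOI and dates, not the study's setup).
import ee
ee.Initialize()

aoi = ee.Geometry.Point(72.04, 34.20).buffer(20_000)  # approximate Mardan area (assumed)

image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(aoi)
         .filterDate("2023-01-01", "2023-12-31")
         .median())

ndvi = image.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
# ST_B10 scale/offset per Collection 2 documentation; Kelvin converted to Celsius.
lst_c = image.select("ST_B10").multiply(0.00341802).add(149.0).subtract(273.15).rename("LST_C")

stats = ndvi.addBands(lst_c).reduceRegion(
    reducer=ee.Reducer.mean(), geometry=aoi, scale=100, maxPixels=1e9)
print(stats.getInfo())
```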

RevDate: 2025-01-18

Soman VK, V Natarajan (2025)

Crayfish optimization based pixel selection using block scrambling based encryption for secure cloud computing environment.

Scientific reports, 15(1):2406.

Cloud Computing (CC) is a fast-emerging field that enables consumers to access network resources on demand. However, ensuring a high level of security in CC environments remains a significant challenge. Traditional encryption algorithms are often inadequate for protecting confidential data, especially digital images, from complex cyberattacks. The increasing reliance on cloud storage and transmission of digital images has made it essential to develop strong security measures to stop unauthorized access and guarantee the integrity of sensitive information. This paper presents a novel Crayfish Optimization based Pixel Selection using Block Scrambling Based Encryption Approach (CFOPS-BSBEA) technique that offers a unique solution to improve security in cloud environments. By integrating steganography and encryption, the CFOPS-BSBEA technique provides a robust approach to securing digital images. Our key contribution lies in the development of a three-stage process that optimally selects pixels for steganography, encodes secret images using Block Scrambling Based Encryption, and embeds them in cover images. The CFOPS-BSBEA technique leverages the strengths of both steganography and encryption to provide a secure and effective approach to digital image protection. The Crayfish Optimization algorithm is used to select the most suitable pixels for steganography, ensuring that the secret image is embedded in a way that minimizes detection. The Block Scrambling Based Encryption algorithm is then used to encode the secret image, providing an additional layer of security. Experimental results show that the CFOPS-BSBEA technique outperforms existing models in terms of security performance. The proposed approach has significant implications for the secure storage and transmission of digital images in cloud environments, and its originality and novelty make it an attractive contribution to the field. Furthermore, the CFOPS-BSBEA technique has the potential to inspire further research in secure cloud computing environments, paving the way for the development of more robust and efficient security measures.
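The scheme combines optimizer-driven pixel selection with block-scrambling encryption; those details are specific to the paper. For orientation only, the sketch below shows generic least-significant-bit embedding of a byte string into selected pixels of a grayscale image with NumPy; it is not the CFOPS-BSBEA method, and the random pixel permutation merely stands in for the Crayfish-selected positions.

```python
# Generic LSB steganography into chosen pixel positions (not the paper's method).
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes, positions: np.ndarray) -> np.ndarray:
    """Write payload bits into the least significant bit of the given flat pixel indices."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    assert len(positions) >= len(bits), "not enough selected pixels"
    stego = cover.copy().ravel()
    idx = positions[: len(bits)]
    stego[idx] = (stego[idx] & 0xFE) | bits
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int, positions: np.ndarray) -> bytes:
    bits = stego.ravel()[positions[: n_bytes * 8]] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
positions = rng.permutation(cover.size)          # stand-in for optimizer-chosen pixels
stego = embed_lsb(cover, b"secret", positions)
print(extract_lsb(stego, 6, positions))          # b'secret'
```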

RevDate: 2025-01-17

Kari Balakrishnan A, Chellaperumal A, Lakshmanan S, et al (2025)

A novel efficient data storage and data auditing in cloud environment using enhanced child drawing development optimization strategy.

Network (Bristol, England) [Epub ahead of print].

The optimization of the cloud-based data structures is carried out using the Adaptive Level and Skill Rate-based Child Drawing Development Optimization algorithm (ALSR-CDDO). The overall cost of computing and communicating is also reduced by optimally selecting these data structures with the ALSR-CDDO algorithm. The storage of the data in the cloud platform is performed using the Divide and Conquer Table (D&CT). The location table and the information table are generated using the D&CT method. Details such as the file information, file ID, version number, and user ID are all present in the information table. Every time data is deleted or updated, its version number is modified. Whenever an update takes place using D&CT, the location table also gets updated. The information regarding the location of a file in the Cloud Service Provider (CSP) is given in the location table. Once the data is stored in the CSP, auditing is then performed on the stored data. Both dynamic and batch auditing are carried out on the stored data, even if it gets updated dynamically in the CSP. The security offered by the implemented scheme is verified by comparing it with other existing auditing schemes.
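The D&CT described above keeps an information table (file ID, user ID, file information, version number) and a location table, bumping the version number on every update or delete. The following Python sketch is only a plausible in-memory illustration of that bookkeeping, inferred from the description rather than taken from the paper.

```python
# Plausible in-memory illustration of D&CT-style bookkeeping (assumed structure).
info_table = {}      # file_id -> {"user_id": ..., "file_info": ..., "version": int}
location_table = {}  # file_id -> CSP location string

def store(file_id, user_id, file_info, csp_location):
    info_table[file_id] = {"user_id": user_id, "file_info": file_info, "version": 1}
    location_table[file_id] = csp_location

def update(file_id, new_file_info, new_csp_location=None):
    entry = info_table[file_id]
    entry["file_info"] = new_file_info
    entry["version"] += 1                            # version bumps on every update
    if new_csp_location is not None:
        location_table[file_id] = new_csp_location   # location table gets updated too

store("f1", "user42", "report.pdf, 2 MB", "csp-node-3/block-17")
update("f1", "report.pdf, 3 MB", "csp-node-5/block-02")
print(info_table["f1"]["version"], location_table["f1"])  # 2 csp-node-5/block-02
```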

RevDate: 2025-01-15

Yan K, Yu X, Liu J, et al (2025)

HiQ-FPAR: A High-Quality and Value-added MODIS Global FPAR Product from 2000 to 2023.

Scientific data, 12(1):72.

The Fraction of Absorbed Photosynthetically Active Radiation (FPAR) is essential for assessing vegetation's photosynthetic efficiency and ecosystem energy balance. While the MODIS FPAR product provides valuable global data, its reliability is compromised by noise, particularly under poor observation conditions like cloud cover. To solve this problem, we developed the Spatio-Temporal Information Composition Algorithm (STICA), which enhances MODIS FPAR by integrating quality control, spatio-temporal correlations, and original FPAR values, resulting in the High-Quality FPAR (HiQ-FPAR) product. HiQ-FPAR shows superior accuracy compared to MODIS FPAR and Sensor-Independent FPAR (SI-FPAR), with RMSE values of 0.130, 0.154, and 0.146, respectively, and R² values of 0.722, 0.630, and 0.717. Additionally, HiQ-FPAR exhibits smoother time series in 52.1% of global areas, compared to 44.2% for MODIS. Available on Google Earth Engine and Zenodo, the HiQ-FPAR dataset offers 500 m and 5 km resolution at an 8-day interval from 2000 to 2023, supporting a wide range of FPAR applications.

RevDate: 2025-01-13

Rushton CE, Tate JE, Å Sjödin (2025)

A modern, flexible cloud-based database and computing service for real-time analysis of vehicle emissions data.

Urban informatics, 4(1):1.

In response to the demand for advanced tools in environmental monitoring and policy formulation, this work leverages modern software and big data technologies to enhance novel road transport emissions research. This is achieved by making data and analysis tools more widely available and customisable so users can tailor outputs to their requirements. Through the novel combination of vehicle emissions remote sensing and cloud computing methodologies, these developments aim to reduce the barriers to understanding real-driving emissions (RDE) across urban environments. The platform demonstrates the practical application of modern cloud-computing resources in overcoming the complex demands of air quality management and policy monitoring. This paper shows the potential of modern technological solutions to improve the accessibility of environmental data for policy-making and the broader pursuit of sustainable urban development. The web-application is publicly and freely available at https://cares-public-app.azurewebsites.net.

RevDate: 2025-01-11

Ahmed AA, Farhan K, Ninggal MIH, et al (2024)

Retrieving and Identifying Remnants of Artefacts on Local Devices Using Sync.com Cloud.

Sensors (Basel, Switzerland), 25(1): pii:s25010106.

Most current research in cloud forensics is focused on tackling the challenges encountered by forensic investigators in identifying and recovering artefacts from cloud devices. These challenges arise from the diverse array of cloud service providers, as each has its own distinct rules, guidelines, and requirements. This research proposes an investigation technique for identifying and locating data remnants in two main stages: artefact collection and evidence identification. In the artefact collection stage, the proposed technique determines the location of the artefacts in cloud storage and collects them for further investigation in the next stage. In the evidence identification stage, the collected artefacts are investigated to identify the evidence relevant to the cybercrime currently being investigated. These two stages form an integrated process for mitigating the difficulty of locating the artefacts and reducing the time needed to identify the relevant evidence. The proposed technique is implemented and tested by applying a forensics investigation algorithm on Sync.com cloud storage using the Microsoft Windows 10 operating system.

RevDate: 2025-01-10

Hoyer I, Utz A, Hoog Antink C, et al (2025)

tinyHLS: a novel open source high level synthesis tool targeting hardware accelerators for artificial neural network inference.

Physiological measurement [Epub ahead of print].

OBJECTIVE: In recent years, wearable devices such as smartwatches and smart patches have revolutionized biosignal acquisition and analysis, particularly for monitoring electrocardiography (ECG). However, the limited power supply of these devices often precludes real-time data analysis on the patch itself.

APPROACH: This paper introduces a novel Python package, tinyHLS (High Level Synthesis), designed to address these challenges by converting Python-based AI models into platform-independent hardware description language (HDL) code accelerators. Specifically designed for convolutional neural networks (CNNs), tinyHLS seamlessly integrates into the AI developer's workflow in Python TensorFlow Keras. Our methodology leverages a template-based hardware compiler that ensures flexibility, efficiency, and ease of use. In this work, tinyHLS is published for the first time, featuring templates for several neural network layers, such as dense, convolution, max pooling, and global average pooling. In the first version, the rectified linear unit (ReLU) is supported as the activation function. The tool targets one-dimensional data, with a particular focus on time series.
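tinyHLS targets small Keras CNNs built from the layer types listed above (convolution, max and global average pooling, dense, ReLU) operating on one-dimensional time series. The sketch below is a hedged example of the kind of TensorFlow Keras model such a flow would start from; the layer sizes and window length are invented, not taken from the paper.

```python
# Example of the kind of 1-D CNN a tinyHLS-style flow starts from (hypothetical layout).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 1)),                  # e.g. a 512-sample ECG window
    tf.keras.layers.Conv1D(8, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),          # AF vs. non-AF
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```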

MAIN RESULTS: The generated accelerators are validated in detecting atrial fibrillation (AF) on electrocardiogram (ECG) data, demonstrating significant improvements in processing speed (62-fold) and energy efficiency (4.5-fold). Quality of code and synthesizability are ensured by validating the outputs with commercial ASIC design tools.

SIGNIFICANCE: Importantly, tinyHLS is open source and does not rely on commercial tools, making it a versatile solution for both academic and commercial applications. The paper also discusses integration with an open-source RISC-V and the potential for future enhancements of tinyHLS, including its application in edge servers and cloud computing. The source code is available on GitHub: https://github.com/Fraunhofer-IMS/tinyHLS.

RevDate: 2025-01-10

Scales C, Bai J, Murakami D, et al (2025)

Internal validation of a convolutional neural network pipeline for assessing meibomian gland structure from meibography.

Optometry and vision science : official publication of the American Academy of Optometry pii:00006324-990000000-00246 [Epub ahead of print].

SIGNIFICANCE: Optimal meibography utilization and interpretation are hindered due to poor lid presentation, blurry images, or image artifacts and the challenges of applying clinical grading scales. These results, using the largest image dataset analyzed to date, demonstrate development of algorithms that provide standardized, real-time inference that addresses all of these limitations.

PURPOSE: This study aimed to develop and validate an algorithmic pipeline to automate and standardize meibomian gland absence assessment and interpretation.

METHODS: A total of 143,476 images were collected from sites across North America. Ophthalmologist and optometrist experts established ground-truth image quality and quantification (i.e., degree of gland absence). Annotated images were allocated into training, validation, and test sets. Convolutional neural networks within Google Cloud VertexAI trained three locally deployable or edge-based predictive models: image quality detection, over-flip detection, and gland absence detection. The algorithms were combined into an algorithmic pipeline onboard a LipiScan Dynamic Meibomian Imager to provide real-time clinical inference for new images. Performance metrics were generated for each algorithm in the pipeline onboard the LipiScan from naive image test sets.

RESULTS: Individual model performance metrics included the following: weighted average precision (image quality detection: 0.81, over-flip detection: 0.88, gland absence detection: 0.84), weighted average recall (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), weighted average F1 score (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.81), overall accuracy (image quality detection: 0.80, over-flip detection: 0.87, gland absence detection: 0.80), Cohen κ (image quality detection: 0.60, over-flip detection: 0.62, gland absence detection: 0.71), Kendall τb (image quality detection: 0.61, p<0.001; over-flip detection: 0.63, p<0.001; gland absence detection: 0.67, p<0.001), and Matthews coefficient (image quality detection: 0.61, over-flip detection: 0.63, gland absence detection: 0.62). Area under the precision-recall curve (image quality detection: 0.87, over-flip detection: 0.92, gland absence detection: 0.89) and area under the receiver operating characteristic curve (image quality detection: 0.88, over-flip detection: 0.91, gland absence detection: 0.93) were calculated across a common set of thresholds ranging from 0 to 1.
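The per-model metrics above (precision, recall, F1, accuracy, Cohen κ, Matthews coefficient, and areas under the PR and ROC curves) are standard classification scores. A small hedged scikit-learn sketch with toy labels shows how such a panel is typically computed; it does not reproduce the study's data or thresholds.

```python
# Toy illustration of the reported metric panel with scikit-learn (not the study's data).
from sklearn.metrics import (precision_score, recall_score, f1_score, accuracy_score,
                             cohen_kappa_score, matthews_corrcoef,
                             average_precision_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0, 1, 0]
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0, 1, 0]
y_score = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3, 0.95, 0.05]  # model probabilities

print("precision", precision_score(y_true, y_pred, average="weighted"))
print("recall   ", recall_score(y_true, y_pred, average="weighted"))
print("F1       ", f1_score(y_true, y_pred, average="weighted"))
print("accuracy ", accuracy_score(y_true, y_pred))
print("kappa    ", cohen_kappa_score(y_true, y_pred))
print("MCC      ", matthews_corrcoef(y_true, y_pred))
print("AUPRC    ", average_precision_score(y_true, y_score))
print("AUROC    ", roc_auc_score(y_true, y_score))
```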

CONCLUSIONS: Comparison of predictions from each model to expert panel ground-truth demonstrated strong association and moderate to substantial agreement. The findings and performance metrics show that the pipeline of algorithms provides standardized, real-time inference/prediction of meibomian gland absence.

RevDate: 2025-01-10
CmpDate: 2025-01-10

Lu C, Zhou J, Q Zou (2025)

An optimized approach for container deployment driven by a two-stage load balancing mechanism.

PloS one, 20(1):e0317039 pii:PONE-D-24-28787.

Lightweight container technology has emerged as a fundamental component of cloud-native computing, with the deployment of containers and the balancing of loads on virtual machines representing significant challenges. This paper presents an optimization strategy for container deployment that consists of two stages: coarse-grained and fine-grained load balancing. In the initial stage, a greedy algorithm is employed for coarse-grained deployment, facilitating the distribution of container services across virtual machines in a balanced manner based on resource requests. The subsequent stage utilizes a genetic algorithm for fine-grained resource allocation, ensuring an equitable distribution of resources to each container service on a single virtual machine. This two-stage optimization enhances load balancing and resource utilization throughout the system. Empirical results indicate that this approach is more efficient and adaptable in comparison to the Grey Wolf Optimization (GWO) Algorithm, the Simulated Annealing (SA) Algorithm, and the GWO-SA Algorithm, significantly improving both resource utilization and load balancing performance on virtual machines.
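The first, coarse-grained stage assigns container services to virtual machines greedily according to their resource requests. The sketch below is a minimal Python rendering of that idea under assumed inputs (each service goes to the currently least-loaded VM, larger requests first); the paper's genetic fine-grained stage is not reproduced here.

```python
# Coarse-grained greedy placement: each container service goes to the VM
# with the smallest current load (assumed single-resource model).
def greedy_place(requests, n_vms):
    """requests: list of (service_name, cpu_request); returns placement and per-VM load."""
    load = [0.0] * n_vms
    placement = {vm: [] for vm in range(n_vms)}
    # Placing larger requests first tends to balance better.
    for name, cpu in sorted(requests, key=lambda r: -r[1]):
        vm = min(range(n_vms), key=lambda i: load[i])
        placement[vm].append(name)
        load[vm] += cpu
    return placement, load

services = [("web", 0.5), ("db", 1.2), ("cache", 0.3), ("queue", 0.4), ("api", 0.8)]
placement, load = greedy_place(services, n_vms=2)
print(placement)   # {0: ['db', 'queue'], 1: ['api', 'web', 'cache']}
print(load)        # roughly balanced CPU per VM
```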

RevDate: 2025-01-09
CmpDate: 2025-01-09

Kuang Y, Cao D, Jiang D, et al (2024)

CPhaMAS: The first pharmacokinetic analysis cloud platform developed by China.

Zhong nan da xue xue bao. Yi xue ban = Journal of Central South University. Medical sciences, 49(8):1290-1300.

OBJECTIVES: Software for pharmacological modeling and statistical analysis is essential for drug development and individualized treatment modeling. This study aims to develop a pharmacokinetic analysis cloud platform that leverages cloud-based benefits, offering a user-friendly interface with a smoother learning curve.

METHODS: The platform was built using Rails as the framework, developed in Julia language, and employs PostgreSQL 14 database, Redis cache, and Sidekiq for asynchronous task management. Four commonly used modules in clinical pharmacology research were developed: Non-compartmental analysis, bioequivalence/bioavailability analysis, compartment model analysis, and population pharmacokinetics modeling. The platform ensured comprehensive data security and traceability through multiple safeguards, including data encryption, access control, transmission encryption, redundant backups, and log management. The platform underwent basic function, performance, reliability, usability, and scalability testing, along with practical case studies.

RESULTS: The CPhaMAS cloud platform successfully implemented the four module functionalities. The platform provides list-based navigation for users, featuring checkbox-style interactions. Through cloud computing, it allows direct online data analysis, saving computer storage and minimizing performance requirements. Modeling and visualization do not require programming knowledge. Basic functionality achieved 100% completion, with an average annual uptime of over 99%. Server response time was between 200 and 500 ms, and average CPU usage was maintained below 30%. In a practical case study, cefotaxime sodium/tazobactam sodium injection (6:1 ratio) displayed near-linear pharmacokinetics within a dose range of 1.0 to 4.0 g, with no significant effect of tazobactam on the pharmacokinetic parameters of cefotaxime, validating the platform's usability and reliability.

CONCLUSIONS: CPhaMAS provides an integrated modeling and statistical tool for educators, researchers, and industrial professionals, enabling non-compartmental analysis, bioequivalence/bioavailability analysis, compartmental model building, and population pharmacokinetic modeling and simulation.

RevDate: 2025-01-09

Peng W, Hong Y, Chen Y, et al (2025)

AIScholar: An OpenFaaS-enhanced cloud platform for intelligent medical data analytics.

Computers in biology and medicine, 186:109648 pii:S0010-4825(24)01733-5 [Epub ahead of print].

This paper presents AIScholar, an intelligent research cloud platform developed based on artificial intelligence analysis methods and the OpenFaaS serverless framework, designed for intelligent analysis of clinical medical data with high scalability. AIScholar simplifies the complex analysis process by encapsulating a wide range of medical data analytics methods into a series of customizable cloud tools that emphasize ease of use and expandability, within OpenFaaS's serverless computing framework. As a multifaceted auxiliary tool in medical scientific exploration, AIScholar accelerates the deployment of computational resources, enabling clinicians and scientific personnel to derive new insights from clinical medical data with unprecedented efficiency. A case study focusing on breast cancer clinical data underscores the practicality that AIScholar offers to clinicians for diagnosis and decision-making. Insights generated by the platform have a direct impact on the physicians' ability to identify and address clinical issues, signifying its real-world application significance in clinical practice. Consequently, AIScholar makes a meaningful impact on medical research and clinical practice by providing powerful analytical tools to clinicians and scientific personnel, thereby promoting significant advancements in the analysis of clinical medical data.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Nolasco M, M Balzarini (2025)

Assessment of temporal aggregation of Sentinel-2 images on seasonal land cover mapping and its impact on landscape metrics.

Environmental monitoring and assessment, 197(2):142.

Landscape metrics (LM) play a crucial role in fields such as urban planning, ecology, and environmental research, providing insights into the ecological and functional dynamics of ecosystems. However, in dynamic systems, generating thematic maps for LM analysis poses challenges due to the substantial data volume required and issues such as cloud cover interruptions. The aim of this study was to compare the accuracy of land cover maps produced by three temporal aggregation methods: median reflectance, maximum normalised difference vegetation index (NDVI), and a two-date image stack using Sentinel-2 (S2) and then to analyse their implications for LM calculation. The Google Earth Engine platform facilitated data filtering, image selection, and aggregation. A random forest algorithm was employed to classify five land cover classes across ten sites, with classification accuracy assessed using global measurements and the Kappa index. LM were then quantified. The analysis revealed that S2 data provided a high-quality, cloud-free dataset suitable for analysis, ensuring a minimum of 25 cloud-free pixels over the study period. The two-date and median methods exhibited superior land cover classification accuracy compared to the max NDVI method. In particular, the two-date method resulted in lower fragmentation-heterogeneity and complexity metrics in the resulting maps compared to the median and max NDVI methods. Nevertheless, the median method holds promise for integration into operational land cover mapping programmes, particularly for larger study areas exceeding the width of S2 swath coverage. We find patch density combined with conditional entropy to be particularly useful metrics for assessing fragmentation and configuration complexity.
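Two of the compared temporal aggregation methods, median reflectance and maximum NDVI, map directly onto standard Earth Engine reducers. The hedged GEE Python sketch below illustrates both on Sentinel-2 surface reflectance; the collection ID and cloud filter are common choices, and the dates and region are placeholders rather than the study's configuration.

```python
# Hedged GEE sketch of two temporal aggregation methods compared in the study:
# median-reflectance vs. maximum-NDVI compositing (placeholder region and dates).
import ee
ee.Initialize()

region = ee.Geometry.Rectangle([-64.5, -32.0, -64.0, -31.5])  # placeholder AOI

def add_ndvi(img):
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(region)
      .filterDate("2022-09-01", "2023-03-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .map(add_ndvi))

median_composite = s2.median()                 # median reflectance aggregation
max_ndvi_composite = s2.qualityMosaic("NDVI")  # keeps, per pixel, the observation with max NDVI
```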

RevDate: 2025-01-08
CmpDate: 2025-01-08

Saeed A, A Khan M, Akram U, et al (2025)

Deep learning based approaches for intelligent industrial machinery health management and fault diagnosis in resource-constrained environments.

Scientific reports, 15(1):1114.

Industry 4.0 represents the fourth industrial revolution, which is characterized by the incorporation of digital technologies, the Internet of Things (IoT), artificial intelligence, big data, and other advanced technologies into industrial processes. Industrial Machinery Health Management (IMHM) is a crucial element, based on the Industrial Internet of Things (IIoT), which focuses on monitoring the health and condition of industrial machinery. The academic community has focused on various aspects of IMHM, such as prognostic maintenance, condition monitoring, estimation of remaining useful life (RUL), intelligent fault diagnosis (IFD), and architectures based on edge computing. Each of these categories holds its own significance in the context of industrial processes. In this survey, we specifically examine the research on RUL prediction, edge-based architectures, and intelligent fault diagnosis, with a primary focus on the domain of intelligent fault diagnosis. The importance of IFD methods in ensuring the smooth execution of industrial processes has become increasingly evident. However, most methods are formulated under the assumption of complete, balanced, and abundant data, which often does not align with real-world engineering scenarios. The difficulties linked to these classifications of IMHM have received noteworthy attention from the research community, leading to a substantial number of published papers on the topic. While there are existing comprehensive reviews that address major challenges and limitations in this field, there is still a gap in thoroughly investigating research perspectives across RUL prediction, edge-based architectures, and complete intelligent fault diagnosis processes. To fill this gap, we undertake a comprehensive survey that reviews and discusses research achievements in this domain, specifically focusing on IFD. Initially, we classify the existing IFD methods into three distinct perspectives: the method of processing data, which aims to optimize inputs for the intelligent fault diagnosis model and mitigate limitations in the training sample set; the method of constructing the model, which involves designing the structure and features of the model to enhance its resilience to challenges; and the method of optimizing training, which focuses on refining the training process for intelligent fault diagnosis models and emphasizes the importance of ideal data in the training process. Subsequently, the survey covers techniques related to RUL prediction and edge-cloud architectures for resource-constrained environments. Finally, this survey consolidates the outlook on relevant issues in IMHM, explores potential solutions, and offers practical recommendations for further consideration.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Ibrahem UM, Alblaihed MA, Altamimi AB, et al (2024)

Cloud computing practice activities and mental capacity on developing reproductive health and cognitive absorption.

African journal of reproductive health, 28(12):186-200.

The current study aims to determine how the interactions between practice style (distributed/focused) and mental capacity (high/low) in a cloud-computing environment (CCE) affect the development of reproductive health skills and cognitive absorption. The study employed an experimental design that included a categorical variable for mental capacity (low/high) and an independent variable with two types of activities (distributed/focused). The research sample consisted of 240 students from the College of Science and the College of Applied Medical Sciences at the University of Hail. The sample was divided into four experimental groups. The study's most significant finding was the CCE's apparent favoring of the group that studied using a focused practice style and high mental capacity on the reproductive health skills test, as opposed to a distributed practice style and low mental capacity on cognitive absorption. The findings will add to the ongoing debate over which of the two distributed/focused practice activity models is more effective in achieving the desired educational results.

RevDate: 2025-01-08

Nur A, Demise A, Y Muanenda (2024)

Design and Evaluation of a Cloud Computing System for Real-Time Measurements in Polarization-Independent Long-Range DAS Based on Coherent Detection.

Sensors (Basel, Switzerland), 24(24):.

CloudSim is a versatile simulation framework for modeling cloud infrastructure components that supports customizable and extensible application provisioning strategies, allowing for the simulation of cloud services. On the other hand, Distributed Acoustic Sensing (DAS) is a ubiquitous technique used for measuring vibrations over an extended region. Data handling in DAS remains an open issue, as many applications need continuous monitoring of a volume of samples whose storage and processing in real time require high-capacity memory and computing resources. We employ the CloudSim tool to design and evaluate a cloud computing scheme for long-range, polarization-independent DAS using coherent detection of Rayleigh backscattering signals and uncover valuable insights on the evolution of the processing times for a diverse range of Virtual Machine (VM) capacities as well as sizes of blocks of processed data. Our analysis demonstrates that the choice of VM significantly impacts computational times in real-time measurements in long-range DAS and that achieving polarization independence introduces minimal processing overheads in the system. Additionally, the increase in the block size of processed samples per cycle results in diminishing increments in overall processing times per batch of new samples added, demonstrating the scalability of cloud computing schemes in long-range DAS and its capability to manage larger datasets efficiently.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Khabti J, AlAhmadi S, A Soudani (2024)

Enhancing Deep-Learning Classification for Remote Motor Imagery Rehabilitation Using Multi-Subject Transfer Learning in IoT Environment.

Sensors (Basel, Switzerland), 24(24):.

One of the most promising applications for electroencephalogram (EEG)-based brain-computer interfaces (BCIs) is motor rehabilitation through motor imagery (MI) tasks. However, current MI training requires physical attendance, while remote MI training can be applied anywhere, facilitating flexible rehabilitation. Providing remote MI training raises challenges to ensuring an accurate recognition of MI tasks by healthcare providers, in addition to managing computation and communication costs. The MI tasks are recognized through EEG signal processing and classification, which can drain sensor energy due to the complexity of the data and the presence of redundant information, often influenced by subject-dependent factors. To address these challenges, we propose in this paper a multi-subject transfer-learning approach for an efficient MI training framework in remote rehabilitation within an IoT environment. For efficient implementation, we propose an IoT architecture that includes cloud/edge computing as a solution to enhance the system's efficiency and reduce the use of network resources. Furthermore, deep-learning classification with and without channel selection is applied in the cloud, while multi-subject transfer-learning classification is utilized at the edge node. Various transfer-learning strategies, including different epochs, freezing layers, and data divisions, were employed to improve accuracy and efficiency. To validate this framework, we used the BCI IV 2a dataset, focusing on subjects 7, 8, and 9 as targets. The results demonstrated that our approach significantly enhanced the average accuracy in both multi-subject and single-subject transfer-learning classification. In three-subject transfer-learning classification, the FCNNA model achieved up to 79.77% accuracy without channel selection and 76.90% with channel selection. For two-subject and single-subject transfer learning, the application of transfer learning improved the average accuracy by up to 6.55% and 12.19%, respectively, compared to classification without transfer learning. This framework offers a promising solution for remote MI rehabilitation, providing both accurate task recognition and efficient resource usage.

RevDate: 2025-01-08
CmpDate: 2025-01-08

Barthelemy J, Iqbal U, Qian Y, et al (2024)

Safety After Dark: A Privacy Compliant and Real-Time Edge Computing Intelligent Video Analytics for Safer Public Transportation.

Sensors (Basel, Switzerland), 24(24):.

Public transportation systems play a vital role in modern cities, but they face growing security challenges, particularly related to incidents of violence. Detecting and responding to violence in real time is crucial for ensuring passenger safety and the smooth operation of these transport networks. To address this issue, we propose an advanced artificial intelligence (AI) solution for identifying unsafe behaviours in public transport. The proposed approach employs deep learning action recognition models and utilises technologies like the NVIDIA DeepStream SDK, Amazon Web Services (AWS) Direct Connect, a local edge computing server, ONNXRuntime and MQTT to accelerate the end-to-end pipeline. The solution captures video streams from remote train stations' closed-circuit television (CCTV) networks, processes the data in the cloud, applies the action recognition model, and transmits the results to a live web application. A temporal pyramid network (TPN) action recognition model was trained on a newly curated video dataset mixing open-source resources and live simulated trials to identify the unsafe behaviours. The base model achieved a validation accuracy of 93% when trained using open-source dataset samples, which improved to 97% when the live simulated dataset was included during training. The developed AI system was deployed at Wollongong Train Station (NSW, Australia) and showed impressive accuracy in detecting violence incidents during an 8-week test period, achieving a reliable false-positive (FP) rate of 23%. While the AI correctly identified 30 true-positive incidents, there were 6 false negatives (FNs) in which violence incidents were missed during rainy weather, suggesting that more bad-weather data are needed in the training dataset. The AI model's continuous retraining capability ensures its adaptability to various real-world scenarios, making it a valuable tool for enhancing safety and the overall passenger experience in public transport settings.

RevDate: 2025-01-08

Li L, Zhu L, W Li (2024)

Cloud-Edge-End Collaborative Federated Learning: Enhancing Model Accuracy and Privacy in Non-IID Environments.

Sensors (Basel, Switzerland), 24(24):.

Cloud-edge-end computing architecture is crucial for large-scale edge data processing and analysis. However, the diversity of terminal nodes and task complexity in this architecture often result in non-independent and identically distributed (non-IID) data, making it challenging to balance data heterogeneity and privacy protection. To address this, we propose a privacy-preserving federated learning method based on cloud-edge-end collaboration. Our method fully considers the three-tier architecture of cloud-edge-end systems and the non-IID nature of terminal node data. It enhances model accuracy while protecting the privacy of terminal node data. The proposed method groups terminal nodes based on the similarity of their data distributions and constructs edge subnetworks for training in collaboration with edge nodes, thereby mitigating the negative impact of non-IID data. Furthermore, we enhance WGAN-GP with an attention mechanism to generate balanced synthetic data while preserving key patterns from the original datasets, reducing the adverse effects of non-IID data on global model accuracy while preserving data privacy. In addition, we introduce data resampling and loss function weighting strategies to mitigate model bias caused by imbalanced data distribution. Experimental results on real-world datasets demonstrate that our proposed method significantly outperforms existing approaches in terms of model accuracy, F1-score, and other metrics.
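At the core of any cloud-edge-end federated scheme is weight aggregation across clients; the grouping, WGAN-GP augmentation, and resampling described above are specific to this paper. The sketch below shows only plain federated averaging (FedAvg) of client model weights in NumPy, weighted by client sample counts, as a hedged baseline illustration.

```python
# Plain FedAvg aggregation of client weights, weighted by sample counts
# (baseline illustration only; not the paper's grouped, GAN-augmented method).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of per-client weight lists (matching layer shapes)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=np.float64)
        for weights, size in zip(client_weights, client_sizes):
            acc += (size / total) * weights[layer]
        averaged.append(acc)
    return averaged

# Two toy clients with a single 2x2 weight matrix each.
c1 = [np.array([[1.0, 2.0], [3.0, 4.0]])]
c2 = [np.array([[3.0, 2.0], [1.0, 0.0]])]
print(fed_avg([c1, c2], client_sizes=[100, 300]))  # weighted toward the larger client
```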

RevDate: 2025-01-08
CmpDate: 2025-01-08

Cruz Castañeda WA, P Bertemes Filho (2024)

Improvement of an Edge-IoT Architecture Driven by Artificial Intelligence for Smart-Health Chronic Disease Management.

Sensors (Basel, Switzerland), 24(24):.

One of the health challenges in the 21st century is to rethink approaches to non-communicable disease prevention. A solution is a smart city that implements technology to make health smarter, enables healthcare access, and contributes to all residents' overall well-being. Thus, this paper proposes an architecture to deliver smart health. The architecture is anchored in the Internet of Things and edge computing, and it is driven by artificial intelligence to establish three foundational layers in smart care. Experimental results from a case study on non-invasive glucose prediction show that the architecture senses and acquires data that capture relevant characteristics. The study also establishes a baseline of twelve regression algorithms to assess non-invasive glucose prediction performance in terms of the mean squared error, root mean squared error, and r-squared score; the CatBoost regressor outperforms the other models with MSE of 218.91 and 782.30, RMSE of 14.80 and 27.97, and R² of 0.81 and 0.31 on the training and test sets, respectively. Future research involves extending the performance of the algorithms with new datasets, creating and optimizing embedded AI models, deploying edge-IoT with embedded AI for wearable devices, implementing an autonomous AI cloud engine, and implementing federated learning to deliver scalable smart health in a smart city context.

RevDate: 2025-01-08

Podgorelec D, Strnad D, Kolingerová I, et al (2024)

State-of-the-Art Trends in Data Compression: COMPROMISE Case Study.

Entropy (Basel, Switzerland), 26(12): pii:e26121032.

After a boom that coincided with the advent of the internet, digital cameras, digital video and audio storage and playback devices, the research on data compression has rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of the time, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight. In this article, we try to critically evaluate the most prominent of these trends and to explore their parallels, complementarities, and differences. Digital data restoration mimics the human ability to omit memorising information that is satisfactorily retrievable from the context. Feature-based data compression introduces a two-level data representation with higher-level semantic features and with residuals that correct the feature-restored (predicted) data. The integration of the advantages of individual domain-specific data compression methods into a general approach is also challenging. To the best of our knowledge, a method that addresses all these trends does not exist yet. Our methodology, COMPROMISE, has been developed exactly to make as many solutions to these challenges as possible inter-operable. It incorporates features and digital restoration. Furthermore, it is largely domain-independent (general), asymmetric, and universal. The latter refers to the ability to compress data in a common framework in a lossy, lossless, and near-lossless mode. COMPROMISE may also be considered an umbrella that links many existing domain-dependent and independent methods, supports hybrid lossless-lossy techniques, and encourages the development of new data compression algorithms.

RevDate: 2025-01-06
CmpDate: 2025-01-07

Yang M, Zhu X, Yan F, et al (2025)

Digital-based emergency prevention and control system: enhancing infection control in psychiatric hospitals.

BMC medical informatics and decision making, 25(1):7.

BACKGROUND: The practical application of infectious disease emergency plans in mental health institutions during the ongoing pandemic has revealed significant shortcomings. These manifest as chaotic management of mental health care, a lack of hospital infection prevention and control (IPC) knowledge among medical staff, and unskilled practical operation. These factors result in suboptimal decision-making and emergency response execution. Consequently, we have developed a digital-based emergency prevention and control system to reinforce IPC management in psychiatric hospitals and enhance the hospital IPC capabilities of medical staff.

METHODS: The system incorporates modern technologies such as cloud computing, big data, streaming media, and knowledge graphs. A cloud service platform was established at the PaaS layer using Docker container technology to manage infectious disease emergency-related services. The system provides application services to various users through a Browser/Server Architecture. The system was implemented in a class A tertiary mental health center from March 1st, 2022, to February 28th, 2023. Twelve months of emergency IPC training and education were conducted based on the system. The system's functions and the users' IPC capabilities were evaluated.

RESULTS: A total of 116 employees participated in using the system. The system performance evaluation indicated that functionality (3.78 ± 0.68), practicality (4.02 ± 0.74), reliability (3.45 ± 0.50), efficiency (4.14 ± 0.69), accuracy (3.36 ± 0.58), and assessability (3.05 ± 0.47) met basic levels (> 3), with efficiency improvement and practicality achieving a good level (> 4). After 12 months of training and study based on the system, the participants demonstrated improved emergency knowledge (χ² = 37.69, p < 0.001) and skills (p < 0.001).

CONCLUSION: The findings of this study indicate that the digital-based emergency IPC system has the potential to enhance the emergency IPC knowledge base and operational skills of medical personnel in psychiatric hospitals. Furthermore, the medical personnel appear to be better adapted to the system. Consequently, the system has the capacity to facilitate the emergency IPC response of psychiatric institutions to infectious diseases, while simultaneously optimising the training and educational methodologies employed in emergency prevention and control. The promotion and application of this system in psychiatric institutions has the potential to accelerate the digitalisation and intelligence construction of psychiatric hospitals.

RevDate: 2025-01-07

Vandewinckele L, Benazzouz C, Delombaerde L, et al (2024)

Pro-active risk analysis of an in-house developed deep learning based autoplanning tool for breast Volumetric Modulated Arc Therapy.

Physics and imaging in radiation oncology, 32:100677.

BACKGROUND AND PURPOSE: With the increasing amount of in-house created deep learning models in radiotherapy, it is important to know how to minimise the risks associated with the local clinical implementation prior to clinical use. The goal of this study is to give an example of how to identify the risks and find mitigation strategies to reduce these risks in an implemented workflow containing a deep learning based planning tool for breast Volumetric Modulated Arc Therapy.

MATERIALS AND METHODS: The deep learning model ran on a private Google Cloud environment for adequate computational capacity and was integrated into a workflow that could be initiated within the clinical Treatment Planning System (TPS). A proactive Failure Mode and Effect Analysis (FMEA) was conducted by a multidisciplinary team, including physicians, physicists, dosimetrists, technologists, quality managers, and the research and development team. Failure modes categorised as 'Not acceptable' and 'Tolerable' on the risk matrix were further examined to find mitigation strategies.

RESULTS: In total, 39 failure modes were defined for the total workflow, divided over four steps. Of these, 33 were deemed 'Acceptable', five 'Tolerable', and one 'Not acceptable'. Mitigation strategies, such as a case-specific Quality Assurance report, additional scripted checks and properties, a pop-up window, and time stamp analysis, reduced the failure modes to two 'Tolerable' and none in the 'Not acceptable' region.

CONCLUSIONS: The pro-active risk analysis revealed possible risks in the implemented workflow and led to the implementation of mitigation strategies that decreased the risk scores for safer clinical use.

RevDate: 2025-01-06

Li S, Wan H, Yu Q, et al (2025)

Downscaling of ERA5 reanalysis land surface temperature based on attention mechanism and Google Earth Engine.

Scientific reports, 15(1):675.

Land Surface Temperature (LST) is widely recognized as a sensitive indicator of climate change, and it plays a significant role in ecological research. The ERA5-Land LST dataset, developed and managed by the European Centre for Medium-Range Weather Forecasts (ECMWF), is extensively used for global and regional LST studies. However, its fine-scale application is limited by its low spatial resolution. Therefore, to improve the spatial resolution of ERA5-Land LST data, this study proposes an Attention Mechanism U-Net (AMUN) method, which combines data acquisition and preprocessing on the Google Earth Engine (GEE) cloud computing platform, to downscale the hourly monthly mean reanalysis LST data of ERA5-Land across China's territory from 0.1° to 0.01°. This method comprehensively considers the relationship between LST and surface features, organically combining multiple deep learning modules, including the Global Multi-Factor Cross-Attention (GMFCA) module, the Feature Fusion Residual Dense Block (FFRDB) connection module, and the U-Net module. In addition, a Bayesian global optimization algorithm is used to select the optimal hyperparameters of the network in order to enhance the predictive performance of the model. Finally, the downscaling accuracy of the network was evaluated through simulated data experiments and real data experiments and compared with the Random Forest (RF) method. The results show that the network proposed in this study outperforms the RF method, with RMSE reduced by approximately 32-51%. The downscaling method proposed in this study can effectively improve the accuracy of ERA5-Land LST downscaling, providing new insights for LST downscaling research.

RevDate: 2025-01-04

Belbase P, Bhusal R, Ghimire SS, et al (2024)

Assuring assistance to healthcare and medicine: Internet of Things, Artificial Intelligence, and Artificial Intelligence of Things.

Frontiers in artificial intelligence, 7:1442254.

INTRODUCTION: The convergence of healthcare with the Internet of Things (IoT) and Artificial Intelligence (AI) is reshaping medical practice with promising enhanced data-driven insights, automated decision-making, and remote patient monitoring. It has the transformative potential of these technologies to revolutionize diagnosis, treatment, and patient care.

PURPOSE: This study aims to explore the integration of IoT and AI in healthcare, outlining their applications, benefits, challenges, and potential risks. By synthesizing existing literature, it provides insights into the current landscape of AI, IoT, and AIoT in healthcare, identifies areas for future research and development, and establishes a framework for the effective use of AI in health.

METHOD: A comprehensive literature review was conducted using indexed databases such as PubMed/Medline, Scopus, and Google Scholar. Key search terms related to IoT, AI, healthcare, and medicine were employed to identify relevant studies. Papers were screened for relevance to the specified themes, and a final set of papers was methodically selected for this review.

RESULTS: The integration of IoT and AI in healthcare offers significant advancements, including remote patient monitoring, personalized medicine, and operational efficiency. Wearable sensors, cloud-based data storage, and AI-driven algorithms enable real-time data collection, disease diagnosis, and treatment planning. However, challenges such as data privacy, algorithmic bias, and regulatory compliance must be addressed to ensure responsible deployment of these technologies.

CONCLUSION: Integrating IoT and AI in healthcare holds immense promise for improving patient outcomes and optimizing healthcare delivery. Despite challenges such as data privacy concerns and algorithmic biases, the transformative potential of these technologies cannot be overstated. Clear governance frameworks, transparent AI decision-making processes, and ethical considerations are essential to mitigate risks and harness the full benefits of IoT and AI in healthcare.

RevDate: 2025-01-01

Dommer J, Van Doorslaer K, Afrasiabi C, et al (2024)

PaVE 2.0: Behind the Scenes of the Papillomavirus Episteme.

Journal of molecular biology pii:S0022-2836(24)00555-2 [Epub ahead of print].

The Papilloma Virus Episteme (PaVE) https://pave.niaid.nih.gov/ was initiated by NIAID in 2008 to provide a highly curated bioinformatic and knowledge resource for the papillomavirus scientific community. It rapidly became the fundamental and core resource for papillomavirus researchers and clinicians worldwide. Over time, the software infrastructure became severely outdated. In PaVE 2.0, the underlying libraries and hosting platform have been completely upgraded and rebuilt using Amazon Web Services (AWS) tools and automated CI/CD (continuous integration and deployment) pipelines for deployment of the application and data (now in AWS S3 cloud storage). PaVE 2.0 is hosted on three AWS ECS (Elastic Container Service) instances using the NIAID Operations & Engineering Branch's Monarch tech stack and Terraform. A new Celery queue supports longer-running tasks. The framework is Python Flask with a JavaScript/JINJA template front end, and the database switched from MySQL to Neo4j. A Swagger API (Application Programming Interface) performs database queries, executes jobs for BLAST, MAFFT, and the L1 typing tool, and will allow future programmatic data access. All major tools such as BLAST, the L1 typing tool, genome locus viewer, phylogenetic tree generator, multiple sequence alignment, and protein structure viewer were modernized and enhanced to support more users. Multiple sequence alignment uses MAFFT instead of COBALT. The protein structure viewer was changed from Jmol to Mol*, the new embeddable viewer used by RCSB (Research Collaboratory for Structural Bioinformatics). In summary, PaVE 2.0 allows us to continue to provide this essential resource with an open-source framework that could be used as a template for molecular biology databases of other viruses.

RevDate: 2025-01-04

Dugyala R, Chithaluru P, Ramchander M, et al (2024)

Secure cloud computing: leveraging GNN and leader K-means for intrusion detection optimization.

Scientific reports, 14(1):30906 pii:10.1038/s41598-024-81442-7.

Over the past two decades, cloud computing has experienced exponential growth, becoming a critical resource for organizations and individuals alike. However, this rapid adoption has introduced significant security challenges, particularly in intrusion detection, where traditional systems often struggle with low detection accuracy and high processing times. To address these limitations, this research proposes an optimized Intrusion Detection System (IDS) that leverages Graph Neural Networks and the Leader K-means clustering algorithm. The primary aim of the study is to enhance both the accuracy and efficiency of intrusion detection within cloud environments. Key contributions of this work include the integration of the Leader K-means algorithm for effective data clustering, improving the IDS's ability to differentiate between normal and malicious activities. Additionally, the study introduces an optimized Grasshopper Optimization algorithm, which enhances the performance of the Optimal Neural Network, further refining detection accuracy. For added data security, the system incorporates Advanced Encryption Standard encryption and steganography, ensuring robust protection of sensitive information. The proposed solution has been implemented on the Java platform with CloudSim support, and the findings demonstrate a significant improvement in both detection accuracy and processing efficiency compared to existing methods. This research presents a comprehensive solution to the ongoing security challenges in cloud computing, offering a valuable contribution to the field.
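
The leader-followed-by-K-means clustering step named above can be illustrated with a short, self-contained sketch. This is not the authors' implementation; it is a minimal Python example of single-pass leader clustering learned on benign traffic, followed by a nearest-leader distance test to flag anomalous flows. The distance threshold, cutoff, and flow features are hypothetical.

```python
import numpy as np

def leader_clusters(X, threshold):
    """Single-pass leader clustering: a point joins the first leader
    within `threshold`, otherwise it becomes a new leader."""
    leaders = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - l) for l in leaders) > threshold:
            leaders.append(x)
    return np.array(leaders)

def flag_anomalies(X, leaders, cutoff):
    """Mark samples whose distance to every leader exceeds `cutoff`."""
    d = np.linalg.norm(X[:, None, :] - leaders[None, :, :], axis=2)
    return d.min(axis=1) > cutoff

# Hypothetical flow features (e.g., packet rate, bytes, duration), scaled to [0, 1].
rng = np.random.default_rng(0)
normal = rng.normal(0.3, 0.05, size=(200, 3))
attack = rng.normal(0.9, 0.05, size=(10, 3))
X = np.vstack([normal, attack])

leaders = leader_clusters(normal, threshold=0.2)   # learn "normal" leaders only
print(flag_anomalies(X, leaders, cutoff=0.3).sum(), "flows flagged as anomalous")
```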

RevDate: 2025-01-04

Ahmad SZ, F Qamar (2024)

A hybrid AI based framework for enhancing security in satellite based IoT networks using high performance computing architecture.

Scientific reports, 14(1):30695.

IoT device security has become a major concern as a result of the rapid expansion of the Internet of Things (IoT) and the growing adoption of cloud computing for central monitoring and management. To provide centrally managed services, each IoT device has to connect to its respective High-Performance Computing (HPC) cloud. The ever-increasing deployment of IoT devices linked to HPC clouds uses various media, both wired and wireless, and the security challenges increase further when these devices communicate over satellite links. This Satellite-Based IoT-HPC Cloud architecture poses new security concerns that exacerbate the problem. An intrusion detection technology integrated into the central cloud is suggested as a potential remedy to monitor and detect aberrant activity within the network. However, the enormous amounts of data generated by IoT devices and their constrained computing power do not allow IDS techniques to be implemented at the source and render typical centralized Intrusion Detection Systems (IDS) ineffective. Moreover, powerful intrusion detection techniques are required to protect these systems because of the inherent vulnerabilities of IoT devices and the possible hazards during data transmission. The literature survey reveals that prior work has detected only a few types of attacks using conventional IDS models, and computational expense in terms of processing time is also an important parameter to consider. This work introduces a novel Embedded Hybrid Deep Learning-based intrusion detection technique (EHID) created specifically for IoT devices linked to HPC clouds via satellite connectivity. Two Deep Learning (DL) algorithms are integrated in the proposed method to detect 14 types of threats with good accuracy while considering processing time and the number of trainable parameters, and the method segregates normal traffic from attack traffic. We also modify the conventional IDS approach and propose an architectural change to harness the processing power of the cloud's central server. This hybrid approach effectively detects threats by combining the computing power available in the HPC cloud with the power of AI. Additionally, the proposed system enables real-time monitoring and detection of intrusions while providing monitoring and management services through HPC using IoT-generated data. Experiments on the Edge-IIoTset Cyber Security Dataset of IoT & IIoT indicate improved detection accuracy, reduced false positives, and efficient computational performance.

RevDate: 2024-12-27

Salcedo E (2024)

Computer Vision-Based Gait Recognition on the Edge: A Survey on Feature Representations, Models, and Architectures.

Journal of imaging, 10(12): pii:jimaging10120326.

Computer vision-based gait recognition (CVGR) is a technology that has gained considerable attention in recent years due to its non-invasive, unobtrusive, and difficult-to-conceal nature. Beyond its applications in biometrics, CVGR holds significant potential for healthcare and human-computer interaction. Current CVGR systems often transmit collected data to a cloud server for machine learning-based gait pattern recognition. While effective, this cloud-centric approach can result in increased system response times. Alternatively, the emerging paradigm of edge computing, which involves moving computational processes to local devices, offers the potential to reduce latency, enable real-time surveillance, and eliminate reliance on internet connectivity. Furthermore, recent advancements in low-cost, compact microcomputers capable of handling complex inference tasks (e.g., Jetson Nano Orin, Jetson Xavier NX, and Khadas VIM4) have created exciting opportunities for deploying CVGR systems at the edge. This paper reports the state of the art in gait data acquisition modalities, feature representations, models, and architectures for CVGR systems suitable for edge computing. Additionally, this paper addresses the general limitations and highlights new avenues for future research in the promising intersection of CVGR and edge computing.

RevDate: 2025-01-04

Chen J, Hoops S, Mortveit HS, et al (2025)

Epihiper-A high performance computational modeling framework to support epidemic science.

PNAS nexus, 4(1):pgae557.

This paper describes Epihiper, a state-of-the-art, high performance computational modeling framework for epidemic science. The Epihiper modeling framework supports custom disease models, and can simulate epidemics over dynamic, large-scale networks while supporting modulation of the epidemic evolution through a set of user-programmable interventions. The nodes and edges of the social-contact network have customizable sets of static and dynamic attributes which allow the user to specify intervention target sets at a very fine-grained level; these also permit the network to be updated in response to nonpharmaceutical interventions, such as school closures. The execution of interventions is governed by trigger conditions, which are Boolean expressions formed using any of Epihiper's primitives (e.g. the current time, transmissibility) and user-defined sets (e.g. people with work activities). Rich expressiveness, extensibility, and high-performance computing responsiveness were central design goals to ensure that the framework could effectively target realistic scenarios at the scale and detail required to support the large computational designs needed by state and federal public health policymakers in their efforts to plan and respond in the event of epidemics. The modeling framework has been used to support the CDC Scenario Modeling Hub for COVID-19 response, and was a part of a hybrid high-performance cloud system that was nominated as a finalist for the 2021 ACM Gordon Bell Special Prize for high performance computing-based COVID-19 Research.
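
The trigger-condition idea described above (Boolean expressions over simulation primitives and user-defined sets that gate interventions) can be mimicked in a few lines. The sketch below is purely illustrative and is not Epihiper's expression language; the primitive names, thresholds, and the set of workers are hypothetical stand-ins.

```python
# Minimal illustration of intervention triggers as Boolean predicates
# over simulation state; names and thresholds are hypothetical.
state = {"time": 42, "transmissibility": 0.07, "workers": {"p1", "p2", "p3"}}

triggers = {
    "close_schools": lambda s: s["time"] >= 30 and s["transmissibility"] > 0.05,
    "notify_workers": lambda s: len(s["workers"]) > 0 and s["time"] >= 40,
}

active = [name for name, cond in triggers.items() if cond(state)]
print("interventions fired:", active)
```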

RevDate: 2025-01-04
CmpDate: 2024-12-19

Blindenbach J, Kang J, Hong S, et al (2024)

SQUiD: ultra-secure storage and analysis of genetic data for the advancement of precision medicine.

Genome biology, 25(1):314.

Cloud computing allows storing the ever-growing genotype-phenotype datasets crucial for precision medicine. Due to the sensitive nature of this data and varied laws and regulations, additional security measures are needed to ensure data privacy. We develop SQUiD, a secure queryable database for storing and analyzing genotype-phenotype data. SQUiD allows storage and secure querying of data in a low-security, low-cost public cloud using homomorphic encryption in a multi-client setting. We demonstrate SQUiD's practical usability and scalability using synthetic and UK Biobank data.

RevDate: 2025-01-04
CmpDate: 2024-12-17

Ma'moun S, Farag R, Abutaleb K, et al (2024)

Habitat Suitability Modelling for the Red Dwarf Honeybee (Apis florea (Linnaeus)) and Its Distribution Prediction Using Machine Learning and Cloud Computing.

Neotropical entomology, 54(1):18.

Apis florea bees were recently identified in Egypt, marking the second occurrence of this species on the African continent. The objective of this study was to track the distribution of A. florea in Egypt and evaluate its potential for invasive behaviour. Field surveys were conducted over a 2-year period, resulting in the collection of data on the spatial distribution of the red dwarf honeybees. A comprehensive analysis was performed utilizing long-term monthly temperature and rainfall data to generate spatially interpolated climate surfaces with a 1-km resolution. Vegetation variables derived from Terra MODIS were also incorporated. Furthermore, elevation data obtained from the Shuttle Radar Topography Mission were utilized to derive slope, aspect, and hillshade based on the digital elevation model. The collected data were subjected to resampling for optimal data smoothing. Subsequently, a random forest model was applied, followed by an accuracy assessment to evaluate the classification output. The results indicated that the mean temperature of the coldest quarter (bio11), annual mean temperature (bio01), and minimum temperature of the coldest month (bio06) were the most important temperature-derived parameters, while annual precipitation (bio12) and precipitation of the wettest quarter (bio16) were the most important precipitation parameters, together with the non-tree vegetation parameter and elevation. The calculation of the Habitat Suitability Index revealed that the most suitable areas, covering a total of 200131.9 km[2], were predominantly situated in the eastern and northern regions of Egypt, including the Nile Delta characterized by its fertile agricultural lands and the presence of the river Nile. In contrast, the western and southern parts exhibited low habitat suitability due to the absence of significant green vegetation and low relative humidity.

RevDate: 2025-01-04

Zhou J, Chen S, Kuang H, et al (2024)

Optimal robust configuration in cloud environment based on heuristic optimization algorithm.

PeerJ. Computer science, 10:e2350.

When analyzing performance in cloud computing, unpredictable perturbations that may lead to performance degradation are essential factors that should not be neglected. To prevent performance degradation in cloud computing systems, it is reasonable to measure the impact of the perturbations and propose a robust configuration strategy to maintain the performance of the system at an acceptable level. In this article, unlike previous research focusing on profit maximization and waiting time minimization, our study starts with the bottom line of expected performance degradation due to perturbation. The bottom line is quantified as the minimum acceptable profit and the maximum acceptable waiting time, and then the corresponding feasible region is defined. By comparing the system's actual working performance with the bottom line, the concept of robustness is invoked as a guiding basis for configuring server size and speed in feasible regions, so that the performance of the cloud computing system can be maintained at an acceptable level when perturbed. Subsequently, to improve the robustness of the system as much as possible, we discuss a robustness measurement method. A heuristic optimization algorithm is proposed and compared with other heuristic optimization algorithms to verify its performance. Experimental results show that the magnitude error of the solution of our algorithm compared with the most advanced benchmark scheme is on the order of 10[-6], indicating the accuracy of our solution.

RevDate: 2024-12-13

Mou T, Y Liu (2024)

Utilizing the cloud-based satellite platform to explore the dynamics of coastal aquaculture ponds from 1986 to 2020 in Shandong Province, China.

Marine pollution bulletin, 211:117414 pii:S0025-326X(24)01391-2 [Epub ahead of print].

Coastal pond aqua farming is critical in aquaculture and significantly contributes to the seafood supply. Meanwhile, the development of aquaculture ponds also threatens vulnerable wetland resources and coastal ecosystems. Accurate statistics regarding the distribution and variability of coastal pond aquaculture are crucial for balancing the sustainable development of coastal aquaculture and preserving the coastal environment and ecosystems. Satellite imagery offers a valuable tool for detecting spatial-temporal information related to these coastal ponds. However, integrating multiple remote sensing images to acquire comprehensive spatial information about the coastal ponds remains challenging. This study utilized a decision-tree classifier applied to Landsat data to detect the spatial distribution of coastal ponds in Shandong Province from 1986 to 2020, with data analyzed at five-year intervals, primarily based on the Google Earth Engine cloud platform. A pond map in 2020, extracted from Sentinel-2 imagery, was used as a reference map and combined with the results from Landsat data to explore the landscape changes of coastal ponds. The results indicated that Shandong Province's coastal pond area underwent significant expansion before 1990, followed by slower growth from 1990 to 2010 and eventual shrinkage after 2010. Specifically, the pond area expanded from 428.38 km[2] in 1986 to a peak of 2149.51 km[2] in 2010 before contracting to 2012.39 km[2] in 2020. The region near Bohai Bay emerged as the epicenter of Shandong's coastal aquaculture, encompassing 62 % of the total pond area in 2020. Government policies previously promoted the expansion of coastal pond farming but shifted to curbing the uncontrolled development of aquaculture ponds.
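
As a toy illustration of the decision-tree style thresholding commonly used to separate water-filled ponds from other land cover, the sketch below applies an NDWI test to already-loaded band arrays. The band values, the threshold, and the use of NDWI here are assumptions for illustration; they are not the study's actual classifier or its Google Earth Engine implementation.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index from green and NIR reflectance."""
    return (green - nir) / (green + nir + 1e-9)

def pond_mask(green, nir, water_thresh=0.1):
    """Very small decision rule: flag pixels whose NDWI exceeds a threshold."""
    return ndwi(green, nir) > water_thresh

# Hypothetical 3x3 reflectance chips standing in for Landsat green and NIR bands.
green = np.array([[0.10, 0.12, 0.30], [0.11, 0.28, 0.31], [0.09, 0.10, 0.29]])
nir   = np.array([[0.25, 0.24, 0.05], [0.26, 0.06, 0.04], [0.27, 0.25, 0.05]])
print(pond_mask(green, nir).astype(int))   # 1 = likely pond/water pixel
```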

RevDate: 2024-12-13
CmpDate: 2024-12-13

Alipio K, García-Colón J, Boscarino N, et al (2025)

Indigenous Data Sovereignty, Circular Systems, and Solarpunk Solutions for a Sustainable Future.

Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 30:717-733.

Recent advancements in Artificial Intelligence (AI) and data center infrastructure have brought the global cloud computing market to the forefront of conversations about sustainability and energy use. Current policy and infrastructure for data centers prioritize economic gain and resource extraction, inherently unsustainable models which generate massive amounts of energy and heat waste. Our team proposes the formation of policy around earth-friendly computation practices rooted in Indigenous models of circular systems of sustainability. By looking to alternative systems of sustainability rooted in Indigenous values of aloha 'āina, or love for the land, we find examples of traditional ecological knowledge (TEK) that can be imagined alongside Solarpunk visions for a more sustainable future, one in which technology works with the environment, reusing electronic waste (e-waste) and improving data life cycles.

RevDate: 2024-12-13
CmpDate: 2024-12-13

Ramwala OA, Lowry KP, Hippe DS, et al (2025)

ClinValAI: A framework for developing Cloud-based infrastructures for the External Clinical Validation of AI in Medical Imaging.

Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 30:215-228.

Artificial Intelligence (AI) algorithms showcase the potential to steer a paradigm shift in clinical medicine, especially medical imaging. Concerns associated with model generalizability and biases necessitate rigorous external validation of AI algorithms prior to their adoption into clinical workflows. To address the barriers associated with patient privacy, intellectual property, and diverse model requirements, we introduce ClinValAI, a framework for establishing robust cloud-based infrastructures to clinically validate AI algorithms in medical imaging. By featuring dedicated workflows for data ingestion, algorithm scoring, and output processing, we propose an easily customizable method to assess AI models and investigate biases. Our novel orchestration mechanism facilitates utilizing the complete potential of the cloud computing environment. ClinValAI's input auditing and standardization mechanisms ensure that inputs consistent with model prerequisites are provided to the algorithm for a streamlined validation. The scoring workflow comprises multiple steps to facilitate consistent inferencing and systematic troubleshooting. The output processing workflow helps identify and analyze samples with missing results and aggregates final outputs for downstream analysis. We demonstrate the usability of our work by evaluating a state-of-the-art breast cancer risk prediction algorithm on a large and diverse dataset of 2D screening mammograms. We perform comprehensive statistical analysis to study model calibration and evaluate performance on important factors, including breast density, age, and race, to identify latent biases. ClinValAI provides a holistic framework to validate medical imaging models and has the potential to advance the development of generalizable AI models in clinical medicine and promote health equity.

RevDate: 2024-12-15
CmpDate: 2024-12-13

Anderson W, Bhatnagar R, Scollick K, et al (2024)

Real-world evidence in the cloud: Tutorial on developing an end-to-end data and analytics pipeline using Amazon Web Services resources.

Clinical and translational science, 17(12):e70078.

In the rapidly evolving landscape of healthcare and drug development, the ability to efficiently collect, process, and analyze large volumes of real-world data (RWD) is critical for advancing drug development. This article provides a blueprint for establishing an end-to-end data and analytics pipeline in a cloud-based environment. The pipeline presented here includes four major components, including data ingestion, transformation, visualization, and analytics, each supported by a suite of Amazon Web Services (AWS) tools. The pipeline is exemplified through the CURE ID platform, a collaborative tool designed to capture and analyze real-world, off-label treatment administrations. By using services such as AWS Lambda, Amazon Relational Database Service (RDS), Amazon QuickSight, and Amazon SageMaker, the pipeline facilitates the ingestion of diverse data sources, the transformation of raw data into structured formats, the creation of interactive dashboards for data visualization, and the application of advanced machine learning models for data analytics. The described architecture not only supports the needs of the CURE ID platform, but also offers a scalable and adaptable framework that can be applied across various domains to enhance data-driven decision making beyond drug repurposing.
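
For readers who want a sense of how such an AWS pipeline is wired together programmatically, the boto3 snippet below sketches two of the stages described above: staging a raw file in S3 for ingestion and scoring a record against a SageMaker endpoint for analytics. The bucket name, endpoint name, and payload format are hypothetical; the CURE ID platform's actual resources are not specified in the abstract.

```python
import json
import boto3

s3 = boto3.client("s3")
smr = boto3.client("sagemaker-runtime")

# 1) Ingestion: drop a raw case-report file into a landing bucket, where
#    (in this sketch) an S3-triggered Lambda would transform it into structured form.
s3.upload_file("case_report.csv", "example-rwd-landing-bucket", "raw/case_report.csv")

# 2) Analytics: score a transformed record against a deployed model endpoint.
payload = {"age": 54, "treatment": "off-label-drug-x", "outcome_known": False}
resp = smr.invoke_endpoint(
    EndpointName="example-cureid-model",   # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(resp["Body"].read().decode())
```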

RevDate: 2024-12-14

Bao H, Yuan M, Deng H, et al (2024)

Secure multiparty computation protocol based on homomorphic encryption and its application in blockchain.

Heliyon, 10(14):e34458.

Blockchain is a key technology in the current information field and has been widely used in various industries. Blockchain technology faces significant challenges in privacy protection while ensuring data immutability and transparency, so it is crucial to implement privacy-preserving computation in blockchain. To address the privacy issues in blockchain, we design a secure multi-party computation (SMPC) protocol DHSMPC based on homomorphic encryption in this paper. On the one hand, homomorphic encryption technology can operate directly on ciphertext, solving the privacy problem in the blockchain. On the other hand, this paper designs the directed decryption function of DHSMPC to resist malicious opponents in the CRS model, so that authorized users who do not participate in the calculation can also access the decryption results of secure multi-party computation. Analytical and experimental results show that DHSMPC has a smaller ciphertext size and stronger performance than existing SMPC protocols. The protocol makes it possible to implement complex calculations in multi-party scenarios and is proven to be resistant to various semi-malicious attacks, ensuring data security and privacy. Finally, this article combines the designed DHSMPC protocol with blockchain and cloud computing, showing how to use this solution to achieve trusted data management in specific scenarios.

RevDate: 2025-01-04
CmpDate: 2024-12-13

Oh S, Gravel-Pucillo K, Ramos M, et al (2024)

AnVILWorkflow: A runnable workflow package for Cloud-implemented bioinformatics analysis pipelines.

F1000Research, 13:1257.

Advancements in sequencing technologies and the development of new data collection methods produce large volumes of biological data. The Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL) provides a cloud-based platform for democratizing access to large-scale genomics data and analysis tools. However, utilizing the full capabilities of AnVIL can be challenging for researchers without extensive bioinformatics expertise, especially for executing complex workflows. We present the AnVILWorkflow R package, which enables the convenient execution of bioinformatics workflows hosted on AnVIL directly from an R environment. AnVILWorkflow simplifies the setup of the cloud computing environment, input data formatting, workflow submission, and retrieval of results through intuitive functions. We demonstrate the utility of AnVILWorkflow for three use cases: bulk RNA-seq analysis with Salmon, metagenomics analysis with bioBakery, and digital pathology image processing with PathML. The key features of AnVILWorkflow include user-friendly browsing of available data and workflows, seamless integration of R and non-R tools within a reproducible analysis pipeline, and accessibility to scalable computing resources without direct management overhead. AnVILWorkflow lowers the barrier to utilizing AnVIL's resources, especially for exploratory analyses or bulk processing with established workflows. This empowers a broader community of researchers to leverage the latest genomics tools and datasets using familiar R syntax. This package is distributed through the Bioconductor project (https://bioconductor.org/packages/AnVILWorkflow), and the source code is available through GitHub (https://github.com/shbrief/AnVILWorkflow).

RevDate: 2024-12-14
CmpDate: 2024-12-12

Bano S, Abbas G, Bilal M, et al (2024)

PHyPO: Priority-based Hybrid task Partitioning and Offloading in mobile computing using automated machine learning.

PloS one, 19(12):e0314198.

With the increasing demand for mobile computing, the requirement for intelligent resource management has also increased. Cloud computing lessens the energy consumption of user equipment, but it increases the latency of the system. Whereas edge computing reduces the latency along with the energy consumption, it has limited resources and cannot process bigger tasks. To resolve these issues, a Priority-based Hybrid task Partitioning and Offloading (PHyPO) scheme is introduced in this paper, which prioritizes the tasks with high time sensitivity and offloads them intelligently. It also calculates the optimal number of partitions a task can be divided into. The utility of resources is maximized along with increasing the processing capability of the model by using a hybrid architecture, consisting of mobile devices, edge servers, and cloud servers. Automated machine learning is used to identify the optimal classification models, along with tuning their hyper-parameters, which results in adaptive boosting ensemble learning-based models to reduce the time complexity of the system to O(1). The results of the proposed algorithm show a significant improvement over benchmark techniques along with achieving an accuracy of 96.1% for the optimal partitioning model and 94.3% for the optimal offloading model, with both the results being achieved in significantly less or equal time as compared to the benchmark techniques.
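
A stripped-down version of the priority-based partition-and-offload decision can be written as plain Python. The scoring thresholds, the partition rule, and the three execution tiers below are hypothetical; the actual PHyPO scheme tunes its classification models with automated machine learning rather than fixed rules.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # time sensitivity
    cycles: float        # computational complexity (megacycles)
    data_mb: float       # input data size

def plan(task, max_edge_cycles=500, urgent_ms=50, big_data_mb=50):
    """Pick an execution tier and a (toy) partition count for a task."""
    partitions = max(1, int(task.cycles // max_edge_cycles))
    if task.deadline_ms <= urgent_ms:
        tier = "edge" if task.cycles <= max_edge_cycles else "edge+cloud"
    else:
        tier = "cloud" if task.cycles > max_edge_cycles or task.data_mb > big_data_mb else "mobile"
    return tier, partitions

for t in [Task("fall-alert", 20, 120, 0.2), Task("video-summary", 500, 2400, 80)]:
    print(t.name, "->", plan(t))
```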

RevDate: 2024-12-13

Katapally TR (2024)

It's late, but not too late to transform health systems: a global digital citizen science observatory for local solutions to global problems.

Frontiers in digital health, 6:1399992.

A key challenge in monitoring, managing, and mitigating global health crises is the need to coordinate clinical decision-making with systems outside of healthcare. In the 21st century, human engagement with Internet-connected ubiquitous devices generates an enormous amount of big data, which can be used to address complex, intersectoral problems via participatory epidemiology and mHealth approaches that can be operationalized with digital citizen science. These big data - which traditionally exist outside of health systems - are underutilized even though their usage can have significant implications for prediction and prevention of communicable and non-communicable diseases. To address critical challenges and gaps in big data utilization across sectors, a Digital Citizen Science Observatory (DiScO) is being developed by the Digital Epidemiology and Population Health Laboratory by scaling up existing digital health infrastructure. DiScO's development is informed by the Smart Framework, which leverages ubiquitous devices for ethical surveillance. The Observatory will be operationalized by implementing a rapidly adaptable, replicable, and scalable progressive web application that repurposes jurisdiction-specific cloud infrastructure to address crises across jurisdictions. The Observatory is designed to be highly adaptable for both rapid data collection as well as rapid responses to emerging and existing crises. Data sovereignty and decentralization of technology are core aspects of the observatory, where citizens can own the data they generate, and researchers and decision-makers can re-purpose digital health infrastructure. The ultimate aim of DiScO is to transform health systems by breaking existing jurisdictional silos in addressing global health crises.

RevDate: 2024-12-14

Parente L, Sloat L, Mesquita V, et al (2024)

Annual 30-m maps of global grassland class and extent (2000-2022) based on spatiotemporal Machine Learning.

Scientific data, 11(1):1303.

The paper describes the production and evaluation of global grassland extent mapped annually for 2000-2022 at 30 m spatial resolution. The dataset showing the spatiotemporal distribution of cultivated and natural/semi-natural grassland classes was produced by using the GLAD Landsat ARD-2 image archive, accompanied by climatic, landform and proximity covariates, spatiotemporal machine learning (per-class Random Forest) and over 2.3 M reference samples (visually interpreted in Very High Resolution imagery). Custom probability thresholds (based on five-fold spatial cross-validation) were used to derive dominant class maps with balanced user's and producer's accuracy, resulting in F1 scores of 0.64 and 0.75 for cultivated and natural/semi-natural grassland, respectively. The produced maps (about 4 TB in size) are available under an open data license as Cloud-Optimized GeoTIFFs and as Google Earth Engine assets. The suggested uses of the data include (1) integration with other compatible land cover products and (2) tracking the intensity and drivers of conversion of land to cultivated grasslands and from natural/semi-natural grasslands into other land use systems.

RevDate: 2025-01-04
CmpDate: 2024-12-17

Truong V, Moore JE, Ricoy UM, et al (2024)

Low-Cost Approaches in Neuroscience to Teach Machine Learning Using a Cockroach Model.

eNeuro, 11(12):.

In an effort to increase access to neuroscience education in underserved communities, we created an educational program that utilizes a simple task to measure place preference of the cockroach (Gromphadorhina portentosa) and the open-source free software, SLEAP Estimates Animal Poses (SLEAP) to quantify behavior. Cockroaches (n = 18) were trained to explore a linear track for 2 min while exposed to either air, vapor, or vapor with nicotine from a port on one side of the linear track over 14 d. The time the animal took to reach the port was measured, along with distance traveled, time spent in each zone, and velocity. As characterizing behavior is challenging and inaccessible for nonexperts new to behavioral research, we created an educational program using the machine learning algorithm, SLEAP, and cloud-based (i.e., Google Colab) low-cost platforms for data analysis. We found that SLEAP was within a 0.5% margin of error when compared with manually scoring the data. Cockroaches were found to have an increased aversive response to vapor alone compared with those that only received air. Using SLEAP, we demonstrate that the x-y coordinate data can be further classified into behavior using dimensionality-reducing clustering methods. This suggests that the linear track can be used to examine nicotine preference for the cockroach, and SLEAP can provide a fast, efficient way to analyze animal behavior. Moreover, this educational program is available for free for students to learn a complex machine learning algorithm without expensive hardware to study animal behavior.
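
The behavioural measures listed above (latency to reach the port, distance travelled, zone occupancy, velocity) are straightforward to compute from exported x-y coordinates such as those produced by SLEAP. The sketch below assumes a pandas DataFrame of per-frame centroid positions at a known frame rate; the column names, frame rate, and zone boundary are hypothetical and the track is synthetic.

```python
import numpy as np
import pandas as pd

FPS = 30          # assumed camera frame rate
PORT_X = 90.0     # hypothetical x-coordinate (cm) marking the odour-port zone

# Hypothetical per-frame centroid track for one 2-minute trial.
track = pd.DataFrame({
    "x": np.linspace(5, 100, 3600),
    "y": 10 + np.sin(np.linspace(0, 20, 3600)),
})

step = np.hypot(track["x"].diff(), track["y"].diff()).fillna(0)  # per-frame displacement
distance_cm = step.sum()
velocity_cm_s = step.mean() * FPS
in_zone = (track["x"] >= PORT_X).to_numpy()
time_in_port_zone_s = in_zone.sum() / FPS
latency_to_port_s = track.index[in_zone][0] / FPS

print(f"distance={distance_cm:.1f} cm, mean velocity={velocity_cm_s:.2f} cm/s, "
      f"latency={latency_to_port_s:.1f} s, time in port zone={time_in_port_zone_s:.1f} s")
```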

RevDate: 2024-12-11
CmpDate: 2024-12-09

Consoli D, Parente L, Simoes R, et al (2024)

A computational framework for processing time-series of earth observation data based on discrete convolution: global-scale historical Landsat cloud-free aggregates at 30 m spatial resolution.

PeerJ, 12:e18585.

Processing large collections of earth observation (EO) time-series, often petabyte-sized, such as NASA's Landsat and ESA's Sentinel missions, can be computationally prohibitive and costly. Despite their name, even the Analysis Ready Data (ARD) versions of such collections can rarely be used as direct input for modeling because of cloud presence and/or prohibitive storage size. Existing solutions for readily using these data are not openly available, are poor in performance, or lack flexibility. Addressing this issue, we developed TSIRF (Time-Series Iteration-free Reconstruction Framework), a computational framework that can be used to apply diverse time-series processing tasks, such as temporal aggregation and time-series reconstruction by simply adjusting the convolution kernel. As the first large-scale application, TSIRF was employed to process the entire Global Land Analysis and Discovery (GLAD) ARD Landsat archive, producing a cloud-free bi-monthly aggregated product. This process, covering seven Landsat bands globally from 1997 to 2022, with more than two trillion pixels and for each one a time-series of 156 samples in the aggregated product, required approximately 28 hours of computation using 1248 Intel[®] Xeon[®] Gold 6248R CPUs. The quality of the result was assessed using a benchmark dataset derived from the aggregated product and comparing different imputation strategies. The resulting reconstructed images can be used as input for machine learning models or to map biophysical indices. To further limit the storage size the produced data was saved as 8-bit Cloud-Optimized GeoTIFFs (COG). With the hosting of about 20 TB per band/index for an entire 30 m resolution bi-monthly historical time-series distributed as open data, the product enables seamless, fast, and affordable access to the Landsat archive for environmental monitoring and analysis applications.
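
The core trick described above, expressing temporal aggregation and reconstruction as a discrete convolution over the time axis so that cloudy samples are skipped without iteration, can be demonstrated on a single pixel's time-series. This is a generic sketch of kernel-based gap filling under assumed inputs, not the TSIRF code: the numerator convolves the valid observations and the denominator convolves the validity mask, and their ratio yields a cloud-free aggregate.

```python
import numpy as np

def kernel_aggregate(values, valid, kernel):
    """Convolve values and validity mask with the same kernel, then normalise,
    so gaps (cloudy observations) do not bias the aggregate."""
    num = np.convolve(np.where(valid, values, 0.0), kernel, mode="same")
    den = np.convolve(valid.astype(float), kernel, mode="same")
    out = np.full_like(num, np.nan)
    np.divide(num, den, out=out, where=den > 0)
    return out

t = np.arange(48)                                   # e.g., 48 scenes over two years
series = 0.4 + 0.2 * np.sin(2 * np.pi * t / 24)     # synthetic reflectance signal
valid = np.random.default_rng(1).random(48) > 0.3   # ~30% of scenes "cloudy"
kernel = np.array([1, 2, 3, 2, 1], dtype=float)     # simple symmetric temporal kernel

print(np.round(kernel_aggregate(series, valid, kernel), 3))
```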

RevDate: 2024-12-11

Chen H, F Al-Turjman (2024)

Cloud-based configurable data stream processing architecture in rural economic development.

PeerJ. Computer science, 10:e2547.

PURPOSE: This study aims to address the limitations of traditional data processing methods in predicting agricultural product prices, which is essential for advancing rural informatization to enhance agricultural efficiency and support rural economic growth.

METHODOLOGY: The RL-CNN-GRU framework combines reinforcement learning (RL), convolutional neural network (CNN), and gated recurrent unit (GRU) to improve agricultural price predictions using multidimensional time series data, including historical prices, weather, soil conditions, and other influencing factors. Initially, the model employs a 1D-CNN for feature extraction, followed by GRUs to capture temporal patterns in the data. Reinforcement learning further optimizes the model, enhancing the analysis and accuracy of multidimensional data inputs for more reliable price predictions.

RESULTS: Testing on public and proprietary datasets shows that the RL-CNN-GRU framework significantly outperforms traditional models in predicting prices, with lower mean squared error (MSE) and mean absolute error (MAE) metrics.

CONCLUSION: The RL-CNN-GRU framework contributes to rural informatization by offering a more accurate prediction tool, thereby supporting improved decision-making in agricultural processes and fostering rural economic development.
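
To make the 1D-CNN feature extraction followed by GRU temporal modelling described in the methodology above concrete, here is a minimal PyTorch sketch of that backbone, without the reinforcement-learning optimisation stage. The layer sizes, number of input features, and window length are hypothetical, not the published configuration.

```python
import torch
import torch.nn as nn

class CNNGRUPredictor(nn.Module):
    """1D-CNN feature extractor followed by a GRU and a linear price head."""
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))      # -> (batch, channels, time)
        out, _ = self.gru(z.transpose(1, 2))  # -> (batch, time, hidden)
        return self.head(out[:, -1])          # predict the next-step price

x = torch.randn(8, 30, 6)   # 8 samples, 30 days, 6 drivers (price, weather, soil, ...)
print(CNNGRUPredictor()(x).shape)             # torch.Size([8, 1])
```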

RevDate: 2024-12-11

Ur Rehman A, Lu S, Ashraf MA, et al (2024)

The role of Internet of Things (IoT) technology in modern cultivation for the implementation of greenhouses.

PeerJ. Computer science, 10:e2309.

In recent years, the Internet of Things (IoT) has become one of the most familiar names in technology, setting benchmarks and scaling new heights. IoT is indeed the future of communication, transforming the objects (things) of the real world into smarter devices. With the advent of IoT technology, this decade is witnessing a transformation from traditional agriculture approaches to the most advanced ones. Limited research has been carried out in this direction. Thus, herein we present various technological aspects involved in IoT-based cultivation. The role and the key components of smart farming using IoT were examined, with a focus on network technologies, including layers, protocols, topologies, network architecture, etc. We also delve into the integration of relevant technologies such as cloud computing and big data analytics with IoT-based cultivation. We explored various security issues in modern IoT cultivation and also emphasized the importance of safeguarding sensitive agricultural data. Additionally, a comprehensive list of applications based on sensors and mobile devices is provided, offering refined solutions for greenhouse management. The principles and regulations established by different countries for IoT-based cultivation systems are presented, demonstrating the global recognition of these technologies. Furthermore, a selection of successful use cases, real-world scenarios, and applications is presented. Finally, the open research challenges and solutions in modern IoT-based cultivation are discussed.

RevDate: 2024-12-11

Akram A, Anjum F, Latif S, et al (2024)

Honey bee inspired resource allocation scheme for IoT-driven smart healthcare applications in fog-cloud paradigm.

PeerJ. Computer science, 10:e2484.

The Internet of Things (IoT) paradigm is a foundational and integral factor in the development of smart applications across different sectors. These applications are composed of sets of interconnected modules that exchange data and realize the distributed data flow (DDF) model. The execution of these modules on distant cloud data centers is prone to quality of service (QoS) degradation. This is where the fog computing philosophy comes in to bridge the gap and bring computation closer to the IoT devices. However, resource management in fog and optimal allocation of fog devices to application modules are critical for better resource utilization and achieving QoS. A significant challenge in this regard is to manage the fog network dynamically to determine cost-effective placement of application modules on resources. In this study, we propose an optimal placement strategy for smart healthcare application modules on fog resources. The objective of this strategy is to ensure optimal execution in terms of latency, bandwidth, and earliest completion time compared to several baseline techniques. A honey bee inspired strategy is proposed for the allocation and utilization of resources for application module processing. To model the application and measure the effectiveness of our strategy, iFogSim Java-based simulation classes were extended, and experiments were conducted that demonstrate satisfactory results.

RevDate: 2024-12-11

Balaji P, Cengiz K, Babu S, et al (2024)

Metaheuristic optimized complex-valued dilated recurrent neural network for attack detection in internet of vehicular communications.

PeerJ. Computer science, 10:e2366.

The Internet of Vehicles (IoV) is a specialized iteration of the Internet of Things (IoT) tailored to facilitate communication and connectivity among vehicles and their environment. It harnesses the power of advanced technologies such as cloud computing, wireless communication, and data analytics to seamlessly exchange real-time data among vehicles, road-side infrastructure, traffic management systems, and other entities. The primary objectives of this real-time data exchange include enhancing road safety, reducing traffic congestion, boosting traffic flow efficiency, and enriching the driving experience. Through the IoV, vehicles can share information about traffic conditions, weather forecasts, road hazards, and other relevant data, fostering smarter, safer, and more efficient transportation networks. Developing, implementing and maintaining sophisticated techniques for detecting attacks present significant challenges and costs, which might limit their deployment, especially in smaller settings or those with constrained resources. To overcome these drawbacks, this article outlines developing an innovative attack detection model for the IoV using advanced deep learning techniques. The model aims to enhance security in vehicular networks by efficiently identifying attacks. Initially, data is collected from online databases and subjected to an optimal feature extraction process. During this phase, the Enhanced Exploitation in Hybrid Leader-based Optimization (EEHLO) method is employed to select the optimal features. These features are utilized by a Complex-Valued Dilated Recurrent Neural Network (CV-DRNN) to detect attacks within vehicle networks accurately. The performance of this novel attack detection model is rigorously evaluated and compared with that of traditional models using a variety of metrics.

RevDate: 2024-12-07

Ojha S, Paygude P, Dhumane A, et al (2024)

A method to enhance privacy preservation in cloud storage through a three-layer scheme for computational intelligence in fog computing.

MethodsX, 13:103053.

Recent advancements in cloud computing have heightened concerns about data control and privacy due to vulnerabilities in traditional encryption methods, which may not withstand internal attacks from cloud servers. To overcome these issues concerning data privacy and control of data transferred to the cloud, a novel three-tier storage model incorporating fog computing has been proposed. This framework leverages the advantages of cloud storage while enhancing data privacy. The approach uses the Hash-Solomon code algorithm to partition data into distinct segments, distributing a portion of it across local machines and fog servers, in addition to cloud storage. This distribution not only increases data privacy but also optimises storage efficiency. Computational intelligence plays a crucial role by calculating the optimal data distribution across cloud, fog, and local servers, ensuring balanced and secure data storage.•Experimental analysis of this mathematical model has demonstrated a significant improvement in storage efficiency, with increases ranging from 30 % to 40 % as the volume of data blocks grows.•This innovative framework based on the Hash-Solomon code method effectively addresses privacy concerns while maintaining the benefits of cloud computing, offering a robust solution for secure and efficient data management.

RevDate: 2024-12-08
CmpDate: 2024-12-05

Valderrama-Landeros L, Troche-Souza C, Alcántara-Maya JA, et al (2024)

An assessment of mangrove forest in northwestern Mexico using the Google Earth Engine cloud computing platform.

PloS one, 19(12):e0315181.

Mangrove forests are commonly mapped using spaceborne remote sensing data due to the challenges of field endeavors in such harsh environments. However, these methods usually require a substantial level of manual processing for each image. Hence, conservation practitioners prioritize using cloud computing platforms to obtain accurate canopy classifications of large extensions of mangrove forests. The objective of this study was to analyze the spatial distribution and rate of change (area gain and loss) of the red mangrove (Rhizophora mangle) and other dominant mangrove species, mainly Avicennia germinans and Laguncularia racemosa, between 2015 and 2020 throughout the northwestern coast of Mexico. Bimonthly data of the Combined Mangrove Recognition Index (CMRI) from all available Sentinel-2 data were processed with the Google Earth Engine cloud computing platform. The results indicated an extent of 42865 ha of red mangrove and 139602 ha of other dominant mangrove species in the Gulf of California and the Pacific northwestern coast of Mexico for 2020. The mangrove extent experienced a notable decline of 1817 ha from 2015 to 2020, largely attributed to the expansion of aquaculture ponds and the destructive effects of hurricanes. Considering the two mangrove classes, the overall classification accuracies were 90% and 92% for the 2015 and 2020 maps, respectively. The advantages of the method compared to supervised classifications and traditional vegetation indices are discussed, as are the disadvantages concerning the spatial resolution and the minimum detection area. The work is a national effort to assist in decision-making to prioritize resource allocations for blue carbon, rehabilitation, and climate change mitigation programs.
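
For orientation, the Combined Mangrove Recognition Index is commonly formed by combining NDVI and NDWI, often as their difference. The numpy sketch below assumes that formulation and uses hypothetical Sentinel-2 reflectance arrays; it is not the study's Google Earth Engine implementation, and the exact CMRI definition used by the authors should be taken from the paper itself.

```python
import numpy as np

def norm_diff(a, b):
    return (a - b) / (a + b + 1e-9)

def cmri(red, green, nir):
    """Combined Mangrove Recognition Index, assumed here as NDVI - NDWI."""
    ndvi = norm_diff(nir, red)     # vegetation signal
    ndwi = norm_diff(green, nir)   # water signal
    return ndvi - ndwi

# Hypothetical Sentinel-2 reflectance chips (B4 = red, B3 = green, B8 = NIR).
red   = np.array([[0.05, 0.20], [0.06, 0.21]])
green = np.array([[0.07, 0.18], [0.08, 0.19]])
nir   = np.array([[0.40, 0.22], [0.42, 0.23]])
print(np.round(cmri(red, green, nir), 2))   # high values suggest mangrove canopy
```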

RevDate: 2024-12-07
CmpDate: 2024-12-05

Khan M, Chao W, Rahim M, et al (2024)

Enhancing green supplier selection: A nonlinear programming method with TOPSIS in cubic Pythagorean fuzzy contexts.

PloS one, 19(12):e0310956.

The advancements in information and communication technologies have given rise to innovative developments such as cloud computing, the Internet of Things, big data analytics, and artificial intelligence. These technologies have been integrated into production systems, transforming them into intelligent systems and significantly impacting the supplier selection process. In recent years, the integration of these cutting-edge technologies with traditional and environmentally conscious criteria has gained considerable attention in supplier selection. This paper introduces a novel Nonlinear Programming (NLP) approach that utilizes the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method to identify the most suitable green supplier within cubic Pythagorean fuzzy (CPF) environments. Unlike existing methods that use either interval-valued PFS (IVPFS) or Pythagorean fuzzy sets (PFS) to represent information, our approach employs cubic Pythagorean fuzzy sets (CPFS), effectively addressing both IVPFS and PFS simultaneously. The proposed NLP models leverage interval weights, relative closeness coefficients (RCC), and weighted distance measurements to tackle complex decision-making problems. To illustrate the accuracy and effectiveness of the proposed selection methodology, we present a real-world case study related to green supplier selection.
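
The classical (crisp) TOPSIS ranking that underlies the proposed method can be summarised in a few lines of numpy; the cubic Pythagorean fuzzy extension and the nonlinear programming models in the paper go well beyond this. The decision matrix, weights, and criteria directions below are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: (alternatives x criteria); benefit[j] is True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)           # vector normalisation
    v = norm * weights                                        # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                            # relative closeness

# Hypothetical suppliers scored on cost, quality, and green capability.
scores  = np.array([[0.7, 0.8, 0.6], [0.5, 0.9, 0.8], [0.9, 0.6, 0.7]])
weights = np.array([0.3, 0.4, 0.3])
benefit = np.array([False, True, True])    # cost is a cost-type criterion
print(topsis(scores, weights, benefit))    # higher closeness = better supplier
```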

RevDate: 2025-01-10
CmpDate: 2025-01-10

Corrêa Veríssimo G, Salgado Ferreira R, V Gonçalves Maltarollo (2025)

Ultra-Large Virtual Screening: Definition, Recent Advances, and Challenges in Drug Design.

Molecular informatics, 44(1):e202400305.

Virtual screening (VS) in drug design employs computational methodologies to systematically rank molecules from a virtual compound library based on predicted features related to their biological activities or chemical properties. The recent expansion in commercially accessible compound libraries and the advancements in artificial intelligence (AI) and computational power - including enhanced central processing units (CPUs), graphics processing units (GPUs), high-performance computing (HPC), and cloud computing - have significantly expanded our capacity to screen libraries containing over 10[9] molecules. Herein, we review the concept of ultra-large virtual screening (ULVS), focusing on the various algorithms and methodologies employed for virtual screening at this scale. In this context, we present the software utilized, applications, and results of different approaches, such as brute force docking, reaction-based docking approaches, machine learning (ML) strategies applied to docking or other VS methods, and similarity/pharmacophore search-based techniques. These examples represent a paradigm shift in the drug discovery process, demonstrating not only the feasibility of billion-scale compound screening but also their potential to identify hit candidates and increase the structural diversity of novel compounds with biological activities.

RevDate: 2024-12-07
CmpDate: 2024-12-05

Prasad VK, Verma A, Bhattacharya P, et al (2024)

Revolutionizing healthcare: a comparative insight into deep learning's role in medical imaging.

Scientific reports, 14(1):30273.

Recently, Deep Learning (DL) models have shown promising accuracy in the analysis of medical images. Alzheimer's Disease (AD), a prevalent form of dementia, is commonly assessed using Magnetic Resonance Imaging (MRI) scans, which are then analysed via DL models. To address the models' computational constraints, Cloud Computing (CC) is integrated to operate with the DL models. Recent articles on DL-based MRI have not discussed datasets specific to different diseases, which makes it difficult to build disease-specific DL models. Thus, the article systematically explores a tutorial approach, where we first discuss a classification taxonomy of medical imaging datasets. Next, we present a case study on AD MRI classification using DL methods. We analyse three distinct models - Convolutional Neural Networks (CNN), Visual Geometry Group 16 (VGG-16), and an ensemble approach - for classification and predictive outcomes. In addition, we designed a novel framework that offers insight into how various layers interact with the dataset. Our architecture comprises an input layer, a cloud-based layer responsible for preprocessing and model execution, and a diagnostic layer that issues alerts after successful classification and prediction. According to our simulations, CNN outperformed the other models with a test accuracy of 99.285%, followed by VGG-16 with 85.113%, while the ensemble model lagged with a disappointing test accuracy of 79.192%. Our cloud computing framework serves as an efficient mechanism for medical image processing while safeguarding patient confidentiality and data privacy.

RevDate: 2024-12-05

Tang H, Kong L, Fang Z, et al (2024)

Sustainable and smart rail transit based on advanced self-powered sensing technology.

iScience, 27(12):111306.

As rail transit continues to develop, expanding railway networks increase the demand for sustainable energy supply and intelligent infrastructure management. In recent years, advanced rail self-powered technology has rapidly progressed toward artificial intelligence and the internet of things (AIoT). This review primarily discusses the self-powered and self-sensing systems in rail transit, analyzing their current characteristics and innovative potentials in different scenarios. Based on this analysis, we further explore an IoT framework supported by sustainable self-powered sensing systems including device nodes, network communication, and platform deployment. Additionally, cloud computing and edge computing technologies deployed in the railway IoT enable more effective resource utilization. The deployed intelligent algorithms such as machine learning (ML) and deep learning (DL) can provide comprehensive monitoring, management, and maintenance in railway environments. Furthermore, this study explores research in other cross-disciplinary fields to investigate the potential of emerging technologies and analyze the trends for future development in rail transit.

RevDate: 2024-12-05
CmpDate: 2024-12-03

Asim Shahid M, Alam MM, M Mohd Su'ud (2024)

A fact based analysis of decision trees for improving reliability in cloud computing.

PloS one, 19(12):e0311089.

The popularity of cloud computing (CC) has increased significantly in recent years due to its cost-effectiveness and simplified resource allocation. Owing to the exponential rise of cloud computing in the past decade, many corporations and businesses have moved to the cloud to ensure accessibility, scalability, and transparency. The proposed research involves comparing the accuracy and fault prediction of five machine learning algorithms: AdaBoostM1, Bagging, Decision Tree (J48), Deep Learning (Dl4jMLP), and Naive Bayes Tree (NB Tree). The results from secondary data analysis indicate that the Central Processing Unit CPU-Mem Multi classifier has the highest accuracy percentage and the least amount of fault prediction. This holds for the Decision Tree (J48) classifier, with an accuracy rate of 89.71% for 80/20, 90.28% for 70/30, and 92.82% for 10-fold cross-validation. Additionally, the Hard Disk Drive HDD-Mono classifier has an accuracy rate of 90.35% for 80/20, 92.35% for 70/30, and 90.49% for 10-fold cross-validation. The AdaBoostM1 classifier was found to have the highest accuracy percentage and the least amount of fault prediction for the HDD Multi classifier, with an accuracy rate of 93.63% for 80/20, 90.09% for 70/30, and 88.92% for 10-fold cross-validation. Finally, the CPU-Mem Mono classifier has an accuracy rate of 77.87% for 80/20, 77.01% for 70/30, and 77.06% for 10-fold cross-validation. Based on the primary data results, the Naive Bayes Tree (NB Tree) classifier is found to have the highest accuracy rate with less fault prediction: 97.05% for 80/20, 96.09% for 70/30, and 96.78% for 10-fold cross-validation. However, its algorithm complexity is not good, taking 1.01 seconds. On the other hand, the Decision Tree (J48) has the second-highest accuracy rates of 96.78%, 95.95%, and 96.78% for 80/20, 70/30, and 10-fold cross-validation, respectively. J48 also has less fault prediction, with a good algorithm complexity of 0.11 seconds. The difference in accuracy and fault prediction between NB Tree and J48 is only 0.9%, but the difference in time complexity is 0.9 seconds. Based on the results, we have decided to make modifications to the Decision Tree (J48) algorithm. This method has been proposed as it offers the highest accuracy and fewer fault prediction errors, with 97.05% accuracy for the 80/20 split, 96.42% for the 70/30 split, and 97.07% for the 10-fold cross-validation.
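
The evaluation protocol described above (train/test splits plus 10-fold cross-validation of a decision tree) translates directly to scikit-learn, although the study itself used Weka-style classifiers (J48, NB Tree, etc.) on cloud fault data. The sketch below uses a bundled toy dataset purely to show the protocol, not the study's data or results.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for the cloud fault dataset

# 80/20 hold-out split, mirroring one of the evaluation settings above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("80/20 accuracy:", round(tree.score(X_te, y_te), 3))

# 10-fold cross-validation on the full dataset.
cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print("10-fold mean accuracy:", round(cv.mean(), 3))
```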

RevDate: 2024-12-07
CmpDate: 2024-12-03

Hegde A, Vijaysenan D, Mandava P, et al (2024)

The use of cloud based machine learning to predict outcome in intracerebral haemorrhage without explicit programming expertise.

Neurosurgical review, 47(1):883.

Machine Learning (ML) techniques require novel computer programming skills along with clinical domain knowledge to produce a useful model. We demonstrate the use of a cloud-based ML tool that does not require any programming expertise to develop, validate and deploy a prognostic model for Intracerebral Haemorrhage (ICH). The data of patients admitted with spontaneous intracerebral haemorrhage from January 2015 to December 2019 were accessed from our prospectively maintained hospital stroke registry. 80% of the dataset was used for training, 10% for validation, and 10% for testing. Seventeen input variables were used to predict the dichotomized outcomes (good outcome mRS 0-3/bad outcome mRS 4-6), using machine learning (ML) and logistic regression (LR) models. The two approaches were evaluated using the Area Under the Curve (AUC) for the Receiver Operating Characteristic (ROC), precision-recall, and accuracy. Our data set comprised a cohort of 1000 patients, split 8:1:1 for training, validation, and testing, respectively. The AUC ROC of the ML model was 0.86 with an accuracy of 75.7%. With LR, the AUC ROC was 0.74 with an accuracy of 73.8%. The feature importance chart showed that the Glasgow Coma Score (GCS) at presentation had the highest relative importance, followed by hematoma volume and age in both approaches. Machine learning models perform better when compared to logistic regression. Models can be developed by clinicians possessing domain expertise and no programming experience using cloud-based tools. The models so developed lend themselves to incorporation into the clinical workflow.

RevDate: 2024-12-05

Bhakhar R, RS Chhillar (2024)

Dynamic multi-criteria scheduling algorithm for smart home tasks in fog-cloud IoT systems.

Scientific reports, 14(1):29957.

The proliferation of Internet of Things (IoT) devices in smart homes has created a demand for efficient computational task management across complex networks. This paper introduces the Dynamic Multi-Criteria Scheduling (DMCS) algorithm, designed to enhance task scheduling in fog-cloud computing environments for smart home applications. DMCS dynamically allocates tasks based on criteria such as computational complexity, urgency, and data size, ensuring that time-sensitive tasks are processed swiftly on fog nodes while resource-intensive computations are handled by cloud data centers. The implementation of DMCS demonstrates significant improvements over conventional scheduling algorithms, reducing makespan, operational costs, and energy consumption. By effectively balancing immediate and delayed task execution, DMCS enhances system responsiveness and overall computational efficiency in smart home environments. However, DMCS also faces limitations, including computational overhead and scalability issues in larger networks. Future research will focus on integrating advanced machine learning algorithms to refine task classification, enhancing security measures, and expanding the framework's applicability to various computing environments. Ultimately, DMCS aims to provide a robust and adaptive scheduling solution capable of meeting the complex requirements of modern IoT ecosystems and improving the efficiency of smart homes.
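
A minimal illustrative sketch of the multi-criteria placement idea described above: route urgent, lightweight tasks to nearby fog nodes and heavy ones to the cloud. The criteria, thresholds, and scoring below are assumptions for illustration, not the authors' DMCS algorithm.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    complexity: float    # abstract compute units
    urgency: float       # 0 (can wait) .. 1 (time-critical)
    data_size_mb: float

def place_task(task: Task) -> str:
    """Send urgent, lightweight tasks to a fog node; heavy or deferrable tasks to the cloud."""
    if task.urgency >= 0.7 and task.complexity <= 50 and task.data_size_mb <= 20:
        return "fog"
    return "cloud"

tasks = [
    Task("smoke-alarm-event", complexity=5, urgency=0.95, data_size_mb=0.1),
    Task("weekly-energy-report", complexity=400, urgency=0.1, data_size_mb=250),
    Task("camera-motion-clip", complexity=40, urgency=0.8, data_size_mb=15),
]
for t in tasks:
    print(f"{t.name:22s} -> {place_task(t)}")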

RevDate: 2024-12-02
CmpDate: 2024-12-02

Wang H, Kong X, Phewnil O, et al (2024)

Spatiotemporal prediction of alpine wetlands under multi-climate scenarios in the west of Sichuan, China.

PeerJ, 12:e18586.

BACKGROUND: The alpine wetlands in western Sichuan are distributed along the eastern section of the Qinghai-Tibet Plateau (QTP), where the ecological environment is fragile and highly sensitive to global climate change. These wetlands are already experiencing severe ecological and environmental issues, such as drought, retrogressive succession, and desertification. However, due to the limitations of computational models, previous studies have been unable to adequately understand the spatiotemporal change trends of these alpine wetlands.

METHODS: We employed a large sample and composite supervised classification algorithms to classify alpine wetlands and generate wetland maps, based on the Google Earth Engine cloud computing platform. The thematic maps were then grid-sampled for predictive modeling of future wetland changes. Four species distribution models (SDMs), BIOCLIM, DOMAIN, MAXENT, and GARP were innovatively introduced. Using the WorldClim dataset as environmental variables, we predicted the future distribution of wetlands in western Sichuan under multiple climate scenarios.

RESULTS: The Kappa coefficients for Landsat 8 and Sentinel 2 were 0.89 and 0.91, respectively. Among the four SDMs, MAXENT achieved a higher accuracy (α = 91.6%) for the actual wetland compared to the thematic overlay analysis. The area under the curve (AUC) values of the MAXENT model simulations for wetland spatial distribution were all greater than 0.80. This suggests that incorporating the SDM into land change simulations has high generalizability and significant advantages at large scales. Furthermore, the simulation results reveal that, between 2021 and 2100, with increasing emission concentrations, highly suitable areas for wetland development exhibit significant spatial differentiation. In particular, wetland areas in high-altitude regions are expected to increase, while those in low-altitude regions will shrink markedly. The changes in the future spatial distribution of wetlands show a high level of consistency with historical climate changes, with warming being the main driving force behind the spatiotemporal changes in alpine wetlands in western Sichuan, especially evident in the central high-altitude and northern low-altitude areas.
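
A hedged sketch of a GEE supervised wetland classification in the spirit of the METHODS above, using the Earth Engine Python API. The region bounds, training asset, band list, and class property are placeholders, not the study's actual inputs.

import ee

ee.Initialize()

region = ee.Geometry.Rectangle([99.0, 30.0, 103.0, 34.0])  # hypothetical western-Sichuan bounds
image = (ee.ImageCollection("COPERNICUS/S2_SR")
         .filterBounds(region)
         .filterDate("2021-06-01", "2021-09-30")
         .median()
         .select(["B2", "B3", "B4", "B8"]))

# Training samples would come from visual interpretation; here a placeholder FeatureCollection.
samples = ee.FeatureCollection("users/example/wetland_training_points")  # hypothetical asset
training = image.sampleRegions(collection=samples, properties=["class"], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=training, classProperty="class", inputProperties=image.bandNames())
wetland_map = image.classify(classifier).clip(region)

task = ee.batch.Export.image.toDrive(wetland_map, description="wetland_map", scale=10, region=region)
# task.start()  # uncomment to run the export once real training samples are supplied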

RevDate: 2024-12-03
CmpDate: 2024-11-30

Wu SH, TA Mueller (2024)

A user-friendly NoSQL framework for managing agricultural field trial data.

Scientific reports, 14(1):29819.

Field trials are one of the essential stages in agricultural product development, enabling the validation of products in real-world environments rather than controlled laboratory or greenhouse settings. With advances in technology, field trials often collect a large amount of information, with diverse data types from various sources. Managing and organizing extensive datasets can impose challenges on small research teams, especially when data collection processes evolve constantly, involve multiple collaborators, and introduce new data types between studies. A practical database needs to incorporate all these changes seamlessly. We present DynamoField, a flexible database framework for collecting and analyzing field trial data. The backend database for DynamoField is powered by Amazon Web Services DynamoDB, a NoSQL database, and DynamoField also provides a front-end interactive web interface. With the flexibility of the NoSQL database, researchers can modify the database schema based on the data provided by various collaborators and contract research organizations. The framework includes functions for non-technical users, including importing and exporting data, data integration and manipulation, and statistical analysis. Researchers can utilize cloud computing to establish a secure NoSQL database with minimal maintenance, which also enables worldwide collaboration and adaptation to different data-collecting strategies as research progresses. DynamoField is implemented in Python and is publicly available at https://github.com/ComputationalAgronomy/DynamoField.
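
An illustrative sketch of the kind of schema-flexible storage DynamoField builds on: writing heterogeneous field-trial records to a DynamoDB table with boto3. The table name, key names, and record fields are assumptions for illustration only, not DynamoField's actual schema.

from decimal import Decimal
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("field_trials")  # assumed table: partition key "trial_id", sort key "record_id"

# NoSQL flexibility: records from different collaborators can carry different attributes.
table.put_item(Item={
    "trial_id": "2024-corn-ND",
    "record_id": "plot-017-week3",
    "collaborator": "CRO-A",
    "yield_bu_per_acre": Decimal("182.4"),   # DynamoDB numbers must be Decimal, not float
    "notes": "minor hail damage",
})
table.put_item(Item={
    "trial_id": "2024-corn-ND",
    "record_id": "plot-017-week4",
    "drone_ndvi_mean": Decimal("0.71"),      # a new data type added mid-study, no schema migration needed
})

response = table.get_item(Key={"trial_id": "2024-corn-ND", "record_id": "plot-017-week3"})
print(response.get("Item"))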

RevDate: 2024-11-28
CmpDate: 2024-11-28

Hillebrand FL, Prieto JD, Mendes Júnior CW, et al (2024)

Gray Level Co-occurrence Matrix textural analysis for temporal mapping of sea ice in Sentinel-1A SAR images.

Anais da Academia Brasileira de Ciencias, 96(suppl 2):e20240554 pii:S0001-37652024000401106.

Sea ice is a critical component of the cryosphere and plays a role in the heat and moisture exchange processes between the ocean and atmosphere, thus regulating the global climate. With climate change, detailed monitoring of changes occurring in sea ice is necessary. Therefore, an analysis was conducted to evaluate the potential of using the Gray Level Co-occurrence Matrix (GLCM) texture analysis combined with the backscattering coefficient (σ°) of HH polarization in Sentinel-1A Synthetic Aperture Radar (SAR) images, interferometric imaging mode, for mapping sea ice in time series. Data processing was performed using cloud computing on the Google Earth Engine platform with routines written in JavaScript. To train the Random Forest (RF) classifier, samples of regions with open water and sea ice were obtained through visual interpretation of false-color SAR images from Sentinel-1B in the extra-wide swath imaging mode. The analysis demonstrated that training samples used in the RF classifier from a specific date can be applied to images from other dates within the freezing period, achieving accuracies ≥ 90% when using 64-bit grayscale quantization in GLCM combined with σ° data. However, when using only σ° data in the RF classifier, accuracies ≥ 93% were observed.
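
A hedged sketch of GLCM texture features from Sentinel-1 HH backscatter in the Earth Engine Python API, analogous to the JavaScript workflow described above. The date window, rescaling to roughly 64 gray levels, and the window size are illustrative assumptions, not the study's parameters.

import ee

ee.Initialize()

s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterDate("2021-07-01", "2021-07-15")
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "HH"))
      .first()
      .select("HH"))

# glcmTexture expects an integer image, so rescale sigma0 (in dB) to a gray-level range first.
gray = s1.unitScale(-30, 0).multiply(63).toInt()   # ~64 gray levels
texture = gray.glcmTexture(size=3)                 # contrast, entropy, correlation, ...
stack = s1.addBands(texture)

# A Random Forest classifier would then be trained on open-water / sea-ice samples, e.g.:
# rf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(training, "class", stack.bandNames())
print(stack.bandNames().getInfo())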

RevDate: 2024-12-09

Ricotta EE, Bents S, Lawler B, et al (2024)

Search interest in alleged COVID-19 treatments over the pandemic period: the impact of mass news media.

medRxiv : the preprint server for health sciences.

BACKGROUND: Understanding how individuals obtain medical information, especially amid changing guidance, is important for improving outreach and communication strategies. In particular, during a public health emergency, interest in unsafe or illegitimate medications can delay access to appropriate treatments and foster mistrust in the medical system, which can be detrimental at both individual and population levels. It is thus key to understand factors associated with said interest.

METHODS: We obtained US-based Google Search Trends and Media Cloud data from 2019-2022 to assess the relationship between Internet search interest and media coverage of three purported COVID-19 treatments: hydroxychloroquine, ivermectin, and remdesivir. We first conducted anomaly detection in the treatment-specific search interest data to detect periods of interest above the pre-pandemic baseline; we then used multilevel negative binomial regression, controlling for political leaning, rurality, and social vulnerability, to test for associations between treatment-specific search interest and media coverage.

FINDINGS: We observed that interest in hydroxychloroquine and remdesivir peaked early in 2020 and then subsided, while peak interest in ivermectin occurred later but was more sustained. We detected significant associations between media coverage and search interest for all three treatments. The strongest association was observed for ivermectin, in which a single standard deviation increase in media coverage was associated with more than double the search interest (164%, 95% CI: 148, 180), compared to a 109% increase (95% CI: 101, 118) for hydroxychloroquine and a 49% increase (95% CI: 43, 55) for remdesivir.

INTERPRETATION: Search interest in purported COVID-19 treatments was significantly associated with contemporaneous media coverage, with the highest impact on interest in ivermectin, a treatment demonstrated to be ineffectual for treating COVID-19 and potentially dangerous if used inappropriately.

FUNDING: This work was funded in part by the US National Institutes of Health and the US National Science Foundation.
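
An illustrative sketch (not the authors' model) of the kind of negative binomial regression described in the METHODS above, relating search interest to standardized media coverage while adjusting for covariates, using statsmodels. The column names and synthetic data are assumptions.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "media_coverage_z": rng.normal(size=n),   # standardized media-coverage volume
    "political_leaning": rng.normal(size=n),
    "rurality": rng.normal(size=n),
    "social_vulnerability": rng.normal(size=n),
})
rate = np.exp(1.0 + 0.9 * df["media_coverage_z"])
df["search_interest"] = rng.poisson(rate)     # count-like outcome for the sketch

model = smf.glm(
    "search_interest ~ media_coverage_z + political_leaning + rurality + social_vulnerability",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
# exp(beta) for a 1-SD increase in coverage, analogous to the percent changes reported above
print(np.exp(model.params["media_coverage_z"]))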

RevDate: 2024-11-28

G C S, Koparan C, Upadhyay A, et al (2024)

A novel automated cloud-based image datasets for high throughput phenotyping in weed classification.

Data in brief, 57:111097.

Deep learning-based weed detection data management involves data acquisition, data labeling, model development, and model evaluation phases. Of these, data acquisition and data labeling are labor-intensive and time-consuming steps for building robust models. In addition, low temporal variation of crop and weed in the datasets is one of the limiting factors for effective weed detection model development. This article describes the cloud-based automatic data acquisition system (CADAS), which captures weed and crop images at fixed time intervals so that plant growth stages are taken into account for weed identification. The CADAS was developed by integrating fifteen digital cameras in the visible spectrum with gphoto2 libraries, external storage, cloud storage, and a computer running the Linux operating system. The dataset from the CADAS contains six weed species and eight crop species for weed and crop detection. A dataset of 2000 images per weed and crop species was publicly released. Raw RGB images underwent a cropping process guided by bounding box annotations to generate individual JPG images for crop and weed instances. In addition to the cropped images, 200 raw images with label files were released publicly. This dataset holds potential for investigating challenges in deep learning-based weed and crop detection in agricultural settings. Additionally, these data could be used by researchers along with field data to boost model performance by reducing the data imbalance problem.
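
A minimal sketch of the cropping step described above: cutting per-plant JPGs out of raw RGB images using bounding-box annotations. The file layout and annotation format are assumptions (here, one CSV row per box: filename,label,xmin,ymin,xmax,ymax), not the dataset's actual structure.

import csv
from pathlib import Path
from PIL import Image

raw_dir = Path("raw_images")          # hypothetical input directory
out_dir = Path("cropped")
out_dir.mkdir(exist_ok=True)

with open("annotations.csv", newline="") as fh:
    for i, row in enumerate(csv.DictReader(fh)):
        image = Image.open(raw_dir / row["filename"])
        box = (int(row["xmin"]), int(row["ymin"]), int(row["xmax"]), int(row["ymax"]))
        crop = image.crop(box)        # (left, upper, right, lower)
        crop.save(out_dir / f"{row['label']}_{i:05d}.jpg", quality=95)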

RevDate: 2025-01-04

Geng J, Voitiuk K, Parks DF, et al (2024)

Multiscale Cloud-Based Pipeline for Neuronal Electrophysiology Analysis and Visualization.

bioRxiv : the preprint server for biology.

Electrophysiology offers a high-resolution method for real-time measurement of neural activity. Longitudinal recordings from high-density microelectrode arrays (HD-MEAs) can be of considerable size for local storage and of substantial complexity for extracting neural features and network dynamics. Analysis is often demanding due to the need for multiple software tools with different runtime dependencies. To address these challenges, we developed an open-source cloud-based pipeline to store, analyze, and visualize neuronal electrophysiology recordings from HD-MEAs. This pipeline is dependency agnostic by utilizing cloud storage, cloud computing resources, and an Internet of Things messaging protocol. We containerized the services and algorithms to serve as scalable and flexible building blocks within the pipeline. In this paper, we applied this pipeline to recordings from two types of cultures, cortical organoids and ex vivo brain slices, to show that it simplifies the data analysis process and facilitates understanding of neuronal activity.
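
A hedged sketch of the "IoT messaging" idea in the pipeline above: publishing recording metadata to a cloud broker over MQTT with paho-mqtt. The broker address, topic, and payload fields are illustrative assumptions, not the pipeline's actual configuration.

import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.org", 1883)   # hypothetical broker
client.loop_start()

payload = {
    "device_id": "hdmea-07",
    "recording": "organoid_day42_chunk003",
    "timestamp": time.time(),
    "status": "uploaded_to_object_store",
}
client.publish("ephys/pipeline/events", json.dumps(payload), qos=1)

client.loop_stop()
client.disconnect()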

RevDate: 2024-12-07

Papudeshi B, Roach MJ, Mallawaarachchi V, et al (2024)

Sphae: An automated toolkit for predicting phage therapy candidates from sequencing data.

bioRxiv : the preprint server for biology.

MOTIVATION: Phage therapy is a viable alternative for treating bacterial infections amidst the escalating threat of antimicrobial resistance. However, the therapeutic success of phage therapy depends on selecting safe and effective phage candidates. While experimental methods focus on isolating phages and determining their lifecycle and host range, comprehensive genomic screening is critical to identify markers that indicate potential risks, such as toxins, antimicrobial resistance, or temperate lifecycle traits. These analyses are often labor-intensive and time-consuming, limiting the rapid deployment of phage in clinical settings.

RESULTS: We developed Sphae, an automated bioinformatics pipeline designed to streamline the assessment of a phage's therapeutic potential in under ten minutes. Using the Snakemake workflow manager, Sphae integrates tools for quality control, assembly, genome assessment, and annotation tailored specifically for phage biology. Sphae automates the detection of key genomic markers, including virulence factors, antimicrobial resistance genes, and lysogeny indicators such as integrase, recombinase, and transposase, which could preclude therapeutic use. Benchmarked on 65 phage sequences, 28 phage samples showed therapeutic potential, 8 failed during assembly due to low sequencing depth, 22 samples included prophage or virulence markers, and the remaining 23 samples included multiple phage genomes per sample. This workflow outputs a comprehensive report, enabling rapid assessment of phage safety and suitability for phage therapy under these criteria. Sphae is scalable and portable, facilitating efficient deployment across most high-performance computing (HPC) and cloud platforms and expediting the genomic evaluation process.

AVAILABILITY: Sphae is open source and freely available at https://github.com/linsalrob/sphae, with installation supported via Conda, PyPI, and Docker containers.

RevDate: 2024-12-11
CmpDate: 2024-11-28

Hasan R, Kapoor A, Singh R, et al (2024)

A state-of-the-art review on the quantitative and qualitative assessment of water resources using google earth engine.

Environmental monitoring and assessment, 196(12):1266.

Water resource management is becoming essential due to many anthropogenic and climatic factors resulting in dwindling water resources. Traditionally, geographic information systems (GIS) and remote sensing (RS) have been instrumental in water resource assessment and management, as satellites or airborne units are periodically utilized to collect data over large areal extents. However, these platforms have limited computational capability and localized storage systems. Recently, these limitations have been overcome by the application of Google Earth Engine (GEE), which offers a faster and more reliable cloud-based GIS and remote sensing platform that leverages parallel processing capabilities. Thereby, in recent years, GEE has witnessed rapid adoption in a wide variety of domains, including water resource monitoring, assessment, and management. However, no systematic studies have reviewed the application of GEE in water resource management. This review article is a maiden attempt to develop an understanding of the functioning of GEE and its application in water resource assessment, covering both of its aspects, namely (a) water quantity and (b) water quality. The review further illustrates its capabilities in real-world use through a case study conducted to analyze the water quality and quantity of Lake Mead, the reservoir of the Hoover Dam, Nevada (USA), at a monthly scale for a 3-year period spanning 2021 to 2023. The results of this case study showcase the applicability of GEE to water resource quantity and quality monitoring, assessment, and management problems. The review further discusses the existing challenges with the application of GEE in water resource assessment and the scope for further improvement. In conclusion, once these challenges are addressed, the application of GEE has huge potential to support the management and planning of water resources in the face of forthcoming challenges.
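
A hedged sketch of a monthly surface-water-area time series for a reservoir in the Earth Engine Python API, along the lines of the Lake Mead case study. The geometry, date handling, and NDWI threshold are illustrative assumptions, not the review's workflow.

import ee

ee.Initialize()

lake = ee.Geometry.Rectangle([-114.9, 35.9, -114.2, 36.6])  # rough box around Lake Mead

def monthly_water_area(year, month):
    img = (ee.ImageCollection("COPERNICUS/S2_SR")
           .filterBounds(lake)
           .filter(ee.Filter.calendarRange(year, year, "year"))
           .filter(ee.Filter.calendarRange(month, month, "month"))
           .median())
    ndwi = img.normalizedDifference(["B3", "B8"])    # green vs NIR
    water = ndwi.gt(0.1)                             # simple threshold for open water
    area = (ee.Image.pixelArea().updateMask(water)
            .reduceRegion(reducer=ee.Reducer.sum(), geometry=lake, scale=30, maxPixels=1e10))
    return area.get("area")

print(monthly_water_area(2022, 6).getInfo())  # water area in square metres for June 2022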

RevDate: 2024-11-30

Jia L, Sun B, Tan W, et al (2024)

Special Issue: Artificial Intelligence and Smart Sensor-Based Industrial Advanced Technology.

Sensors (Basel, Switzerland), 24(22):.

With the rapid growth of smart sensors and industrial data, artificial intelligence (AI) technology (such as machine learning, machine vision, multi-sensor fusion, cloud computing, edge computing, digital twins, etc [...].

RevDate: 2024-11-30

Yang Z, Wang M, S Xie (2024)

A Comprehensive Framework for Transportation Infrastructure Digitalization: TJYRoad-Net for Enhanced Point Cloud Segmentation.

Sensors (Basel, Switzerland), 24(22):.

This research introduces a cutting-edge approach to traffic infrastructure digitization, integrating UAV oblique photography with LiDAR point clouds for high-precision, lightweight 3D road modeling. The proposed method addresses the challenge of accurately capturing the current state of infrastructure while minimizing redundancy and optimizing computational efficiency. A key innovation is the development of the TJYRoad-Net model, which achieves over 85% mIoU segmentation accuracy by including a traffic feature computing (TFC) module composed of three critical components: the Regional Coordinate Encoder (RCE), the Context-Aware Aggregation Unit (CAU), and the Hierarchical Expansion Block. Comparative analysis segments the point clouds into road and non-road categories, achieving centimeter-level registration accuracy with RANSAC and ICP. Two lightweight surface reconstruction techniques are implemented: (1) algorithmic reconstruction, which delivers a 6.3 mm elevation error at 95% confidence in complex intersections, and (2) template matching, which replaces road markings, poles, and vegetation using bounding boxes. These methods ensure accurate results with minimal memory overhead. The optimized 3D models have been successfully applied in driving simulation and traffic flow analysis, providing a practical and scalable solution for real-world infrastructure modeling and analysis. These applications demonstrate the versatility and efficiency of the proposed methods in modern traffic system simulations.
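
The registration and segmentation steps mentioned above (RANSAC plus ICP) can be sketched with Open3D: a RANSAC plane fit approximates the road surface, and point-to-point ICP registers the photogrammetric cloud to the LiDAR cloud. File names and thresholds are assumptions, not the paper's parameters.

import numpy as np
import open3d as o3d

lidar = o3d.io.read_point_cloud("lidar_road.pcd")        # hypothetical inputs
photo = o3d.io.read_point_cloud("uav_photogrammetry.pcd")

# RANSAC plane segmentation: inliers approximate the road surface, the rest is "non-road".
plane_model, inliers = lidar.segment_plane(distance_threshold=0.05, ransac_n=3, num_iterations=1000)
road = lidar.select_by_index(inliers)
non_road = lidar.select_by_index(inliers, invert=True)

# Point-to-point ICP refines the alignment of the photogrammetric cloud onto the LiDAR cloud.
result = o3d.pipelines.registration.registration_icp(
    photo, lidar, max_correspondence_distance=0.2,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness)
print("transformation:\n", result.transformation)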

RevDate: 2024-11-28

Shefa FR, Sifat FH, Uddin J, et al (2024)

Deep Learning and IoT-Based Ankle-Foot Orthosis for Enhanced Gait Optimization.

Healthcare (Basel, Switzerland), 12(22):.

BACKGROUND/OBJECTIVES: This paper proposes a method for managing gait imbalances by integrating the Internet of Things (IoT) and machine learning technologies. Ankle-foot orthosis (AFO) devices are crucial medical braces that align the lower leg, ankle, and foot, offering essential support for individuals with gait imbalances by assisting weak or paralyzed muscles. This research aims to revolutionize medical orthotics through IoT and machine learning, providing a sophisticated solution for managing gait issues and enhancing patient care with personalized, data-driven insights.

METHODS: The smart ankle-foot orthosis (AFO) is equipped with a surface electromyography (sEMG) sensor to measure muscle activity and an Inertial Measurement Unit (IMU) sensor to monitor gait movements. Data from these sensors are transmitted to the cloud via fog computing for analysis, aiming to identify distinct walking phases, whether normal or aberrant. This involves preprocessing the data and analyzing it using various machine learning methods, such as Random Forest, Decision Tree, Support Vector Machine (SVM), Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and Transformer models.

RESULTS: The Transformer model demonstrates exceptional performance in classifying walking phases based on sensor data, achieving an accuracy of 98.97%. With this preprocessed data, the model can accurately predict and measure improvements in patients' walking patterns, highlighting its effectiveness in distinguishing between normal and aberrant phases during gait analysis.

CONCLUSIONS: These predictive capabilities enable tailored recommendations regarding the duration and intensity of ankle-foot orthosis (AFO) usage based on individual recovery needs. The analysis results are sent to the physician's device for validation and regular monitoring. Upon approval, the comprehensive report is made accessible to the patient, ensuring continuous progress tracking and timely adjustments to the treatment plan.

RevDate: 2024-11-30

Beňo L, Kučera E, Drahoš P, et al (2024)

Transforming industrial automation: voice recognition control via containerized PLC device.

Scientific reports, 14(1):29387.

The article discusses the impact of voice recognition and containerization technologies in the industrial sector, particularly on Programmable Logic Controller (PLC) devices. It highlights how voice assistants like Alexa, Siri, Cortana, and Google Assistant are shaping the future of human-machine interfaces, with applications moving from smart homes to industrial automation. Containerization, illustrated by Docker, is transforming software deployment practices, offering benefits such as enhanced portability, modular architecture, and improved security when applied to industrial PLCs. The article introduces a novel approach to enhancing human-machine interfaces (HMIs) within industrial applications, leveraging voice recognition and containerization technologies on PLCs. Unlike traditional systems, this work integrates a voice assistant with industrial PLCs through a containerized IoT architecture. This framework enables efficient deployment on edge devices, supporting modular, portable, and secure operations aligned with the Industry 4.0 and 5.0 paradigms. The study further includes a detailed implementation on microcontrollers and industrial PLCs, validating its application in a controlled laboratory environment and on a virtual model.

RevDate: 2024-11-24
CmpDate: 2024-11-22

K Karim F, Ghorashi S, Alkhalaf S, et al (2024)

Optimizing makespan and resource utilization in cloud computing environment via evolutionary scheduling approach.

PloS one, 19(11):e0311814.

As a new platform for distributing computing resources, cloud technology has greatly influenced society through the concept of on-demand resource usage enabled by virtualization. Virtualization allows physical resources to be used in a way that enables multiple end users to share similar hardware infrastructure. In the cloud, many challenges exist on the provider side due to the expectations of clients. Resource scheduling (RS) is the most significant nondeterministic polynomial time (NP)-hard problem in the cloud, owing to its crucial impact on cloud performance. Previous research found that metaheuristics can dramatically increase cloud computing (CC) performance if deployed as scheduling algorithms. Therefore, this study develops an evolutionary algorithm-based scheduling approach for makespan optimization and resource utilization (EASA-MORU) in the cloud environment. The EASA-MORU technique aims to optimize the makespan and effectively use the resources in the cloud infrastructure. In the EASA-MORU technique, the dung beetle optimization (DBO) algorithm is used for scheduling. Moreover, the EASA-MORU technique balances the load properly and distributes the resources based on the demands of the cloud infrastructure. The performance of the EASA-MORU method is evaluated using a series of performance measures. A wide range of comprehensive comparison studies emphasizes that the EASA-MORU technique performs better than other methods across different evaluation measures.

RevDate: 2024-11-22

Goh C, Puah M, Toh ZH, et al (2024)

Mobile Apps and Visual Function Assessment: A Comprehensive Review of the Latest Advancements.

Ophthalmology and therapy [Epub ahead of print].

INTRODUCTION: With technological advancements and the growing prevalence of smartphones, ophthalmology has opportunely harnessed medical technology for visual function assessment as a home monitoring tool for patients. Ophthalmology applications that offer these have likewise become more readily available in recent years, which may be used for early detection and monitoring of eye conditions. To date, no review has been done to evaluate and compare the utility of these apps. This review provides an updated overview of visual functions assessment using mobile applications available on the Apple App and Google Play Stores, enabling eye care professionals to make informed selections of their use in ophthalmology.

METHODS: We reviewed 160 visual function applications available on the Apple App Store and Google Play Store. The parameters surveyed included types of visual function tests, the involvement of healthcare professionals in their development, cost, and download count.

RESULTS: Visual tests, including visual acuity and color vision tests, were most common among apps surveyed, and they were comparable to traditional clinical methods. Certain applications were more widely used, some of which have had studies conducted to assess the reliability of test results. Limitations of these apps include the absence of healthcare professionals' involvement in their development, the lack of approval by regulatory authorities and minimal cloud-based features to communicate results to healthcare professionals.

CONCLUSIONS: The prevalence and easy access of visual function testing applications present opportunities to enhance teleophthalmology through early detection and monitoring of eye conditions. Future development to enhance the quality of the apps should involve regulatory bodies and medical professionals, followed up by research using larger samples with longer follow-up studies to review the reliability and validity of ophthalmology applications. This would potentially enable these applications to be incorporated into the comprehensive assessment and follow-up care of patients' eye health.

RevDate: 2024-11-22

Cao M, Ramezani R, Katakwar VK, et al (2024)

Developing remote patient monitoring infrastructure using commercially available cloud platforms.

Frontiers in digital health, 6:1399461.

Wearable sensor devices for continuous patient monitoring produce a large volume of data, necessitating scalable infrastructures for efficient data processing, management and security, especially concerning Patient Health Information (PHI). Adherence to the Health Insurance Portability and Accountability Act (HIPAA), a legislation that mandates developers and healthcare providers to uphold a set of standards for safeguarding patients' health information and privacy, further complicates the development of remote patient monitoring within healthcare ecosystems. This paper presents an Internet of Things (IoT) architecture designed for the healthcare sector, utilizing commercial cloud platforms like Microsoft Azure and Amazon Web Services (AWS) to develop HIPAA-compliant health monitoring systems. By leveraging cloud functionalities such as scalability, security, and load balancing, the architecture simplifies the creation of infrastructures adhering to HIPAA standards. The study includes a cost analysis of Azure and AWS infrastructures and evaluates data processing speeds and database query latencies, offering insights into their performance for healthcare applications.

RevDate: 2024-11-21

Huang W, Liu X, Tian L, et al (2024)

Vegetation and carbon sink response to water level changes in a seasonal lake wetland.

Frontiers in plant science, 15:1445906.

Water level fluctuations are among the main factors affecting the development of wetland vegetation communities, carbon sinks, and ecological processes. Hongze Lake is a typical seasonal lake wetland in the Huaihe River Basin. Its water levels have experienced substantial fluctuations because of climate change, as well as gate and dam regulation. In this study, long-term cloud-free remote sensing estimates of water body area, net primary productivity (NPP), gross primary productivity (GPP), and fractional vegetation cover (FVC) of the wetlands of Hongze Lake were obtained from multiple satellites via Google Earth Engine (GEE) for 2006 to 2023. The trends in FVC were analyzed using a combined Theil-Sen estimator and Mann-Kendall (MK) test. Linear regression was employed to analyze the correlation between the area of water bodies and that of different degrees of FVC. Additionally, annual frequencies of various water levels were constructed to explore their association with GPP, NPP, and FVC. The results showed that water level fluctuations significantly influence the spatial and temporal patterns of wetland vegetation cover and carbon sinks, with a significant correlation (P<0.05) between water levels and vegetation distribution. Following extensive restoration efforts, the carbon sink capacity of the Hongze Lake wetland has increased. However, it is essential to consider the carbon sink capacity in areas with low vegetation cover, since the lakeshore zone, with its higher inundation frequency and low vegetation cover, had a lower carbon sink capacity. These findings provide a scientific basis for the establishment of carbon sink enhancement initiatives, restoration programs, and policies to improve the ecological value of wetland ecosystem conservation areas.
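
An illustrative sketch of a Theil-Sen slope plus a Mann-Kendall-style trend test for an annual FVC series, analogous to the trend analysis described above. The data are synthetic, and scipy's kendalltau is used here as a stand-in for a full MK test.

import numpy as np
from scipy.stats import kendalltau, theilslopes

years = np.arange(2006, 2024)
fvc = 0.35 + 0.004 * (years - years[0]) + np.random.default_rng(1).normal(0, 0.01, years.size)

slope, intercept, lo, hi = theilslopes(fvc, years)   # robust trend estimate
tau, p_value = kendalltau(years, fvc)                # monotonic-trend significance

print(f"Sen's slope: {slope:.4f} FVC/yr (95% CI {lo:.4f} to {hi:.4f})")
print(f"Kendall tau: {tau:.3f}, p = {p_value:.4f}")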

RevDate: 2025-01-09
CmpDate: 2024-12-06

Kullo IJ, Conomos MP, Nelson SC, et al (2024)

The PRIMED Consortium: Reducing disparities in polygenic risk assessment.

American journal of human genetics, 111(12):2594-2606.

By improving disease risk prediction, polygenic risk scores (PRSs) could have a significant impact on health promotion and disease prevention. Due to the historical oversampling of populations with European ancestry for genome-wide association studies, PRSs perform less well in other, understudied populations, leading to concerns that clinical use in their current forms could widen health care disparities. The PRIMED Consortium was established to develop methods to improve the performance of PRSs in global populations and individuals of diverse genetic ancestry. To this end, PRIMED is aggregating and harmonizing multiple phenotype and genotype datasets on AnVIL, an interoperable secure cloud-based platform, to perform individual- and summary-level analyses using population and statistical genetics approaches. Study sites, the coordinating center, and representatives from the NIH work alongside other NHGRI and global consortia to achieve these goals. PRIMED is also evaluating ethical and social implications of PRS implementation and investigating the joint modeling of social determinants of health and PRS in computing disease risk. The phenotypes of interest are primarily cardiometabolic diseases and cancer, the leading causes of death and disability worldwide. Early deliverables of the consortium include methods for data sharing on AnVIL, development of a common data model to harmonize phenotype and genotype data from cohort studies as well as electronic health records, adaptation of recent guidelines for population descriptors to global cohorts, and sharing of PRS methods/tools. As a multisite collaboration, PRIMED aims to foster equity in the development and use of polygenic risk assessment.

RevDate: 2024-11-21

ElSayyad SE, Saleh AI, Ali HA, et al (2024)

An effective robot selection and recharge scheduling approach for improving robotic networks performance.

Scientific reports, 14(1):28439.

Mobile robots are becoming more widespread as a form of remote communication and human-robot interaction, with servers able to remotely control and manage them. Controlling these robots, however, can be challenging because of their power consumption, delays, or the difficulty of selecting the right robot for a certain task. This paper introduces a novel methodology for enhancing the efficacy of a mobile robotic network. The two key contributions of our suggested methodology are: (I) a strategy that eliminates unwieldy robots before selecting the ideal robot to satisfy the task, and (II) a procedure that uses a fuzzy algorithm to schedule the robots that need to be recharged. Since multiple robots may need to be recharged at once, this process aims to manage and control the recharging of robots in order to avoid conflicts or crowding. The suggested approach aims to preserve the charging capacity, physical resources (e.g., hardware components), and battery life of the robots by loading the application onto a remote server node instead of individual robots. Furthermore, our solution makes use of fog servers to speed up data transfers between smart devices and the cloud; it also moves processing from remote cloud servers closer to the robots, improving on-site access to location-based services and real-time interaction. Simulation results showed that our method achieved a 2.4% improvement in average accuracy and a 2.2% enhancement in average power usage over the most recent methods in comparable settings.

RevDate: 2024-12-11
CmpDate: 2024-11-18

Kumar A, Singh D, Kumar S, et al (2024)

Sunflower mapping using machine learning algorithm in Google Earth Engine platform.

Environmental monitoring and assessment, 196(12):1208.

The sunflower crop is one of the most prominent sources of vegetable oil globally. It is cultivated all around the world, including Haryana in India. However, its mapping is limited due to the requirement of huge computation power, large data storage capacity, small farm holdings, and an information gap on appropriate algorithms and spectral band combinations. Thus, the current work identifies an appropriate machine learning (ML) algorithm (comparing random forest (RF) and support vector machine (SVM), reported as the best classifiers for land use and land cover) and the best band combinations (among six combinations, including Sentinel optical, Sentinel SAR, and combined optical-SAR data in single-date and time series form) for sunflower crop mapping in the Ambala and Kurukshetra districts of Haryana using the Google Earth Engine (GEE) cloud platform. The GEE cloud-computing system combined with RF and SVM provided sunflower maps with accuracies ranging from 0.0% to 90% across various band and classifier combinations, with the highest accuracy achieved by RF with single-date optical data. The SVM classifier, tuned with parameters such as kernel type, degree, gamma, and cost, provided better overall accuracy for the classification of land use and land cover along with sunflower, ranging from 98.09% to 98.44%, with Kappa coefficients ranging from 0.96 to 0.97 for optical data and for the combination of SAR and optical time series. The platform is efficient and applicable for a larger part of the country to map sunflower and other crops with the currently identified combinations of satellite data and methodology, owing to the availability of satellite images, advanced ML algorithms, and analytical modules on a single platform.

RevDate: 2024-11-18

Wang W, He J, S Yang (2024)

Planning for a cooler metropolitan area: a perspective on the long-term interaction of urban expansion, surface urban heat islands and blue-green spaces' cooling impact.

International journal of biometeorology [Epub ahead of print].

Urbanization is widely acknowledged as a driving force behind the increase in land surface temperature (LST), while blue-green spaces (BGS) are recognized for their cooling effect. However, research on the long-term correlation between the two in highly urbanized areas remains limited. This study aims to fill this research gap by investigating the correlation and changes between urban expansion-induced LST rise and the cooling effect of BGS in the Hangzhou metropolitan area from 2000 to 2020. Our approach combines Geographic Information System (GIS), Remote Sensing (RS), and Google Earth Engine (GEE) cloud platforms, utilizing a random forest land use classification technique in conjunction with the Geographically and temporally weighted regression (GTWR) model. The findings reveal a strong relationship between land expansion and the intensification of the surface urban heat island (SUHI) effect. The spatial heat island effect exhibits an exponential expansion in area, with an interannual LST rise of 0.4 °C. Notably, urban centers exert the highest regional heat contribution, while remote suburbs have the most significant impact on reducing LST. The impact of BGS on LST varies, fluctuating more in areas close to urban centers and less in water-rich areas. This study contributes to a better understanding of the cooling potential of BGS in rapidly urbanizing metropolitan areas, offering valuable insights for sustainable urban planning.

RevDate: 2024-11-30

Herbozo Contreras LF, Truong ND, Eshraghian JK, et al (2024)

Neuromorphic neuromodulation: Towards the next generation of closed-loop neurostimulation.

PNAS nexus, 3(11):pgae488.

Neuromodulation techniques have emerged as promising approaches for treating a wide range of neurological disorders, precisely delivering electrical stimulation to modulate abnormal neuronal activity. While leveraging the unique capabilities of AI holds immense potential for responsive neurostimulation, it appears as an extremely challenging proposition where real-time (low-latency) processing, low-power consumption, and heat constraints are limiting factors. The use of sophisticated AI-driven models for personalized neurostimulation depends on the back-telemetry of data to external systems (e.g. cloud-based medical mesosystems and ecosystems). While this can be a solution, integrating continuous learning within implantable neuromodulation devices for several applications, such as seizure prediction in epilepsy, is an open question. We believe neuromorphic architectures hold an outstanding potential to open new avenues for sophisticated on-chip analysis of neural signals and AI-driven personalized treatments. With more than three orders of magnitude reduction in the total data required for data processing and feature extraction, the high power- and memory-efficiency of neuromorphic computing to hardware-firmware co-design can be considered as the solution-in-the-making to resource-constraint implantable neuromodulation systems. This perspective introduces the concept of Neuromorphic Neuromodulation, a new breed of closed-loop responsive feedback system. It highlights its potential to revolutionize implantable brain-machine microsystems for patient-specific treatment.

RevDate: 2024-12-20

Mooselu MG, Nikoo MR, Liltved H, et al (2024)

Assessing road construction effects on turbidity in adjacent water bodies using Sentinel-1 and Sentinel-2.

The Science of the total environment, 957:177554.

Road construction significantly affects water resources by introducing contaminants, fragmenting habitats, and degrading water quality. This study examines the use of Remote Sensing (RS) data from Sentinel-1 (S1) and Sentinel-2 (S2) in Google Earth Engine (GEE) to perform a spatio-temporal analysis of turbidity in adjacent water bodies during the construction and operation of the E18 Arendal-Tvedestrand highway in southeastern Norway from 2017 to 2021. S1 radiometric data helped delineate water extents, while S2 Top of Atmosphere (TOA) multispectral data, corrected using the Modified Atmospheric correction for INland waters (MAIN), were used to estimate turbidity levels. To ensure a comprehensive time series of RS data, we utilized S2-TOA data corrected with the MAIN algorithm rather than S2 Bottom Of Atmosphere (BOA) data. We validated the MAIN algorithm's accuracy against GLORIA (Global Observatory of Lake Responses to Interventions and Drivers) observations of surface water reflectance in lakes globally. Subsequently, the corrected S2 data were used to calculate turbidity with the Novoa and Nechad retrieval algorithms and compared with GLORIA turbidity observations. Findings indicate that the MAIN algorithm adequately estimates water-leaving surface reflectance (Pearson correlation > 0.7 for wavelengths between 490 and 705 nm) and turbidity (Pearson correlation > 0.6 for both algorithms), with Nechad identified as the more effective algorithm. We then used the MAIN-corrected S2 images to estimate turbidity in the study area and evaluated the results against local gauge data and observational reports. The results indicate that the proposed framework effectively captures trends and patterns of turbidity variation in the study area. The findings verify that road construction can increase turbidity in adjacent water bodies and emphasize that employing RS data on cloud platforms like GEE can provide insights for effective long-term water quality management strategies during construction and operation phases.

RevDate: 2024-11-18

Guo H, Huang R, Z Xu (2024)

The design of intelligent highway transportation system in smart city based on the internet of things.

Scientific reports, 14(1):28122 pii:10.1038/s41598-024-79903-0.

The design of an intelligent expressway transportation system based on the Internet of Things is studied to improve the safety, travel experience, and operation management of expressways. The characteristics of Internet of Things and cloud computing technology and their applications on the expressway are analyzed, and the system design requirements of expressway intelligent transportation are established. The overall architecture of the system is then studied and designed, and the IaaS, PaaS, and SaaS layers of the cloud platform are designed and deployed. The intelligent information system makes the expressway highly information-driven. Simulation experiments reveal that the system needs only 120 milliseconds of accident processing time, far lower than an intelligent transportation system that uses only edge computing technology (201 milliseconds) or only cloud computing technology (443 milliseconds). Meanwhile, the accident response time is only 12 s, which is also superior to other models. In terms of cost-effectiveness, the monthly cost of the system is 7004 yuan, with a CPU utilization rate of 53%, demonstrating good cost-effectiveness and resource utilization efficiency. In addition, compared with the existing system, the average traffic congestion time has been reduced by 25%, the traffic accident rate by 18%, and the accident rate by 27%. This research on the intelligent traffic system design for expressways thus effectively improves expressway safety, travel services, and operation management.

RevDate: 2024-11-15

Parente DJ (2024)

Leveraging the All of Us Database for Primary Care Research with Large Datasets.

Journal of the American Board of Family Medicine : JABFM pii:jabfm.2023.230453R2 [Epub ahead of print].

The National Institutes of Health (NIH) are supporting the All of Us research program, a large multicenter initiative to accelerate precision medicine. The All of Us database contains information on more than 400,000 individuals spanning thousands of medical conditions, drug exposure types, and laboratory test types. These data can be correlated with genomic information and with survey data on social and environmental factors which influence health. A core principle of the All of Us program is that participants should reflect the diversity present in the United States population. The All of Us database has advanced many areas of medicine but is currently underutilized by primary care and public health researchers. In this Special Communication article, I seek to reduce the "barrier to entry" for primary care researchers to develop new projects within the All of Us Researcher Workbench. This Special Communication discusses (1) obtaining access to the database, (2) using the database securely and responsibly, (3) the key design concepts of the Researcher Workbench, and (4) details of data set extraction and analysis in the cloud computing environment. Fully documented tutorial programs in the R statistical programming language and Python are provided alongside this article, which researchers may freely adapt under the open-source MIT license. The primary care research community should use the All of Us database to accelerate innovation in primary care research, make epidemiologic discoveries, promote community health, and further the infrastructure-building strategic priority of the family medicine 2024 to 2030 National Research Strategy.

RevDate: 2024-11-17

Batchu RK, Bikku T, Thota S, et al (2024)

A novel optimization-driven deep learning framework for the detection of DDoS attacks.

Scientific reports, 14(1):28024 pii:10.1038/s41598-024-77554-9.

A distributed denial of service (DDoS) attack is one of the most hazardous assaults in cloud computing or networking. By depleting resources, this attack renders services unavailable to end users and leads to significant financial and reputational damage. Hence, identifying such threats is crucial to minimizing losses in revenue, market share, and productivity, and to protecting brand reputation. In this study, we implemented an effective intrusion detection system using a deep learning approach. The suggested framework includes three phases: data pre-processing, data balancing, and classification. First, we prepare valid data for further processing. Then, we balance the pre-processed data with a conditional generative adversarial network (CGAN), which minimizes bias toward the majority classes. Finally, we distinguish whether traffic is malicious or benign using a stacked sparse denoising autoencoder (SSDAE) with a firefly-black widow (FA-BW) hybrid optimization algorithm. All experiments are validated on the CICDDoS2019 dataset and compared with well-established techniques. From these findings, we observed that the proposed strategy detects DDoS attacks significantly more accurately than other approaches. Based on our findings, this study highlights the crucial role played by advanced deep learning techniques and hybrid optimization algorithms in strengthening cybersecurity against DDoS attacks.

RevDate: 2024-12-01
CmpDate: 2024-11-14

Nagarajan R, Kondo M, Salas F, et al (2024)

Economics and Equity of Large Language Models: Health Care Perspective.

Journal of medical Internet research, 26:e64226.

Large language models (LLMs) continue to exhibit noteworthy capabilities across a spectrum of areas, including emerging proficiencies across the health care continuum. Successful LLM implementation and adoption depend on digital readiness, modern infrastructure, a trained workforce, privacy, and an ethical regulatory landscape. These factors can vary significantly across health care ecosystems, dictating the choice of a particular LLM implementation pathway. This perspective discusses 3 LLM implementation pathways-training from scratch pathway (TSP), fine-tuned pathway (FTP), and out-of-the-box pathway (OBP)-as potential onboarding points for health systems while facilitating equitable adoption. The choice of a particular pathway is governed by needs as well as affordability. Therefore, the risks, benefits, and economics of these pathways across 4 major cloud service providers (Amazon, Microsoft, Google, and Oracle) are presented. While cost comparisons, such as on-demand and spot pricing across the cloud service providers for the 3 pathways, are presented for completeness, the usefulness of managed services and cloud enterprise tools is elucidated. Managed services can complement the traditional workforce and expertise, while enterprise tools, such as federated learning, can overcome sample size challenges when implementing LLMs using health care data. Of the 3 pathways, TSP is expected to be the most resource-intensive regarding infrastructure and workforce while providing maximum customization, enhanced transparency, and performance. Because TSP trains the LLM using enterprise health care data, it is expected to harness the digital signatures of the population served by the health care system with the potential to impact outcomes. The use of pretrained models in FTP is a limitation. It may impact its performance because the training data used in the pretrained model may have hidden bias and may not necessarily be health care-related. However, FTP provides a balance between customization, cost, and performance. While OBP can be rapidly deployed, it provides minimal customization and transparency without guaranteeing long-term availability. OBP may also present challenges in interfacing seamlessly with downstream applications in health care settings with variations in pricing and use over time. Lack of customization in OBP can significantly limit its ability to impact outcomes. Finally, potential applications of LLMs in health care, including conversational artificial intelligence, chatbots, summarization, and machine translation, are highlighted. While the 3 implementation pathways discussed in this perspective have the potential to facilitate equitable adoption and democratization of LLMs, transitions between them may be necessary as the needs of health systems evolve. Understanding the economics and trade-offs of these onboarding pathways can guide their strategic adoption and demonstrate value while impacting health care outcomes favorably.

RevDate: 2025-01-04
CmpDate: 2024-12-13

Hasavari S, P Esmaeilzadeh (2024)

Appropriately Matching Transport Care Units to Patients in Interhospital Transport Care: Implementation Study.

JMIR formative research, 8:e65626.

BACKGROUND: In interfacility transport care, a critical challenge exists in accurately matching ambulance response levels to patients' needs, often hindered by limited access to essential patient data at the time of transport requests. Existing systems cannot integrate patient data from sending hospitals' electronic health records (EHRs) into the transfer request process, primarily due to privacy concerns, interoperability challenges, and the sensitive nature of EHR data. We introduce a distributed digital health platform, Interfacility Transport Care (ITC)-InfoChain, designed to solve this problem without compromising EHR security or data privacy.

OBJECTIVE: This study aimed to detail the implementation of ITC-InfoChain, a secure, blockchain-based platform designed to enhance real-time data sharing without compromising data privacy or EHR security.

METHODS: The ITC-InfoChain platform prototype was implemented on Amazon Web Services cloud infrastructure, using Hyperledger Fabric as a permissioned blockchain. Key elements included participant registration, identity management, and patient data collection isolated from the sending hospital's EHR system. The client program submits encrypted patient data to a distributed ledger, accessible to the receiving facility's critical care unit at the time of transport request and emergency medical services (EMS) teams during transport through the PatienTrack web app. Performance was evaluated through key performance indicators such as data transaction times and scalability across transaction loads.

RESULTS: The ITC-InfoChain demonstrated strong performance and scalability. Data transaction times averaged 3.1 seconds for smaller volumes (1-20 transactions) and 6.4 seconds for 100 transactions. Optimized configurations improved processing times to 1.8-1.9 seconds for 400 transactions. These results confirm the platform's capacity to handle high transaction volumes, supporting timely, real-time data access for decision-making during transport requests and patient transfers.

CONCLUSIONS: The ITC-InfoChain platform addresses the challenge of matching appropriate transport units to patient needs by ensuring data privacy, integrity, and real-time data sharing, enhancing the coordination of patient care. The platform's success suggests potential for regional pilots and broader adoption in secure health care systems. Stakeholder resistance due to blockchain unfamiliarity and data privacy concerns remains. Funding has been sought to support a pilot program to address these challenges through targeted education and engagement.

RevDate: 2024-11-14

Gupta R, Zuquim G, H Tuomisto (2024)

Seamless Landsat-7 and Landsat-8 data composites covering all Amazonia.

Data in brief, 57:111034.

The use of satellite remote sensing has considerably improved scientific understanding of the heterogeneity of Amazonian rainforests. However, the persistent cloud cover and strong Bidirectional Reflectance Distribution Function (BRDF) effects make it difficult to produce up-to-date satellite image composites over the huge extent of Amazonia. Advanced pre-processing and pixel-based compositing over an extended time period are needed to fill the data gaps caused by clouds and to achieve consistency in pixel values across space. Recent studies have found that the multidimensional median, also known as medoid, algorithm is robust to outliers and noise, and thereby provides a useful approach for pixel-based compositing. Here we describe Landsat-7 and Landsat-8 composites covering all Amazonia that were produced using Landsat data from the years 2013-2021 and processed with Google Earth Engine (GEE). These products aggregate reflectance values over a relatively long time, and are, therefore, especially useful for identifying permanent characteristics of the landscape, such as vegetation heterogeneity that is driven by differences in geologically defined edaphic conditions. To make similar compositing possible over other areas and time periods (including shorter time periods for change detection), we make the workflow available in GEE. Visual inspection and comparison with other Landsat products confirmed that the pre-processing workflow was efficient and the composites are seamless and without data gaps, although some artifacts present in the source data remain. Basin-wide Landsat-7 and Landsat-8 composites are expected to facilitate both local and broad-scale ecological and biogeographical studies, species distribution modeling, and conservation planning in Amazonia.
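
The medoid idea described above can be sketched directly in numpy: for each pixel, keep the observation whose spectral values minimize the summed distance to all other observations of that pixel. This is a small, self-contained illustration, not the GEE workflow itself; the array shapes are arbitrary and cloud masking is omitted.

import numpy as np

rng = np.random.default_rng(0)
T, B, H, W = 8, 6, 50, 50
stack = rng.random((T, B, H, W))   # reflectance time series: (dates, bands, rows, cols)

# pairwise spectral distances between acquisition dates at every pixel
diff = stack[:, None, :, :, :] - stack[None, :, :, :, :]   # (T, T, B, H, W)
dist = np.sqrt((diff ** 2).sum(axis=2))                    # (T, T, H, W)
total = dist.sum(axis=1)                                   # summed distance to the other dates
medoid_idx = total.argmin(axis=0)                          # (H, W) chosen date per pixel

# gather the medoid observation for every pixel
composite = np.take_along_axis(stack, medoid_idx[None, None, :, :], axis=0)[0]  # (B, H, W)
print(composite.shape)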

RevDate: 2024-11-16
CmpDate: 2024-11-13

Ko G, Kim PG, Yoon BH, et al (2024)

Closha 2.0: a bio-workflow design system for massive genome data analysis on high performance cluster infrastructure.

BMC bioinformatics, 25(1):353.

BACKGROUND: The explosive growth of next-generation sequencing data has resulted in ultra-large-scale datasets and significant computational challenges. As the cost of next-generation sequencing (NGS) has decreased, the amount of genomic data has surged globally. However, the cost and complexity of the computational resources required continue to be substantial barriers to leveraging big data. A promising solution to these computational challenges is cloud computing, which provides researchers with the necessary CPUs, memory, storage, and software tools.

RESULTS: Here, we present Closha 2.0, a cloud computing service that offers a user-friendly platform for analyzing massive genomic datasets. Closha 2.0 is designed to provide a cloud-based environment that enables all genomic researchers, including those with limited or no programming experience, to easily analyze their genomic data. The new 2.0 version of Closha has more user-friendly features than the previous 1.0 version. Firstly, the workbench features a script editor that supports Python, R, and shell script programming, enabling users to write scripts and integrate them into their pipelines. This functionality is particularly useful for downstream analysis. Second, Closha 2.0 runs on containers, which execute each tool in an independent environment. This provides a stable environment and prevents dependency issues and version conflicts among tools. Additionally, users can execute each step of a pipeline individually, allowing them to test applications at each stage and adjust parameters to achieve the desired results. We also updated a high-speed data transmission tool called GBox that facilitates the rapid transfer of large datasets.

CONCLUSIONS: The analysis pipelines on Closha 2.0 are reproducible, with all analysis parameters and inputs being permanently recorded. Closha 2.0 simplifies multi-step analysis with drag-and-drop functionality and provides a user-friendly interface for genomic scientists to obtain accurate results from NGS data. Closha 2.0 is freely available at https://www.kobic.re.kr/closha2.
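
The container-per-tool design highlighted in this entry can be sketched very simply: each pipeline step runs in its own container image, so tool versions and dependencies cannot conflict. The Python sketch below uses Docker via subprocess purely as an illustration; the image names, tool commands, and container runtime are assumptions, not details of the Closha 2.0 implementation.

```python
import subprocess
from pathlib import Path

def run_step(image, command, workdir):
    """Run one pipeline step inside its own container (illustrative only).

    Each tool executes in an isolated image, mirroring the container-per-tool
    idea described for Closha 2.0; this is not the Closha runtime itself.
    """
    workdir = Path(workdir).resolve()
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{workdir}:/data",   # expose inputs/outputs to the container
        "-w", "/data",
        image,
    ] + command
    subprocess.run(cmd, check=True)

# Hypothetical two-step pipeline: each tool runs against its own image,
# so dependency versions cannot conflict between steps.
run_step("biocontainers/fastqc:v0.11.9_cv8", ["fastqc", "sample_R1.fastq.gz"], "work")
run_step("biocontainers/bwa:v0.7.17_cv1", ["bwa", "index", "ref.fa"], "work")
```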

RevDate: 2024-11-27
CmpDate: 2024-11-27

Aslam RW, Naz I, Shu H, et al (2024)

Multi-temporal image analysis of wetland dynamics using machine learning algorithms.

Journal of environmental management, 371:123123.

Wetlands play a crucial role in enhancing groundwater quality, mitigating natural hazards, controlling erosion, and providing essential habitats for unique flora and wildlife. Despite their significance, wetlands are facing decline in various global locations, underscoring the need for effective mapping, monitoring, and predictive modeling approaches. Recent advances in machine learning, time series earth observation data, and cloud computing have opened up new possibilities to address the challenges of large-scale wetlands mapping and dynamics forecasting. This research conducts a comprehensive analysis of wetland dynamics in the Thatta region, encompassing Haleji & Kinjhar Lake in Pakistan, and evaluates the efficacy of different classification systems. Leveraging Google Earth Engine, Landsat imagery, and various spectral indices, we assess four classification techniques to derive accurate wetland mapping results. Our findings demonstrate that Random Forest emerged as the most efficient and accurate method, achieving 87% accuracy across all time periods. Change detection analysis reveals a significant and alarming decline in Haleji & Kinjhar Lake wetlands over 1990-2020, primarily driven by agricultural expansion, urbanization, groundwater extraction, and climate change impacts like rising temperatures and reduced precipitation. If left unaddressed, this continued wetland loss could have severe implications for aquatic and terrestrial species, water and soil quality, wildlife populations, and local livelihoods. The study predicts future wetland dynamics under different scenarios: enhancing drainage for farmland conversion (10-20% increase), increasing urbanization (10-20% expansion), escalating groundwater extraction (7.2 m annual decline), and climate change (up to 5 °C warming and 54% precipitation deficit by 2050). These scenarios forecast sustained long-term wetland deterioration driven by anthropogenic pressures and climate change. To guide conservation strategies, the research integrates satellite data analytics, machine learning algorithms, and spatial modeling to generate actionable insights into multifaceted wetland vulnerabilities. Findings provide a robust baseline to inform policies ensuring sustainable management and preservation of these vital ecosystems amidst escalating human and climate threats. Over 1990-2020, the Thatta region witnessed a 352.8 sq. km loss of wetlands, necessitating urgent restoration efforts to safeguard their invaluable ecosystem services.
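
For readers unfamiliar with the Random Forest classification step mentioned above, the scikit-learn sketch below trains a classifier on a hypothetical table of per-pixel spectral features (e.g., Landsat bands and indices such as NDVI or NDWI) with placeholder data. The study itself ran its classification on Google Earth Engine, so this is only a conceptual stand-in, not the authors' workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training table: rows are labelled pixels, columns are
# spectral bands/indices sampled from Landsat (placeholder random values here).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))           # placeholder features
y = rng.integers(0, 4, size=2000)        # placeholder classes, e.g. water/marsh/cropland/urban

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```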

RevDate: 2024-11-11

Wei J, Wang L, Zhou Z, et al (2024)

BloodPatrol: Revolutionizing Blood Cancer Diagnosis - Advanced Real-Time Detection Leveraging Deep Learning & Cloud Technologies.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

Cloud computing and Internet of Things (IoT) technologies are gradually becoming the technological changemakers in cancer diagnosis. Blood cancer is an aggressive disease affecting the blood, bone marrow, and lymphatic system, and its early detection is crucial for subsequent treatment. Flow cytometry has been widely studied as a common method for detecting blood cancer. However, its high computation and resource consumption severely limit its practical application, especially in regions with limited medical and computational resources. In this study, with the help of cloud computing and IoT technologies, we develop BloodPatrol, a dynamic blood cancer monitoring and diagnostic model based on an intelligent feature weight fusion mechanism. The proposed model captures the dual-view importance relationship between cell samples and features, greatly improving prediction accuracy and significantly surpassing previous models. In addition, benefiting from the processing power of cloud computing, BloodPatrol can run on a distributed network to efficiently process large-scale cell data, providing immediate and scalable blood cancer diagnostic services. We have also created a cloud diagnostic platform to facilitate access to our work; the latest access link and updates are available at: https://github.com/kkkayle/BloodPatrol.

RevDate: 2024-11-18
CmpDate: 2024-11-18

Huggins DR, Phillips CL, Carlson BR, et al (2024)

The LTAR Cropland Common Experiment at R. J. Cook Agronomy Farm.

Journal of environmental quality, 53(6):839-850.

Dryland agriculture in the Inland Pacific Northwest is challenged in part by rising input costs for seed, fertilizer, and agrichemicals; threats to water quality and soil health, including soil erosion, organic matter decline, acidification, compaction, and nutrient imbalances; lack of cropping system diversity; herbicide resistance; and air quality concerns from atmospheric emissions of particulate matter and greenhouse gases. Technological advances such as rapid data acquisition, artificial intelligence, cloud computing, and robotics have helped fuel innovation and discovery but have also further complicated agricultural decision-making and research. Meeting these challenges has promoted interest in (1) supporting long-term research that enables assessment of ecosystem service trade-offs and advances sustainable and regenerative approaches to agriculture, and (2) developing coproduction research approaches that actively engage decision-makers and accelerate innovation. The R. J. Cook Agronomy Farm (CAF) Long-Term Agroecosystem Research (LTAR) site established a cropping systems experiment in 2017 that contrasts prevailing (PRV) and alternative (ALT) practices at field scales over a proposed 30-year time frame. The experimental site is on the Washington State University CAF near Pullman, WA. Cropping practices include a wheat-based cropping system with wheat (Triticum aestivum L.), canola (Brassica napus, variety napus), chickpea (Cicer arietinum), and winter pea (Pisum sativum), with winter wheat produced every third year under the ALT practices of continuous no-tillage and precision applied N, compared to the PRV practice of reduced tillage (RT) and uniformly applied agrichemicals. Biophysical measurements are made at georeferenced locations that capture field-scale spatial variability at temporal intervals that follow approved methods for each agronomic and environmental metric. Research to date is assessing spatial and temporal variations in cropping system performance (e.g., crop yield, soil health, and water and air quality) for ALT versus PRV and associated tradeoffs. Future research will explore a coproduction approach with the intent of advancing discovery, innovation, and impact through collaborative stakeholder-researcher partnerships that direct and implement research priorities.

RevDate: 2024-11-27
CmpDate: 2024-11-27

Ranjan AK, AK Gorai (2024)

Assessment of global carbon dynamics due to mining-induced forest cover loss during 2000-2019 using satellite datasets.

Journal of environmental management, 371:123271.

Mining activities significantly contribute to forest cover loss (FCL), subsequently altering global carbon dynamics and exacerbating climate change. The present study aims to estimate the contributions of mining-induced FCL to carbon sequestration loss (CSL) and carbon dioxide (CO2) emissions from 2000 to 2019 using proxy datasets. For the FCL analysis, the global FCL data at 30 m spatial resolution, developed by Hansen et al. (2013), were employed in the Google Earth Engine (GEE) cloud platform. Furthermore, for the CSL and CO2 emissions assessment, Moderate Resolution Imaging Spectroradiometer (MODIS)-based Net Primary Productivity (NPP) data and the biomass datasets developed by Zhang and Liang (2020) were used, respectively. The outcomes of the study showed approximately 16,785.90 km² of FCL globally due to mining activities, resulting in an estimated CSL of ∼36,363.17 Gg CO2/year and CO2 emissions of ∼490,525.30 Gg CO2. Indonesia emerged as the largest contributor to mining-induced FCL, accounting for 3,622.78 km² of deforestation, or 21.58% of the global total. Brazil and Canada followed, with significant deforestation and CO2 emissions. The findings revealed that mining activities are a major driver of deforestation, particularly in resource-rich regions, leading to substantial environmental degradation. The relative FCL was notably high in smaller countries like Suriname and Guyana, where mining activities constituted a significant proportion of total deforestation. The present study underscores the urgent need for robust regulatory frameworks, sustainable land management practices, and coordinated international efforts to mitigate the adverse environmental impacts of mining. The findings of this study can inform policymakers and stakeholders, leading to more effective conservation strategies and benefiting society by promoting environmental sustainability and resilience against climate change.
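
To make the biomass-to-emissions step concrete, the small sketch below converts a dry biomass loss into CO2 using two widely used conventions: a carbon fraction of roughly 0.47 and the 44/12 molar-mass ratio of CO2 to carbon. These factors are illustrative assumptions for the reader; the exact conversion used in the cited study is not reproduced here.

```python
def biomass_loss_to_co2(biomass_loss_mg, carbon_fraction=0.47):
    """Convert dry biomass loss (Mg) to CO2 (Mg).

    Uses a ~0.47 carbon fraction and the 44/12 molar-mass ratio of CO2 to C.
    Both factors are common conventions, not necessarily the paper's values.
    """
    carbon = biomass_loss_mg * carbon_fraction
    return carbon * 44.0 / 12.0

# Example: 1,000 Mg of cleared dry biomass corresponds to roughly 1,723 Mg CO2
print(biomass_loss_to_co2(1_000))
```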

RevDate: 2024-11-16

Tangorra FM, Buoio E, Calcante A, et al (2024)

Internet of Things (IoT): Sensors Application in Dairy Cattle Farming.

Animals : an open access journal from MDPI, 14(21):.

The expansion of dairy cattle farms and the increase in herd size have made the control and management of animals more complex, with potentially negative effects on animal welfare, health, productive/reproductive performance and consequently farm income. Precision Livestock Farming (PLF) is based on the use of sensors to monitor individual animals in real time, enabling farmers to manage their herds more efficiently and optimise their performance. The integration of sensors and devices used in PLF with the Internet of Things (IoT) technologies (edge computing, cloud computing, and machine learning) creates a network of connected objects that improve the management of individual animals through data-driven decision-making processes. This paper illustrates the main PLF technologies used in the dairy cattle sector, highlighting how the integration of sensors and devices with IoT addresses the challenges of modern dairy cattle farming, leading to improved farm management.

RevDate: 2024-11-15

Yang D, Wu J, Y He (2024)

Optimizing the Agricultural Internet of Things (IoT) with Edge Computing and Low-Altitude Platform Stations.

Sensors (Basel, Switzerland), 24(21):.

Using low-altitude platform stations (LAPSs) in the agricultural Internet of Things (IoT) enables the efficient and precise monitoring of vast and hard-to-reach areas, thereby enhancing crop management. By integrating edge computing servers into LAPSs, data can be processed directly at the edge in real time, significantly reducing latency and dependency on remote cloud servers. Motivated by these advancements, this paper explores the application of LAPSs and edge computing in the agricultural IoT. First, we introduce an LAPS-aided edge computing architecture for the agricultural IoT, in which each task is segmented into several interdependent subtasks for processing. Next, we formulate a total task processing delay minimization problem, taking into account constraints related to task dependency and priority, as well as equipment energy consumption. Then, by treating the task dependencies as directed acyclic graphs, a heuristic task processing algorithm with priority selection is developed to solve the formulated problem. Finally, the numerical results show that the proposed edge computing scheme outperforms state-of-the-art works and the local computing scheme in terms of the total task processing delay.
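
The abstract above describes splitting each task into interdependent subtasks and scheduling them, under dependency and priority constraints, to minimize total processing delay. The sketch below is a generic priority-aware list-scheduling heuristic over a small task DAG, greedily placing each subtask on the device or the LAPS edge server, whichever finishes it sooner. The task graph, speeds, and uplink delay are made-up values; this is not the paper's algorithm.

```python
import heapq

# Hypothetical subtask DAG: name -> (compute_cost, prerequisite names, priority)
TASKS = {
    "sense":  (2.0, [],         1),
    "filter": (3.0, ["sense"],  2),
    "detect": (6.0, ["filter"], 3),
    "report": (1.0, ["detect"], 1),
}
SPEED = {"device": 1.0, "edge": 4.0}   # hypothetical work units per second
UPLINK_DELAY = 0.8                     # hypothetical delay for offloading to the LAPS edge server

def schedule(tasks):
    """Priority-aware list scheduling over a task DAG (illustrative only)."""
    finish, done = {}, set()
    clock = {"device": 0.0, "edge": 0.0}
    ready = [(-p, name) for name, (_, deps, p) in tasks.items() if not deps]
    heapq.heapify(ready)
    while ready:
        _, name = heapq.heappop(ready)
        cost, deps, _ = tasks[name]
        earliest = max([finish[d] for d in deps], default=0.0)
        # Greedy choice: place the subtask where it finishes sooner.
        options = {}
        for proc in ("device", "edge"):
            start = max(earliest, clock[proc])
            extra = UPLINK_DELAY if proc == "edge" else 0.0
            options[proc] = start + extra + cost / SPEED[proc]
        proc = min(options, key=options.get)
        clock[proc] = finish[name] = options[proc]
        done.add(name)
        # Release subtasks whose prerequisites are now complete.
        for other, (_, deps2, p2) in tasks.items():
            if other not in done and all(d in done for d in deps2):
                if all(other != n for _, n in ready):
                    heapq.heappush(ready, (-p2, other))
    return finish

print(schedule(TASKS))  # per-subtask finish times; the maximum is the total processing delay
```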

RevDate: 2024-11-16
CmpDate: 2024-11-09

Orro A, Geminiani GA, Sicurello F, et al (2024)

A Cloud Infrastructure for Health Monitoring in Emergency Response Scenarios.

Sensors (Basel, Switzerland), 24(21):.

Wearable devices have a significant impact on society, and recent advancements in modern sensor technologies are opening up new possibilities for healthcare applications. Continuous vital sign monitoring using Internet of Things solutions can be a crucial tool for emergency management, reducing risks in rescue operations and ensuring the safety of workers. The massive amounts of data, high network traffic, and computational demands of a typical monitoring application can be challenging to manage with traditional infrastructure. Cloud computing provides a solution with its built-in resilience and elasticity capabilities. This study presents a Cloud-based monitoring architecture for remote vital sign tracking of paramedics and medical workers through the use of a mobile wearable device. The system monitors vital signs such as electrocardiograms and breathing patterns during work sessions, and it forwards real-time alarm events to a personnel management center. In this study, 900 paramedics and emergency workers were monitored using wearable devices over a period of 12 months. Data from these devices were collected, processed via the Cloud infrastructure, and analyzed to assess the system's reliability and scalability. The results showed a significant improvement in worker safety and operational efficiency. This study demonstrates the potential of Cloud-based systems and Internet of Things devices in enhancing emergency response efforts.
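
A minimal sketch of the device-to-cloud alarm idea described above: a gateway checks incoming vital-sign samples against simple thresholds and posts alarm events to a cloud ingestion endpoint. The HTTPS endpoint, field names, and threshold values are all assumptions for illustration; the paper does not specify its transport or rules.

```python
import time
import requests  # assumption: the cloud ingestion layer exposes an HTTPS endpoint

ALARM_ENDPOINT = "https://cloud.example.org/api/alarms"   # hypothetical URL

def check_vitals(sample):
    """Very simplified threshold rules; a real system would use richer analytics."""
    alarms = []
    if not 40 <= sample["heart_rate"] <= 150:
        alarms.append("heart_rate_out_of_range")
    if sample["resp_rate"] > 30:
        alarms.append("respiratory_rate_high")
    return alarms

sample = {"worker_id": "P-0042", "heart_rate": 162, "resp_rate": 22, "ts": time.time()}
for alarm in check_vitals(sample):
    requests.post(ALARM_ENDPOINT, json={"alarm": alarm, **sample}, timeout=5)
```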

RevDate: 2024-11-16

Zhang Y, Xia G, Yu C, et al (2024)

Fault-Tolerant Scheduling Mechanism for Dynamic Edge Computing Scenarios Based on Graph Reinforcement Learning.

Sensors (Basel, Switzerland), 24(21):.

With the proliferation of Internet of Things (IoT) devices and edge nodes, edge computing has taken on much of the real-time data processing and low-latency response work that was previously managed by cloud computing. However, edge computing often encounters challenges such as network instability and dynamic resource variations, which can lead to task interruptions or failures. To address these issues, developing a fault-tolerant scheduling mechanism is crucial to ensure that a system continues to operate efficiently even when some nodes experience failures. In this paper, we propose an innovative fault-tolerant scheduling model based on asynchronous graph reinforcement learning. This model incorporates a deep reinforcement learning framework built upon a graph neural network, allowing it to accurately capture the complex communication relationships between computing nodes. The model generates fault-tolerant scheduling actions as output, ensuring robust performance in dynamic environments. Additionally, we introduce an asynchronous model update strategy, which enhances the model's real-time dynamic scheduling capability through multi-threaded parallel interaction with the environment and frequent model updates across the running threads. The experimental results demonstrate that the proposed method outperformed the baseline algorithms in terms of quality of service (QoS) assurance and fault-tolerant scheduling capabilities.
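
To show what "a graph neural network that scores scheduling actions" can look like at its simplest, the plain-PyTorch sketch below runs one round of neighbourhood message passing over the node-communication graph and outputs per-node action scores. The layer sizes, action set, and adjacency are invented; the cited paper's architecture and asynchronous training loop are not reproduced here.

```python
import torch
import torch.nn as nn

class GraphSchedulerPolicy(nn.Module):
    """One round of message passing, then per-node scheduling-action scores.

    Generic sketch of a GNN-based scheduling policy, not the paper's model.
    """
    def __init__(self, node_feats, hidden, n_actions):
        super().__init__()
        self.msg = nn.Linear(node_feats, hidden)
        self.update = nn.Linear(node_feats + hidden, hidden)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x, adj):
        # x: (nodes, node_feats) node states; adj: (nodes, nodes) row-normalised adjacency
        neigh = adj @ torch.relu(self.msg(x))                    # aggregate neighbour messages
        h = torch.relu(self.update(torch.cat([x, neigh], dim=-1)))
        return self.head(h)                                      # scores over candidate actions

# Toy usage: 4 edge nodes, 3 features each, 2 candidate actions (keep task / migrate task).
x = torch.randn(4, 3)
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float32)
adj = adj / adj.sum(dim=1, keepdim=True)                         # simple row normalisation
policy = GraphSchedulerPolicy(node_feats=3, hidden=16, n_actions=2)
print(policy(x, adj).shape)                                      # torch.Size([4, 2])
```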

RevDate: 2024-11-16
CmpDate: 2024-11-09

Oliveira D, S Mafra (2024)

Implementation of an Intelligent Trap for Effective Monitoring and Control of the Aedes aegypti Mosquito.

Sensors (Basel, Switzerland), 24(21):.

Aedes aegypti is a mosquito species known for its role in transmitting dengue fever, a viral disease prevalent in tropical and subtropical regions. Recognizable by its white markings and preference for urban habitats, this mosquito breeds in standing water near human dwellings. A promising approach to combating the proliferation of mosquitoes is the use of smart traps equipped with advanced technologies to attract, capture, and monitor them. The integration of technologies such as the Internet of Things (IoT), cloud computing, big data, and artificial intelligence has the potential to revolutionize pest control, significantly improving mosquito monitoring and control; the application of machine learning (ML) algorithms and computer vision for the identification and classification of Aedes aegypti is a crucial part of this process. This article proposes the development of a smart trap for the selective control of winged insects, combining IoT devices, high-resolution cameras, and advanced ML algorithms for insect detection and classification. The intelligent system uses the YOLOv7 (You Only Look Once v7) algorithm, which detects and counts insects in real time, combined with LoRa/LoRaWAN connectivity and IoT system intelligence. The most significant results include 97% accuracy in detecting Aedes aegypti, 100% accuracy in identifying bees, and 90.1% accuracy in classifying butterflies in the laboratory. Field trials validated the system and identified areas for continued improvement. This adaptive approach is effective in combating Aedes aegypti mosquitoes in real time.
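
The counting-and-telemetry side of such a trap can be sketched very compactly. In the example below, the YOLOv7 inference call is abstracted behind a clearly hypothetical detect() stub (the real trap would load trained YOLOv7 weights); class names, the confidence threshold, and the payload format are likewise assumptions for illustration.

```python
from collections import Counter

TARGET = "aedes_aegypti"   # hypothetical class name
CONF_THRESHOLD = 0.5

def detect(frame):
    """Placeholder for a YOLOv7 inference call.

    Assumed to return a list of (class_name, confidence) detections; this stub
    exists only to illustrate the counting logic, not the detector itself.
    """
    return [("aedes_aegypti", 0.93), ("bee", 0.88), ("aedes_aegypti", 0.71)]

def count_insects(frame):
    counts = Counter()
    for cls, conf in detect(frame):
        if conf >= CONF_THRESHOLD:
            counts[cls] += 1
    return counts

counts = count_insects(frame=None)
payload = {
    "trap_id": "T-07",
    "aedes": counts.get(TARGET, 0),
    "other": sum(counts.values()) - counts.get(TARGET, 0),
}
print(payload)   # in a deployed trap, this summary would be sent over LoRa/LoRaWAN
```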

RevDate: 2024-11-16

Zheng H, Hou H, Tian D, et al (2024)

Evaluating the Patterns of Maize Development in the Hetao Irrigation Region Using the Sentinel-1 GRD SAR Bipolar Descriptor.

Sensors (Basel, Switzerland), 24(21):.

Assessing maize yield is critical, as it is directly influenced by the crop's growth conditions. Therefore, real-time monitoring of maize growth is necessary. Regular monitoring of maize growth indicators is essential for optimizing irrigation management and evaluating agricultural yield. However, quantifying the physical aspects of regional crop development using time-series data is a challenging task. This research was conducted at the Dengkou Experimental Station in the Hetao irrigation area, Northwest China, to develop a monitoring tool for regional maize growth parameters. The tool aimed to establish a correlation between satellite-based physical data and actual crop growth on the ground. This study utilized dual-polarization Sentinel-1A GRD SAR data, accessible via the Google Earth Engine (GEE) cloud platform. Three polarization descriptors were introduced: θc (pseudo-scattering type parameter), Hc (pseudo-scattering entropy parameter), and mc (co-polar purity parameter). Using an unsupervised clustering framework, the maize-growing area was classified into several scattering mechanism groups, and the growth characteristics of the maize crop were analyzed. The results showed that throughout the maize development cycle, the parameters θc, Hc, and mc varied within the ranges of 26.82° to 42.13°, 0.48 to 0.89, and 0.32 to 0.85, respectively. During the leaf development stage, approximately 80% of the maize sampling points were concentrated in the low-to-moderate entropy scattering zone. As the plants reached the big trumpet stage, the entire cluster shifted to the high-entropy vegetation scattering zone. Finally, at maturity, over 60% of the sampling points were located in the high-entropy distribution scattering zone. This study presents an advanced analytical tool for crop management and yield estimation by utilizing precise and high-resolution spatial and temporal data on crop growth dynamics. The tool enhances the accuracy of crop growth management across different spatial and temporal conditions.
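
The descriptors θc, Hc, and mc above are derived from the dual-polarization (VV/VH) covariance information in Sentinel-1 GRD data. As a hedged illustration of the underlying idea only, the sketch below computes a generic two-channel scattering entropy from the eigenvalues of a 2×2 covariance matrix; the exact definitions of θc, Hc, and mc used in the cited paper are not reproduced here, and the input values are made up.

```python
import numpy as np

def dual_pol_entropy(c11, c12, c22):
    """Generic 2x2-covariance scattering entropy (not the paper's exact Hc).

    c11, c22: mean intensities of the two channels (e.g. |S_vv|^2, |S_vh|^2);
    c12: their complex cross-correlation, averaged over a local window.
    """
    cov = np.array([[c11, c12], [np.conj(c12), c22]])
    eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = eigvals / eigvals.sum()
    return float(-(p * np.log2(p)).sum())   # 0 = pure scattering, 1 = fully depolarised

# Toy example with made-up window averages
print(round(dual_pol_entropy(0.20, 0.02 + 0.01j, 0.05), 3))
```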

RevDate: 2024-11-23
CmpDate: 2024-11-23

Adams MCB, Griffin C, Adams H, et al (2024)

Adapting the open-source Gen3 platform and kubernetes for the NIH HEAL IMPOWR and MIRHIQL clinical trial data commons: Customization, cloud transition, and optimization.

Journal of biomedical informatics, 159:104749.

OBJECTIVE: This study aims to provide the decision-making framework, strategies, and software used to successfully deploy the first combined chronic pain and opioid use data clinical trial data commons using the Gen3 platform.

MATERIALS AND METHODS: The approach involved adapting the open-source Gen3 platform and Kubernetes for the needs of the NIH HEAL IMPOWR and MIRHIQL networks. Key steps included customizing the Gen3 architecture, transitioning from Amazon to Google Cloud, adapting data ingestion and harmonization processes, ensuring security and compliance for the Kubernetes environment, and optimizing performance and user experience.

RESULTS: The primary result was a fully operational IMPOWR data commons built on Gen3. Key features include a modular architecture supporting diverse clinical trial data types, automated processes for data management, fine-grained access control and auditing, and researcher-friendly interfaces for data exploration and analysis.

DISCUSSION: The successful development of the Wake Forest IDEA-CC data commons represents a significant milestone for chronic pain and addiction research. Harmonized, FAIR data from diverse studies can be discovered in a secure, scalable repository. Challenges remain in long-term maintenance and governance, but the commons provides a foundation for accelerating scientific progress. Key lessons learned include the importance of engaging both technical and domain experts, the need for flexible yet robust infrastructure, and the value of building on established open-source platforms.

CONCLUSION: The WF IDEA-CC Gen3 data commons demonstrates the feasibility and value of developing a shared data infrastructure for chronic pain and opioid use research. The lessons learned can inform similar efforts in other clinical domains.
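
For readers less familiar with operating a data commons on Kubernetes, as described in this entry, the short sketch below uses the official Kubernetes Python client to list pods in a namespace and flag any that are not running. The "gen3" namespace name is a hypothetical example, and this check is purely illustrative; it is not part of the Gen3 deployment tooling.

```python
from kubernetes import client, config   # assumption: kubectl-style access to the cluster

# Minimal operational check: list pods in a (hypothetical) "gen3" namespace
# and report any that are not in the Running phase.
config.load_kube_config()
v1 = client.CoreV1Api()

unhealthy = []
for pod in v1.list_namespaced_pod("gen3").items:
    phase = pod.status.phase
    if phase != "Running":
        unhealthy.append((pod.metadata.name, phase))

print("pods needing attention:", unhealthy or "none")
```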


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the Human Genome Project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.

Support this website:
Order from Amazon
We will earn a commission.

This is a must read book for anyone with an interest in invasion biology. The full title of the book lays out the author's premise — The New Wild: Why Invasive Species Will Be Nature's Salvation. Not only is species movement not bad for ecosystems, it is the way that ecosystems respond to perturbation — it is the way ecosystems heal. Even if you are one of those who is absolutely convinced that invasive species are actually "a blight, pollution, an epidemic, or a cancer on nature", you should read this book to clarify your own thinking. True scientific understanding never comes from just interacting with those with whom you already agree. R. Robbins

963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)