QUERY RUN: 28 Sep 2021 at 01:37
HITS: 2041

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography, created 28 Sep 2021 at 01:37

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion
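
For readers who want to regenerate or extend this bibliography programmatically, the same query string can be submitted to PubMed through the NCBI E-utilities. The sketch below uses Biopython's Entrez module; the contact e-mail address and the retmax cap are placeholders, and the hit count returned will drift from the snapshot above as PubMed is updated.

```python
# Minimal sketch: run the bibliography's PubMed query via NCBI E-utilities (Biopython).
# The e-mail address and retmax value are placeholders, not part of the original query.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

query = ('cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) '
         'NOT pmcbook NOT ispreviousversion')

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print("Total hits:", record["Count"])
print("First PMIDs:", record["IdList"][:10])
```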

Citations: The Papers (from PubMed®)


RevDate: 2021-09-27

Kumar R, Al-Turjman F, Srinivas LNB, et al (2021)

ANFIS for prediction of epidemic peak and infected cases for COVID-19 in India.

Neural computing & applications pii:6412 [Epub ahead of print].

Corona Virus Disease 2019 (COVID-19) is a continuing, extensive global event affecting the health of several million people and sometimes leading to death. Outbreak prediction and taking precautionary steps are the only ways to prevent the spread of COVID-19. This paper presents an Adaptive Neuro-Fuzzy Inference System (ANFIS)-based machine learning technique to predict the possible outbreak in India. The proposed ANFIS-based prediction system tracks the growth of the epidemic based on previous data sets fetched from cloud computing. The proposed ANFIS technique predicts the epidemic peak and COVID-19 infected cases from the cloud data sets. ANFIS is chosen for this study because it combines numerical and linguistic knowledge and has the ability to classify data and identify patterns. The proposed technique not only predicts the outbreak but also tracks the disease and suggests a measurable policy to manage the COVID-19 epidemic. The obtained predictions show that the proposed technique tracks the growth of the COVID-19 epidemic very effectively. The results show that the growth of the infection rate decreases at the end of 2020 and that the epidemic peak is delayed by 40-60 days. The prediction results using the proposed ANFIS technique show a low Mean Square Error (MSE) of 1.184 × 10^-3 with an accuracy of 86%. The study provides important information for public health providers and the government to control the COVID-19 epidemic.

RevDate: 2021-09-27

Rufino Henrique PS, R Prasad (2021)

6G Networks for Next Generation of Digital TV Beyond 2030.

Wireless personal communications pii:9070 [Epub ahead of print].

This paper proposes a novel 6G QoS framework over the future 6G wireless architecture to offer excellent Quality of Service (QoS) for the next generation of digital TV beyond 2030. During the last 20 years, the way society watches and consumes TV and cinema has changed radically. Over-The-Top content platforms based on cloud services were created, followed by a commercial video consumption model offering flexibility for subscribers, such as Video on Demand. Besides the new business models created, the network infrastructure and wireless technologies also permitted the streaming of high-quality TV and film formats such as High Definition, followed by the latest widespread TV standardization, Ultra-High-Definition TV. Mobile broadband services opened up the possibility for consumers to watch TV or video content anywhere at any time. However, the network infrastructure needs continuous improvement, primarily when crises like the coronavirus disease (COVID-19) worldwide pandemic create immense network traffic congestion. The outcome of that congestion was a decrease in QoS for such multimedia services, impacting the user experience. More power-hungry video applications are beginning to test the resilience and future roadmap of 5G and Beyond 5G (B5G) networks. For this reason, 6G architecture planning must focus on offering the ultimate QoS for prosumers beyond 2030.

RevDate: 2021-09-26

Jennings MR, Turner C, Bond RR, et al (2021)

Code-free cloud computing service to facilitate rapid biomedical digital signal processing and algorithm development.

Computer methods and programs in biomedicine, 211:106398 pii:S0169-2607(21)00472-7 [Epub ahead of print].

BACKGROUND AND OBJECTIVE: Cloud computing has the ability to offload processing tasks to remote computing resources. Presently, the majority of biomedical digital signal processing involves a ground-up approach of writing code in a variety of languages. This may reduce the time a researcher or health professional has to process data, while increasing the barrier to entry for those with little or no software development experience. In this study, we aim to provide a service capable of handling and processing biomedical data via a code-free interface. Furthermore, our solution should support multiple file formats and processing languages while saving user inputs for repeated use.

METHODS: A web interface via the Python-based Django framework was developed with the potential to shorten the time taken to create an algorithm, encourage code reuse, and democratise digital signal processing tasks for non-technical users using a code-free user interface. A user can upload data, create an algorithm and download the result. Using discrete functions and multi-lingual scripts (e.g. MATLAB or Python), the user can manipulate data rapidly in a repeatable manner. Multiple data file formats are supported by a decision-based file handler and user authentication-based storage allocation method.

RESULTS: The proposed system has been demonstrated as effective in handling multiple input data types in various programming languages, including Python and MATLAB. This, in turn, has the potential to reduce currently experienced bottlenecks in cross-platform development of bio-signal processing algorithms. The source code for this system has been made available to encourage reuse. A cloud service for digital signal processing has the ability to reduce the apparent complexity and abstract the need to understand the intricacies of signal processing.

CONCLUSION: We have introduced a web-based system capable of reducing the barrier to entry for inexperienced programmers. Furthermore, our system is reproducible and scalable for use in a variety of clinical or research fields.
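
The methods above describe a decision-based file handler and discrete, chainable processing functions behind a code-free interface. The sketch below illustrates that idea in plain Python; it is not the authors' Django code, and the supported file extensions, sampling rate, and filter settings are assumptions for illustration.

```python
# Illustrative sketch only (not the authors' Django service): a decision-based file
# handler loads biomedical signals by extension, then a user-chosen chain of
# discrete functions is applied, mirroring the code-free pipeline idea above.
import numpy as np
import pandas as pd
import scipy.signal as sig

def load_signal(path):
    """Pick a loader based on the file extension (decision-based file handling)."""
    if path.endswith(".csv"):
        return pd.read_csv(path).iloc[:, 0].to_numpy()
    if path.endswith(".npy"):
        return np.load(path)
    raise ValueError(f"Unsupported file format: {path}")

# Discrete, reusable processing functions a user could chain from a web form.
def bandpass(x, fs, lo, hi, order=4):
    b, a = sig.butter(order, [lo, hi], btype="band", fs=fs)
    return sig.filtfilt(b, a, x)

def notch(x, fs, freq=50.0, q=30.0):
    b, a = sig.iirnotch(freq, q, fs)
    return sig.filtfilt(b, a, x)

FUNCS = {"bandpass": bandpass, "notch": notch}
PIPELINE = [("bandpass", dict(fs=250, lo=0.5, hi=40)),  # assumed ECG-like settings
            ("notch", dict(fs=250))]

def run_pipeline(path):
    x = load_signal(path)
    for name, kwargs in PIPELINE:
        x = FUNCS[name](x, **kwargs)
    return x
```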

RevDate: 2021-09-27

Weinstein RS, Holcomb MJ, Mo J, et al (2021)

An Ostomy Self-management Telehealth Intervention for Cancer Survivors: Technology-Related Findings From a Randomized Controlled Trial.

Journal of medical Internet research, 23(9):e26545 pii:v23i9e26545.

BACKGROUND: An Ostomy Self-management Telehealth (OSMT) intervention by nurse educators and peer ostomates can equip new ostomates with critical knowledge regarding ostomy care. A telehealth technology assessment aim was to measure telehealth engineer support requirements for telehealth technology-related (TTR) incidents encountered during OSMT intervention sessions held via a secure cloud-based videoconferencing service, Zoom for Healthcare.

OBJECTIVE: This paper examines technology-related challenges, issues, and opportunities encountered in the use of telehealth in a randomized controlled trial intervention for cancer survivors living with a permanent ostomy.

METHODS: The Arizona Telemedicine Program provided telehealth engineering support for 105 OSMT sessions, scheduled for 90 to 120 minutes each, over a 2-year period. The OSMT groups included up to 15 participants, comprising 4-6 ostomates, 4-6 peer ostomates, 2 nurse educators, and 1 telehealth engineer. OSMT-session TTR incidents were recorded contemporaneously in detailed notes by the research staff. TTR incidents were categorized and tallied.

RESULTS: A total of 97.1% (102/105) of OSMT sessions were completed as scheduled. In total, 3 OSMT sessions were not held owing to non-technology-related reasons. Of the 93 ostomates who participated in OSMT sessions, 80 (86%) completed their OSMT curriculum. TTR incidents occurred in 36.3% (37/102) of the completed sessions, with varying disruptive impacts. No sessions were canceled or rescheduled because of TTR incidents. Disruptions from TTR incidents were minimized by following the TTR incident prevention and incident response plans.

CONCLUSIONS: Telehealth videoconferencing technology can enable ostomates to participate in ostomy self-management education by incorporating dedicated telehealth engineering support. Potentially, OSMT greatly expands the availability of ostomy self-management education for new ostomates.

TRIAL REGISTRATION: ClinicalTrials.gov NCT02974634; https://clinicaltrials.gov/ct2/show/NCT02974634.

RevDate: 2021-09-24

Setiani P, Devianto LA, F Ramdani (2021)

Rapid estimation of CO2 emissions from forest fire events using cloud-based computation of google earth engine.

Environmental monitoring and assessment, 193(10):669.

One of the main sources of greenhouse gases is forest fire, with carbon dioxide as its main constituent. With increasing global surface temperatures, the probability of forest fire events also increases. A method that enables rapid quantification of emissions is all the more necessary to estimate the environmental impact. This study introduces the application of the Google Earth Engine platform to monitor burned areas in forest fire events on Mount Arjuno, Indonesia, during the 2016-2019 period, using Landsat-8 and Sentinel-2 satellite imagery. The events particularly affected grassland and tropical forest areas, as well as a fraction of agricultural areas, with a total estimated emission of 2.5 × 10^3 tCO2/km2 of burned area. Higher carbon dioxide emissions were also observed, consistent with the higher local surface temperature as well as the average CO total column mixing ratio retrieved from the Sentinel-5P Tropospheric Monitoring Instrument during the period of analysis.
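
As a rough illustration of the kind of Earth Engine workflow described above, the sketch below maps burned area from pre- and post-fire Landsat-8 composites using the differenced Normalized Burn Ratio (dNBR). The collection ID, band names, dates, region, and threshold are illustrative assumptions, not the authors' exact parameters.

```python
# Illustrative Google Earth Engine sketch (Python API): burned-area mapping with dNBR.
# Collection ID, band names, dates, region and threshold are assumptions.
import ee

ee.Initialize()

region = ee.Geometry.Point([112.589, -7.765]).buffer(10000)  # approx. Mount Arjuno

def nbr(image):
    # NBR = (NIR - SWIR2) / (NIR + SWIR2); SR_B5/SR_B7 on Landsat-8 surface reflectance
    return image.normalizedDifference(["SR_B5", "SR_B7"])

l8 = ee.ImageCollection("LANDSAT/LC08/C02/T1_L2").filterBounds(region)

pre_fire = nbr(l8.filterDate("2019-05-01", "2019-06-30").median())
post_fire = nbr(l8.filterDate("2019-10-01", "2019-11-30").median())

dnbr = pre_fire.subtract(post_fire)
burned = dnbr.gt(0.27)  # commonly used dNBR threshold for moderate burn severity

# Sum the burned-pixel area (km^2) over the region of interest.
area_km2 = burned.multiply(ee.Image.pixelArea()).divide(1e6) \
                 .reduceRegion(ee.Reducer.sum(), region, 30)
print(area_km2.getInfo())
```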

RevDate: 2021-09-23

Alharbi A, MD Abdur Rahman (2021)

Review of Recent Technologies for Tackling COVID-19.

SN computer science, 2(6):460.

The current pandemic caused by the COVID-19 virus requires more effort, experience, and science-sharing to overcome the damage caused by the pathogen. The fast and wide human-to-human transmission of the COVID-19 virus demands a significant role of the newest technologies in the form of local and global computing and information sharing, data privacy, and accurate tests. The advancements of deep neural networks, cloud computing solutions, blockchain technology, and beyond 5G (B5G) communication have contributed to the better management of the COVID-19 impacts on society. This paper reviews recent attempts to tackle the COVID-19 situation using these technological advancements.

RevDate: 2021-09-20

Dibiasi L, Risi M, Tortora G, et al (2021)

A Cloud Approach for Melanoma Detection based on Deep Learning Networks.

IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].

In the era of digitized images, the goal is to extract information from them and create new knowledge through Computer Vision, Machine Learning, and Deep Learning techniques. This allows their use for early diagnosis and the subsequent determination of treatment for many pathologies. In the specific case treated here, deep neural networks are used in the dermatological field to distinguish between melanoma and non-melanoma images. In this work we underline two essential points of melanoma detection research. The first is that even a simple modification of the parameters of the dataset changes the accuracy of the classifiers, while working on the same original dataset. The second is the need for a system architecture that is more flexible in updating the training datasets for the classification of this pathology. In this context, the proposed architecture pursues the goal of developing and implementing a hybrid architecture based on Cloud, Fog and Edge Computing in order to provide a Melanoma Detection service based on clinical and/or dermoscopic images. At the same time, this architecture must be able to handle the amount of data to be analyzed while reducing the running time of the necessary computational operations. This has been demonstrated with experiments carried out on a single machine and on different distributed systems, highlighting how a distributed approach guarantees output in a much more acceptable time without the need to rely fully on data scientists' skills.

RevDate: 2021-09-21

Qawqzeh Y, Alharbi MT, Jaradat A, et al (2021)

A review of swarm intelligence algorithms deployment for scheduling and optimization in cloud computing environments.

PeerJ. Computer science, 7:e696.

Background: This review focuses on recent publications applying swarm intelligence algorithms (particle swarm optimization (PSO), ant colony optimization (ACO), artificial bee colony (ABC), and the firefly algorithm (FA)) to scheduling and optimization problems. Swarm intelligence (SI) can be described as the intelligent behavior of natural living animals, fishes, and insects. It is based on agent groups or populations that maintain reliable connections among themselves and with their environment. Within such a group or population, each agent (member) performs according to certain rules that make it capable of maximizing the overall utility of that group or population. It can be described as collective intelligence among self-organized members of a certain group or population. Indeed, biology has inspired many researchers to mimic the behavior of natural swarms (birds, animals, or insects) to solve some computational problems effectively.

Methodology: SI techniques have been utilized in cloud computing environments in search of optimum scheduling strategies. Hence, the most recent publications (2015-2021) on SI algorithms are reviewed and summarized.

Results: It is clear that the number of algorithms for cloud computing optimization is increasing rapidly. The number of journal papers related to PSO, ACO, ABC, and FA has visibly increased. It is also noticeable that many recently emerging algorithms are based on amendments to the original SI algorithms, especially the PSO algorithm.

Conclusions: The major intention of this work is to motivate interested researchers to develop and innovate new SI-based solutions that can handle complex and multi-objective computational problems.

RevDate: 2021-09-21

Ali O, Ishak MK, MKL Bhatti (2021)

Emerging IoT domains, current standings and open research challenges: a review.

PeerJ. Computer science, 7:e659.

Over the last decade, the Internet of Things (IoT) domain has grown dramatically, from ultra-low-power hardware design to cloud-based solutions, and now, with the rise of 5G technology, a new horizon for edge computing on IoT devices will be introduced. A wide range of communication technologies has steadily evolved in recent years, representing a diverse range of domain areas and communication specifications. Because of the heterogeneity of technology and interconnectivity, the true realisation of the IoT ecosystem is currently hampered by multiple dynamic integration challenges. In this context, several emerging IoT domains necessitate a complete re-modeling, design, and standardisation from the ground up in order to achieve seamless IoT ecosystem integration. The Internet of Nano-Things (IoNT), Internet of Space-Things (IoST), Internet of Underwater-Things (IoUT) and Social Internet of Things (SIoT) are investigated in this paper with a broad future scope based on their integration and ability to source other IoT domains by highlighting their application domains, state-of-the-art research, and open challenges. To the best of our knowledge, there is little or no information on the current state of these ecosystems, which is the motivating factor behind this article. Finally, the paper summarises the integration of these ecosystems with current IoT domains and suggests future directions for overcoming the challenges.

RevDate: 2021-09-18

Fletcher MD (2021)

Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners?.

Frontiers in neuroscience, 15:723877.

Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This "electro-haptic stimulation" improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.

RevDate: 2021-09-19

Andleeb S, Abbasi WA, Ghulam Mustafa R, et al (2021)

ESIDE: A computationally intelligent method to identify earthworm species (E. fetida) from digital images: Application in taxonomy.

PloS one, 16(9):e0255674.

Earthworms (Crassiclitellata), being ecosystem engineers, significantly affect the physical, chemical, and biological properties of the soil by recycling organic material, increasing nutrient availability, and improving soil structure. The ecological efficiency of earthworms varies with species; therefore, the role of taxonomy in earthworm study is significant. The taxonomy of earthworms cannot reliably be established through morphological characteristics because the small and simple body plan of the earthworm does not have anatomically complex and highly specialized structures. Recently, molecular techniques have been adopted to accurately classify earthworm species, but these techniques are time-consuming and costly. To combat this issue, in this study, we propose a machine learning-based earthworm species identification model that uses digital images of earthworms. We performed a stringent performance evaluation not only through 10-fold cross-validation and on an external validation dataset but also in real settings by involving an experienced taxonomist. In all the evaluation settings, our proposed model has given state-of-the-art performance and justified its use to aid earthworm taxonomy studies. We made this model openly accessible through a cloud-based webserver and Python code available at https://sites.google.com/view/wajidarshad/software and https://github.com/wajidarshad/ESIDE.

RevDate: 2021-09-16

Gxokwe S, Dube T, D Mazvimavi (2021)

Leveraging Google Earth Engine platform to characterize and map small seasonal wetlands in the semi-arid environments of South Africa.

The Science of the total environment, 803:150139 pii:S0048-9697(21)05214-1 [Epub ahead of print].

Although significant scientific research strides have been made in mapping the spatial extents and ecohydrological dynamics of wetlands in semi-arid environments, the focus on small wetlands remains a challenge. This is due to the sensing characteristics of remote sensing platforms and the lack of robust data processing techniques. Advancements in data analytic tools, such as the introduction of the Google Earth Engine (GEE) platform, provide unique opportunities for improved assessment of small and scattered wetlands. This study thus assessed the capabilities of the GEE cloud-computing platform in characterising small seasonally flooded wetlands, using new-generation Sentinel-2 data from 2016 to 2020. Specifically, the study assessed the spectral separability of different land cover classes for two detected wetlands, using Sentinel-2 multi-year composite water and vegetation indices, and identified the most suitable GEE machine learning algorithm for accurately detecting and mapping semi-arid seasonal wetlands. This was achieved using the object-based Random Forest (RF), Support Vector Machine (SVM), Classification and Regression Tree (CART) and Naïve Bayes (NB) algorithms in GEE. The results demonstrated the capability of the GEE platform to characterize wetlands with acceptable accuracy. All algorithms performed well in mapping the two wetlands except for the NB method, which had the lowest overall classification accuracy. These findings underscore the relevance of the GEE platform, Sentinel-2 data and advanced algorithms in characterizing small and seasonal semi-arid wetlands.
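
A minimal sketch of the classification step described above: training one of the named GEE classifiers (Random Forest) on Sentinel-2 composite indices against labelled reference polygons. The training asset ID, band and index choices, and parameters are assumptions, not the authors' configuration.

```python
# Illustrative sketch of land-cover classification in Google Earth Engine with a
# Random Forest, in the spirit of the workflow above. Asset IDs, indices and
# parameters are assumptions.
import ee

ee.Initialize()

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterDate("2019-01-01", "2019-12-31")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .median())

ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")   # vegetation index
ndwi = s2.normalizedDifference(["B3", "B8"]).rename("NDWI")   # water index
stack = s2.select(["B2", "B3", "B4", "B8"]).addBands(ndvi).addBands(ndwi)

# 'wetland_training' is a hypothetical FeatureCollection with a 'landcover' property.
training_polygons = ee.FeatureCollection("users/example/wetland_training")

samples = stack.sampleRegions(collection=training_polygons,
                              properties=["landcover"], scale=10)

rf = ee.Classifier.smileRandomForest(numberOfTrees=100) \
       .train(features=samples, classProperty="landcover",
              inputProperties=stack.bandNames())

classified = stack.classify(rf)  # per-pixel land-cover map
```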

RevDate: 2021-09-16

Nasser N, Emad-Ul-Haq Q, Imran M, et al (2021)

A smart healthcare framework for detection and monitoring of COVID-19 using IoT and cloud computing.

Neural computing & applications [Epub ahead of print].

Coronavirus (COVID-19) is a very contagious infection that has drawn the world's attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may also fail to capture the data's intricacy. An automatic COVID-19 detection system based on computed tomography (CT) scan or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT-cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare facilities at a low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use a sensor for recording, transferring, and tracking healthcare data. CT scan images from patients are sent to the cloud by IoT sensors, where the cognitive module is stored. The system decides the patient status by examining the CT scan images. The DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to the cognitive module, we use a state-of-the-art classification algorithm based on DL, i.e., ResNet50, to detect and classify whether the patients are normal or infected by COVID-19. We validate the proposed system's robustness and effectiveness using two publicly available benchmark datasets (Covid-Chestxray dataset and Chex-Pert dataset). First, a dataset of 6000 images is prepared from the above two datasets. The proposed system was trained on 80% of the collected images and tested on the remaining 20%. Cross-validation is performed using a tenfold cross-validation technique for performance evaluation. The results indicate that the proposed system gives an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%. The results clearly show that the accuracy, specificity, sensitivity, and F1-score of our proposed method are high. The comparison shows that the proposed system performs better than the existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems. It will also support medical experts in COVID-19 screening and provide a valuable second opinion.
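
A minimal Keras sketch of the kind of ResNet50 transfer-learning classifier described above, distinguishing COVID-19 from normal CT images. The directory layout, image size, and training settings are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch: ResNet50-based binary classifier for CT images (normal vs. COVID-19).
# Paths, image size and hyperparameters are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_images/train", image_size=IMG_SIZE, batch_size=32)   # hypothetical path
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_images/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # keep ImageNet features, train only the classification head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # 1 = COVID-19, 0 = normal

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall(name="sensitivity")])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```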

RevDate: 2021-09-24

Sood SK, KS Rawat (2021)

A fog assisted intelligent framework based on cyber physical system for safe evacuation in panic situations.

Computer communications, 178:297-306.

In the current scenario of the COVID-19 pandemic and worldwide health emergency, one of the major challenges is to identify and predict the panic health of persons. The management of panic health and on-time evacuation prevents COVID-19 infection incidences in educational institutions and public places. Therefore, a system is required that predicts infection and suggests a safe evacuation path to people, controlling panic scenarios and mortality. In this paper, a fog-assisted cyber physical system is introduced to control panic attacks and COVID-19 infection risk in public places. The proposed model uses the concepts of physical and cyber space. The physical space helps in real-time data collection and the transmission of alerts to the stakeholders. Cyberspace consists of two spaces: fog space and cloud space. The fog space facilitates determination of panic health and COVID-19 symptoms, with alert generation for risk-affected areas. Cloud space monitors and predicts the person's panic health and symptoms using the SARIMA model. Furthermore, it also identifies risk-prone regions in the affected place using Geographical Population Analysis. The performance evaluation confirms the efficiency of panic health determination and prediction based on SARIMA, together with risk-mapping accuracy. The proposed system provides efficient, on-time, priority-based evacuation from risk-affected places, protecting people from panic attacks and infection caused by COVID-19.
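
The cloud-space prediction step above relies on a SARIMA model. The sketch below shows how such a seasonal forecast can be produced with statsmodels; the synthetic series and the (seasonal) orders are assumptions, not the authors' parameters.

```python
# Illustrative SARIMA forecasting sketch with statsmodels. The synthetic series,
# order and seasonal_order are assumptions for illustration only.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily counts of panic/symptom reports collected by the fog layer,
# with a weekly seasonal pattern and noise.
rng = np.random.default_rng(0)
y = pd.Series(50 + 10 * np.sin(np.arange(120) * 2 * np.pi / 7) + rng.normal(0, 3, 120))

model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 7))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=14)   # predict the next two weeks
print(forecast.round(1))
```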

RevDate: 2021-09-13

Lin Z, Zou J, Liu S, et al (2021)

Correction to "A Cloud Computing Platform for Scalable Relative and Absolute Binding Free Energy Prediction: New Opportunities and Challenges for Drug Discovery".

RevDate: 2021-09-14

Sang GM, Xu L, P de Vrieze (2021)

A Predictive Maintenance Model for Flexible Manufacturing in the Context of Industry 4.0.

Frontiers in big data, 4:663466.

The Industry 4.0 paradigm is the focus of modern manufacturing system design. The integration of cutting-edge technologies such as the Internet of things, cyber-physical systems, big data analytics, and cloud computing requires a flexible platform supporting the effective optimization of manufacturing-related processes, e.g., predictive maintenance. Existing predictive maintenance studies generally focus on either a predictive model without considering the maintenance decisions or maintenance optimizations based on the degradation models of the known system. To address this, we propose PMMI 4.0, a Predictive Maintenance Model for Industry 4.0, which utilizes a newly proposed solution PMS4MMC for supporting an optimized maintenance schedule plan for multiple machine components driven by a data-driven LSTM model for RUL (remaining useful life) estimation. The effectiveness of the proposed solution is demonstrated using a real-world industrial case with related data. The results showed the validity and applicability of this work.

RevDate: 2021-09-14

Ahmadi Z, Haghi Kashani M, Nikravan M, et al (2021)

Fog-based healthcare systems: A systematic review.

Multimedia tools and applications [Epub ahead of print].

The healthcare system aims to provide a reliable and organized solution to enhance the health of human society. Studying the history of patients can help physicians to consider patients' needs in healthcare system design and service offering, which leads to an increase in patient satisfaction. Therefore, healthcare is becoming a growing, contested market. With this significant growth in healthcare systems, challenges such as huge data volume, response time, latency, and security vulnerability are raised. Fog computing, as a well-known distributed architecture, can help to solve such challenges. In fog computing architecture, processing components are placed between the end devices and cloud components, and they execute applications. This architecture is suitable for applications, such as healthcare systems, that need a real-time response and low latency. In this paper, a systematic review of available approaches in the field of fog-based healthcare systems is presented; the challenges of its application in healthcare are explored, classified, and discussed. First, the fog computing approaches in healthcare are categorized into three main classes: communication, application, and resource/service. Then, they are discussed and compared based on their tools, evaluation methods, and evaluation metrics. Finally, based on the observations, some open issues and challenges are highlighted for further studies in fog-based healthcare.

RevDate: 2021-09-14

Kolak M, Li X, Lin Q, et al (2021)

The US COVID Atlas: A dynamic cyberinfrastructure surveillance system for interactive exploration of the pandemic.

Transactions in GIS : TG, 25(4):1741-1765.

Distributed spatial infrastructures leveraging cloud computing technologies can tackle issues of disparate data sources and address the need for data-driven knowledge discovery and more sophisticated spatial analysis central to the COVID-19 pandemic. We implement a new, open source spatial middleware component (libgeoda) and system design to scale development quickly to effectively meet the need for surveilling county-level metrics in a rapidly changing pandemic landscape. We incorporate, wrangle, and analyze multiple data streams from volunteered and crowdsourced environments to leverage multiple data perspectives. We integrate explorative spatial data analysis (ESDA) and statistical hotspot standards to detect infectious disease clusters in real time, building on decades of research in GIScience and spatial statistics. We scale the computational infrastructure to provide equitable access to data and insights across the entire USA, demanding a basic but high-quality standard of ESDA techniques. Finally, we engage a research coalition and incorporate principles of user-centered design to ground the direction and design of Atlas application development.

RevDate: 2021-09-13

Gómez D, Romero J, López P, et al (2021)

Cloud architecture for electronic health record systems interoperability.

Technology and health care : official journal of the European Society for Engineering and Medicine pii:THC212806 [Epub ahead of print].

BACKGROUND: Current Electronic Health Record (EHR) systems are built using different data representations and information models, which makes achieving information exchange difficult.

OBJECTIVE: Our aim was to propose a scalable architecture that allows the integration of information from different EHR systems.

METHODS: A cloud-based interoperable EHR architecture is proposed through the standardization and integration of patient electronic health records. The data is stored in a cloud repository with high availability features. Stakeholders can retrieve the patient EHR by querying only the integrated data repository. The OpenEHR two-level approach is applied according to the HL7-FHIR standards. We validated our architecture by comparing it with 5 different works (CHISTAR, ARIEN, DIRAYA, LLPHR and INEHRIS) using a set of selected axes and a scoring method.

RESULTS: The problem was reduced to a single point of communication between each EHR system and the integrated data repository. By combining the cloud computing paradigm with selected health informatics standards, we obtained a generic and scalable architecture that complies 100% with interoperability requirements according to the evaluation framework applied.

CONCLUSIONS: The architecture allowed the integration of several EHR systems, adapting them through the use of standards and ensuring availability thanks to cloud computing features.

RevDate: 2021-09-14

Pang J, Bachmatiuk A, Yang F, et al (2021)

Applications of Carbon Nanotubes in the Internet of Things Era.

Nano-micro letters, 13(1):191.

The post-Moore era has boosted progress in carbon nanotube-based transistors. Indeed, 5G communication and cloud computing are stimulating research into applications of carbon nanotubes in electronic devices. In this perspective, we present the latest trends in carbon nanotube research, including high-frequency transistors, biomedical sensors and actuators, brain-machine interfaces, and flexible logic devices and energy storage. Future opportunities are outlined, calling on scientists and engineers to engage with these emerging topics.

RevDate: 2021-09-10

Grzesik P, Augustyn DR, Wyciślik Ł, et al (2021)

Serverless computing in omics data analysis and integration.

Briefings in bioinformatics pii:6367629 [Epub ahead of print].

A comprehensive analysis of omics data can require vast computational resources and access to varied data sources that must be integrated into complex, multi-step analysis pipelines. Execution of many such analyses can be accelerated by applying the cloud computing paradigm, which provides scalable resources for storing data of different types and parallelizing data analysis computations. Moreover, these resources can be reused for different multi-omics analysis scenarios. Traditionally, developers are required to manage a cloud platform's underlying infrastructure, configuration, maintenance and capacity planning. The serverless computing paradigm simplifies these operations by automatically allocating and maintaining both servers and virtual machines, as required for analysis tasks. This paradigm offers highly parallel execution and high scalability without manual management of the underlying infrastructure, freeing developers to focus on operational logic. This paper reviews serverless solutions in bioinformatics and evaluates their usage in omics data analysis and integration. We start by reviewing the application of the cloud computing model to a multi-omics data analysis and exposing some shortcomings of the early approaches. We then introduce the serverless computing paradigm and show its applicability for performing an integrative analysis of multiple omics data sources in the context of the COVID-19 pandemic.
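
To make the serverless pattern concrete, the sketch below shows an AWS Lambda-style handler for one per-sample step of a hypothetical omics pipeline: each invocation reads an input object, runs a trivial stand-in computation, and writes the result back to object storage. Bucket names, event fields, and the processing step are assumptions, not taken from the reviewed work.

```python
# Illustrative AWS Lambda-style handler for one per-sample omics processing step.
# Event fields, bucket layout and the "analysis" are assumptions; this sketches the
# serverless pattern, not a published pipeline.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event["bucket"]            # hypothetical event field
    key = event["key"]                  # hypothetical event field

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()

    # Trivial stand-in for the real analysis: count records per chromosome
    # in a tab-separated variant table.
    counts = {}
    for line in body.splitlines():
        if line.startswith("#"):
            continue
        chrom = line.split("\t")[0]
        counts[chrom] = counts.get(chrom, 0) + 1

    out_key = key + ".counts.json"
    s3.put_object(Bucket=bucket, Key=out_key, Body=json.dumps(counts).encode())
    return {"status": "ok", "output": out_key}
```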

RevDate: 2021-09-14
CmpDate: 2021-09-13

Mateo-Fornés J, Pagès-Bernaus A, Plà-Aragonés LM, et al (2021)

An Internet of Things Platform Based on Microservices and Cloud Paradigms for Livestock.

Sensors (Basel, Switzerland), 21(17):.

With the growing adoption of the Internet of Things (IoT) technology in the agricultural sector, smart devices are becoming more prevalent. The availability of new, timely, and precise data offers a great opportunity to develop advanced analytical models. Therefore, the platform used to deliver new developments to the final user is a key enabler for adopting IoT technology. This work presents a generic design of a software platform based on the cloud and implemented using microservices to facilitate the use of predictive or prescriptive analytics under different IoT scenarios. Several technologies are combined to comply with the essential features-scalability, portability, interoperability, and usability-that the platform must consider to assist decision-making in agricultural 4.0 contexts. The platform is prepared to integrate new sensor devices, perform data operations, integrate several data sources, transfer complex statistical model developments seamlessly, and provide a user-friendly graphical interface. The proposed software architecture is implemented with open-source technologies and validated in a smart farming scenario. The growth of a batch of pigs at the fattening stage is estimated from the data provided by a level sensor installed in the silo that stores the feed from which the animals are fed. With this application, we demonstrate how farmers can monitor the weight distribution and receive alarms when high deviations happen.
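
As an illustration of the microservice style described above, the sketch below shows one small Flask service that ingests silo level readings from an IoT sensor and exposes a feed-consumption estimate. The endpoints, payload fields, and capacity conversion are assumptions, not the platform's actual API.

```python
# Illustrative microservice sketch (Flask): ingest silo level readings and expose a
# feed-consumption estimate. Endpoints, fields and the 12 t capacity are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = []  # in a real deployment this would be a database or message queue

@app.post("/silo/<silo_id>/level")
def ingest(silo_id):
    payload = request.get_json()
    readings.append({"silo": silo_id,
                     "timestamp": payload["timestamp"],
                     "level_pct": float(payload["level_pct"])})
    return jsonify({"stored": len(readings)}), 201

@app.get("/silo/<silo_id>/consumption")
def consumption(silo_id):
    levels = [r["level_pct"] for r in readings if r["silo"] == silo_id]
    if len(levels) < 2:
        return jsonify({"error": "not enough data"}), 400
    # Drop in level (%) times an assumed silo capacity gives feed consumed (kg).
    consumed_kg = (levels[0] - levels[-1]) / 100.0 * 12000  # 12 t capacity assumed
    return jsonify({"silo": silo_id, "consumed_kg": consumed_kg})

if __name__ == "__main__":
    app.run(port=8080)
```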

RevDate: 2021-09-13
CmpDate: 2021-09-13

Kalyani Y, R Collier (2021)

A Systematic Survey on the Role of Cloud, Fog, and Edge Computing Combination in Smart Agriculture.

Sensors (Basel, Switzerland), 21(17):.

Cloud Computing is a well-established paradigm for building service-centric systems. However, ultra-low latency, high bandwidth, security, and real-time analytics are limitations of Cloud Computing when analysing and providing results for large amounts of data. Fog and Edge Computing offer solutions to the limitations of Cloud Computing. The number of agricultural domain applications that use a combination of Cloud, Fog, and Edge has been increasing over the last few decades. This article provides a systematic literature review of work done on Cloud, Fog, and Edge Computing applications in the smart agriculture domain from 2015 to date. The key objective of this review is to identify all relevant research on these new computing paradigms in smart agriculture and to propose a new architecture model combining Cloud, Fog, and Edge. Furthermore, it also analyses and examines the agricultural application domains, research approaches, and the combinations applied. Moreover, this survey discusses the components used in the architecture models and briefly explores the communication protocols used to interact from one layer to another. Finally, the challenges of smart agriculture and future research directions are briefly pointed out in this article.

RevDate: 2021-09-13
CmpDate: 2021-09-13

Stan RG, Băjenaru L, Negru C, et al (2021)

Evaluation of Task Scheduling Algorithms in Heterogeneous Computing Environments.

Sensors (Basel, Switzerland), 21(17):.

This work establishes a set of methodologies to evaluate the performance of any task scheduling policy in heterogeneous computing contexts. We formally state a scheduling model for hybrid edge-cloud computing ecosystems and conduct simulation-based experiments on large workloads. In addition to conventional cloud datacenters, we consider edge datacenters comprising smartphone and Raspberry Pi edge devices, which are battery powered. We define realistic capacities for the computational resources. Once a schedule is found, the various task demands may or may not be fulfilled by the resource capacities. We build a scheduling and evaluation framework and measure typical scheduling metrics such as mean waiting time, mean turnaround time, makespan, and throughput for the Round-Robin, Shortest Job First, Min-Min and Max-Min scheduling schemes. Our analysis and results show that the state-of-the-art independent task scheduling algorithms suffer from performance degradation, in terms of significant task failures and nonoptimal resource utilization of datacenters, in heterogeneous edge-cloud environments in comparison to cloud-only environments. In particular, for large sets of tasks, more than 25% of tasks fail to execute for each scheduling scheme owing to low battery or limited memory.
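
For reference, the sketch below computes the scheduling metrics named above (mean waiting time, mean turnaround time, makespan, throughput) for a single-resource Shortest Job First schedule; the task lengths are arbitrary illustrative values.

```python
# Small sketch of the scheduling metrics above for a single-resource
# Shortest Job First (SJF) schedule. Burst times are arbitrary examples.
def sjf_metrics(burst_times):
    order = sorted(burst_times)          # SJF: run the shortest tasks first
    waiting, turnaround, clock = [], [], 0.0
    for burst in order:
        waiting.append(clock)            # time spent queued before starting
        clock += burst
        turnaround.append(clock)         # completion time = waiting + burst
    makespan = clock
    return {
        "mean_waiting": sum(waiting) / len(waiting),
        "mean_turnaround": sum(turnaround) / len(turnaround),
        "makespan": makespan,
        "throughput": len(burst_times) / makespan,
    }

print(sjf_metrics([8.0, 4.0, 9.0, 5.0, 2.0]))
```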

RevDate: 2021-09-13
CmpDate: 2021-09-13

Resende JS, Magalhães L, Brandão A, et al (2021)

Towards a Modular On-Premise Approach for Data Sharing.

Sensors (Basel, Switzerland), 21(17):.

The growing demand for everyday data insights drives the pursuit of more sophisticated infrastructures and artificial intelligence algorithms. When combined with the growing number of interconnected devices, this raises concerns about scalability and privacy. The main problem is that devices can sense the environment and generate large volumes of possibly identifiable data. Public cloud-based technologies have been proposed as a solution due to their high availability and low entry costs. However, there are growing concerns regarding data privacy, especially with the introduction of the new General Data Protection Regulation, due to the inherent lack of control caused by using the off-premise computational resources on which public clouds rely. Users have no control over data uploaded to services such as the cloud, which increases the uncontrolled distribution of information to third parties. This work aims to provide a modular approach that uses a cloud-of-clouds to store persistent data and reduce upfront costs while allowing information to remain private and under users' control. In addition to storage, this work also focuses on usability modules that enable data sharing. Any user can securely share and analyze/compute the uploaded data using private computing without revealing private data. This private computation can include training machine learning (ML) models. To achieve this, we use a combination of state-of-the-art technologies, such as Multi-Party Computation (MPC) and K-anonymization, to produce a complete system with intrinsic privacy properties.
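
A minimal sketch of the k-anonymization idea mentioned above: a released table is k-anonymous when every combination of quasi-identifiers is shared by at least k records. The example records and the choice of quasi-identifiers are assumptions.

```python
# Minimal k-anonymity check: every quasi-identifier combination must occur at
# least k times before data are released. Example data and columns are assumptions.
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    return df.groupby(quasi_identifiers).size().min() >= k

records = pd.DataFrame({
    "age_band":  ["20-30", "20-30", "20-30", "30-40", "30-40"],
    "zip3":      ["101", "101", "101", "102", "102"],
    "diagnosis": ["A", "B", "A", "C", "A"],   # sensitive attribute, not generalized
})

print(is_k_anonymous(records, ["age_band", "zip3"], k=2))  # True
print(is_k_anonymous(records, ["age_band", "zip3"], k=3))  # False (only 2 records share '102')
```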

RevDate: 2021-09-14
CmpDate: 2021-09-13

Mutichiro B, Tran MN, YH Kim (2021)

QoS-Based Service-Time Scheduling in the IoT-Edge Cloud.

Sensors (Basel, Switzerland), 21(17):.

In edge computing, scheduling heterogeneous workloads with diverse resource requirements is challenging. Besides limited resources, the servers may be overwhelmed with computational tasks, resulting in lengthy task queues and congestion occasioned by unusual network traffic patterns. Additionally, Internet of Things (IoT)/Edge applications have different characteristics coupled with performance requirements, which determine whether edge applications can satisfy both deadlines and each user's QoS requirements. This study aims to address these restrictions by proposing a mechanism that improves cluster resource utilization and Quality of Service (QoS) in an edge cloud cluster in terms of service time. Containerization can provide a way to improve the performance of the IoT-Edge cloud by factoring in task dependencies and heterogeneous application resource demands. In this paper, we propose STaSA, a service-time-aware scheduler for the edge environment. The algorithm automatically assigns requests to different processing nodes and then schedules their execution under real-time constraints, thus minimizing the number of QoS violations. The effectiveness of our scheduling model is demonstrated through an implementation on KubeEdge, a container orchestration platform based on Kubernetes. Experimental results show significantly fewer QoS violations during scheduling and improved performance compared to the state of the art.

RevDate: 2021-09-10
CmpDate: 2021-09-10

Camargo MD, Silveira DT, Lazzari DD, et al (2021)

Nursing Activities Score: trajectory of the instrument from paper to cloud in a university hospital.

Revista da Escola de Enfermagem da U S P, 55:e20200233 pii:S0080-62342021000100531.

OBJECTIVE: To report the process of organization and construction of an information technology structure named Nursing Activities Score (NAS) Cloud Technology®.

METHOD: This project was based on the life cycle theory and has enabled the development of technological production through software engineering.

RESULTS: The NAS Cloud Technology® was developed for remote and collaborative access on a website hosted by Google Sites® and protected in a business environment by certified security and data protection measures compliant with the Health Insurance Portability and Accountability Act (HIPAA). In 2015, this system received more than 10,000 submissions/month, totaling 12 care units for critical patients covered by the information technology structure, with circa 200 nurses per day involved in the collection and hundreds of daily submissions, completing the transition from paper to cloud.

CONCLUSION: The development of NAS Cloud Technology® system has enabled the use of technology as a facilitating means for the use of Nursing care data, providing tools for decision-making on the nursing personnel sizing required for the care demands in the inpatient care units. The potential of cloud structures stands out due to their possibility of innovation, as well as low-cost access and high replicability of the information system.

RevDate: 2021-09-09

Luo X, Feng L, Xun H, et al (2021)

Rinegan: A Scalable Image Processing Architecture for Large Scale Surveillance Applications.

Frontiers in neurorobotics, 15:648101.

Image processing is widely used in intelligent robots, significantly improving the surveillance capabilities of smart buildings, industrial parks, and border ports. However, relying on the camera installed in a single robot is not enough, since it only provides a narrow field of view as well as limited processing performance. In particular, a target person such as a suspect may appear anywhere, and tracking the suspect in such a large-scale scene requires cooperation between fixed cameras and patrol robots. This induces a significant surge in demand for data, computing resources, and networking infrastructure. In this work, we develop a scalable architecture to optimize image processing efficacy and response rate for visual tasks. In this architecture, the lightweight pre-processing and object detection functions are deployed on the gateway side to minimize bandwidth consumption. Cloud-side servers receive only the recognized data rather than entire image or video streams to identify a specific suspect. The cloud side then sends the information to the robot, and the robot completes the corresponding tracking task. All these functions are implemented and orchestrated in a micro-service architecture to improve flexibility. We implement a prototype system, called Rinegan, and evaluate it in an in-lab testing environment. The results show that Rinegan is able to improve the effectiveness and efficacy of image processing.

RevDate: 2021-09-08

Shan B, Pu Y, Chen B, et al (2021)

New Technologies' Commercialization: The Roles of the Leader's Emotion and Incubation Support.

Frontiers in psychology, 12:710122.

New technologies, such as brain-computer interfaces technology, advanced artificial intelligence, cloud computing, and virtual reality technology, have a strong influence on our daily activities. The application and commercialization of these technologies are prevailing globally, such as distance education, health monitoring, smart home devices, and robots. However, we still know little about the roles of individual emotion and the external environment on the commercialization of these new technologies. Therefore, we focus on the emotional factor of the leader, which is their passion for work, and discuss its effect on technology commercialization. We also analyzed the moderating role of incubation support in the relationship between the leader's emotion and technology commercialization. The results contribute to the application of emotion in improving the commercialization of new technologies.

RevDate: 2021-09-24

Wang C, Qin J, Qu C, et al (2021)

A smart municipal waste management system based on deep-learning and Internet of Things.

Waste management (New York, N.Y.), 135:20-29 pii:S0956-053X(21)00462-1 [Epub ahead of print].

A proof-of-concept municipal waste management system was proposed to reduce the cost of waste classification, monitoring and collection. In this system, we utilize a deep learning-based classifier and cloud computing techniques to realize high-accuracy waste classification at the beginning of garbage collection. To facilitate the subsequent waste disposal, we subdivide recyclable waste into plastic, glass, paper or cardboard, metal, fabric and other recyclable waste, a total of six categories. Deep-learning convolutional neural networks (CNN) were applied to realize the garbage classification task. Here, we investigate seven state-of-the-art CNNs and data pre-processing methods for waste classification, whose accuracies across nine categories range from 91.9 to 94.6% on the validation set. Among these networks, MobileNetV3 has a high classification accuracy (94.26%), a small storage size (49.5 MB) and the shortest running time (261.7 ms). Moreover, Internet of Things (IoT) devices, which implement information exchange between waste containers and the waste management center, are designed to monitor the overall amount of waste produced in the area and the operating state of any waste container via a set of sensors. According to the monitoring information, the waste management center can schedule adaptive equipment deployment and maintenance, waste collection and vehicle routing plans, which serves as an essential part of a successful municipal waste management system.

RevDate: 2021-09-10

Bellal Z, Nour B, S Mastorakis (2021)

CoxNet: A Computation Reuse Architecture at the Edge.

IEEE transactions on green communications and networking, 5(2):765-777.

In recent years, edge computing has emerged as an effective solution to extend cloud computing and satisfy the demand of applications for low latency. However, with today's explosion of innovative applications (e.g., augmented reality, natural language processing, virtual reality), processing services for mobile and smart devices have become computation-intensive, consisting of multiple interconnected computations. This coupled with the need for delay-sensitivity and high quality of service put massive pressure on edge servers. Meanwhile, tasks invoking these services may involve similar inputs that could lead to the same output. In this paper, we present CoxNet, an efficient computation reuse architecture for edge computing. CoxNet enables edge servers to reuse previous computations while scheduling dependent incoming computations. We provide an analytical model for computation reuse joined with dependent task offloading and design a novel computing offloading scheduling scheme. We also evaluate the efficiency and effectiveness of CoxNet via synthetic and real-world datasets. Our results show that CoxNet is able to reduce the task execution time up to 66% based on a synthetic dataset and up to 50% based on a real-world dataset.
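
A toy sketch of the computation-reuse idea behind CoxNet: an edge server caches results keyed by a hash of the task input so that identical inputs skip re-execution. The service function, inputs, and timings are illustrative assumptions, not the paper's implementation.

```python
# Toy computation-reuse cache at an edge server: results are keyed by a hash of
# the task input, so repeated or identical inputs are served from the cache.
import hashlib
import time

cache = {}

def input_key(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def expensive_service(payload: bytes) -> str:
    time.sleep(0.5)                       # stand-in for a heavy inference task
    return f"result-for-{len(payload)}-bytes"

def handle_task(payload: bytes) -> str:
    key = input_key(payload)
    if key in cache:                      # reuse a previously computed result
        return cache[key]
    result = expensive_service(payload)
    cache[key] = result
    return result

start = time.time()
handle_task(b"frame-0001")                # computed (~0.5 s)
handle_task(b"frame-0001")                # reused (near-instant)
print(f"two calls took {time.time() - start:.2f} s")
```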

RevDate: 2021-08-31

Peechara RR, S V (2021)

A chaos theory inspired, asynchronous two-way encryption mechanism for cloud computing.

PeerJ. Computer science, 7:e628.

Data exchange over the Internet and other access channels is on the rise, which leads to concerns about its security. Many experiments have been conducted to investigate time-efficient and highly randomized encryption methods for the data. The latest studies, however, are still debated because of different factors: their outcomes do not yield completely random keys for encryption methods beyond a certain length, and prominent repetition makes the processes predictable and susceptible to attacks. Furthermore, recently generated keys need recent algorithms to run successfully at high volumes of transactional data. In this article, proposed solutions to these two critical issues are presented. First, using a chaotic series of events for generating keys is sufficient to obtain a high degree of randomness. Moreover, this work also proposes a novel and non-traditional validation test to determine the true randomness of the keys produced by a correlation algorithm. An approximately 100% probability of the vital phase over almost infinitely long time intervals minimizes the algorithms' complexity for securing higher volumes of data. These algorithms are mainly intended for cloud-based transactions, where data volume is potentially higher and extremely variable; a 3% to 4% improvement in data transmission time is obtained with the suggested algorithms. This research has the potential to improve communication systems over the next ten years by unblocking decades-long bottlenecks.
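
To illustrate the general idea of chaos-based key generation (not the authors' specific algorithm), the sketch below uses a logistic map in its chaotic regime to derive a keystream that is XORed with the payload; the map parameters stand in for the shared secret.

```python
# Illustrative chaos-based keystream cipher: a logistic map iterated in its chaotic
# regime generates bytes that are XORed with the plaintext. Parameters are examples
# standing in for a shared secret; this is not the paper's algorithm.
def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)             # logistic map iteration
        out.append(int(x * 256) % 256)    # quantize the chaotic state to a byte
    return bytes(out)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))

secret = (0.618033, 3.9999)               # (x0, r) chosen in the chaotic regime
message = b"cloud transaction payload"

stream = logistic_keystream(*secret, len(message))
ciphertext = xor_cipher(message, stream)
plaintext = xor_cipher(ciphertext, stream)  # XOR with the same stream decrypts
assert plaintext == message
```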

RevDate: 2021-08-31

Duan L, L Da Xu (2021)

Data Analytics in Industry 4.0: A Survey.

Information systems frontiers : a journal of research and innovation [Epub ahead of print].

Industry 4.0 is the fourth industrial revolution, aiming at decentralized production through shared facilities to achieve on-demand manufacturing and resource efficiency. It evolves from Industry 3.0, which focuses on routine operation. Data analytics is the set of techniques focused on gaining actionable insights for making smart decisions from massive amounts of data. As the performance of routine operation can be improved by smart decisions, and smart decisions need the support of routine operation to collect relevant data, there is an increasing amount of research effort at the intersection of Industry 4.0 and data analytics. To better understand current research efforts, hot topics, and trending topics at this critical intersection, the basic concepts of Industry 4.0 and data analytics are introduced first. Then the intersection between them is decomposed into three components: industry sectors, cyber-physical systems, and analytic methods. Joint research efforts on different intersections with different components are studied and discussed. Finally, a systematic literature review on the interaction between Industry 4.0 and data analytics is conducted to understand the existing research focus and trends.

RevDate: 2021-08-30

Long E, Chen J, Wu X, et al (2020)

Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing.

NPJ digital medicine, 3(1):112.

A challenge of chronic diseases that remains to be solved is how to liberate patients and medical resources from the burdens of long-term monitoring and periodic visits. Precise management based on artificial intelligence (AI) holds great promise; however, a clinical application that fully integrates prediction and telehealth computing has not been achieved, and further efforts are required to validate its real-world benefits. Taking congenital cataract as a representative, we used Bayesian and deep-learning algorithms to create CC-Guardian, an AI agent that incorporates individualized prediction and scheduling, and intelligent telehealth follow-up computing. Our agent exhibits high sensitivity and specificity in both internal and multi-resource validation. We integrate our agent with a web-based smartphone app and prototype a prediction-telehealth cloud platform to support our intelligent follow-up system. We then conduct a retrospective self-controlled test validating that our system not only accurately detects and addresses complications at earlier stages, but also reduces the socioeconomic burdens compared to conventional methods. This study represents a pioneering step in applying AI to achieve real medical benefits and demonstrates a novel strategy for the effective management of chronic diseases.

RevDate: 2021-08-31
CmpDate: 2021-08-31

Rodero C, Olmedo E, Bardaji R, et al (2021)

New Radiometric Approaches to Compute Underwater Irradiances: Potential Applications for High-Resolution and Citizen Science-Based Water Quality Monitoring Programs.

Sensors (Basel, Switzerland), 21(16):.

Measuring the diffuse attenuation coefficient (Kd) allows for monitoring the water body's environmental status. This parameter is of particular interest in water quality monitoring programs because it quantifies the presence of light and the euphotic zone's depth. Citizen scientists can meaningfully contribute by monitoring water quality, complementing traditional methods by reducing monitoring costs and significantly improving data coverage, empowering and supporting decision-making. However, the quality of in situ underwater irradiance measurements has some limitations, especially in areas where stratification phenomena occur in the first meters of depth. This vertical layering introduces a gradient of properties in the vertical direction, affecting the associated Kd. Detecting and characterizing these variations of Kd in the water column requires a system of optical sensors placed, ideally, within a few centimeters of each other to improve the otherwise low vertical accuracy; at such spacing, however, self-shading of the instrumentation becomes critical. Here, we introduce a new concept that aims to improve the vertical accuracy of irradiance measurements: the underwater annular irradiance (Ea), which consists of measuring the irradiance in an annular-shaped distribution. We first compute the optimal annular angle that avoids self-shading and maximizes the light captured by the sensors. Second, we use different scenarios of water types, solar zenith angle, and cloud coverage to assess the robustness of the corresponding diffuse attenuation coefficient, Ka. Finally, we derive empirical functions for computing Kd from Ka. This new concept opens the possibility of a new generation of optical sensors in an annular-shaped distribution, which is expected to (a) increase the vertical resolution of the irradiance measurements and (b) be easy to deploy and maintain, and thus more suitable for citizen scientists.
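
For readers unfamiliar with the quantity, Kd is conventionally estimated from downwelling irradiance measured at two depths; the standard two-depth form (not the annular Ka definition introduced in the paper) is

    K_d \;=\; -\,\frac{1}{z_2 - z_1}\,\ln\!\left(\frac{E_d(z_2)}{E_d(z_1)}\right),

where E_d(z) is the downwelling irradiance at depth z and z_1 < z_2.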

RevDate: 2021-08-31

Lopez-Arevalo I, Gonzalez-Compean JL, Hinojosa-Tijerina M, et al (2021)

A WoT-Based Method for Creating Digital Sentinel Twins of IoT Devices.

Sensors (Basel, Switzerland), 21(16):.

The data produced by sensors of IoT devices are becoming keystones for organizations to conduct critical decision-making processes. However, delivering information to these processes in real time presents two challenges for organizations: the first is achieving a constant dataflow from IoT to the cloud, and the second is enabling decision-making processes to retrieve data from dataflows in real time. This paper presents a cloud-based Web of Things method for creating digital twins of IoT devices (named sentinels). The novelty of the proposed approach is that sentinels create an abstract window for decision-making processes to: (a) find data (e.g., properties, events, and data from sensors of IoT devices) or (b) invoke functions (e.g., actions and tasks) from physical devices (PD), as well as from virtual devices (VD). In this approach, the applications and services of decision-making processes deal with sentinels instead of managing complex details associated with the PDs, VDs, and cloud computing infrastructures. A prototype based on the proposed method was implemented to conduct a case study based on a blockchain system for verifying contract violations in sensors used in product transportation logistics. The evaluation showed the effectiveness of sentinels in enabling organizations to obtain data from IoT sensors and the dataflows used by decision-making processes to convert these data into useful information.

RevDate: 2021-08-31
CmpDate: 2021-08-31

Schackart KE, JY Yoon (2021)

Machine Learning Enhances the Performance of Bioreceptor-Free Biosensors.

Sensors (Basel, Switzerland), 21(16):.

Since their inception, biosensors have frequently employed simple regression models to calculate analyte composition based on the biosensor's signal magnitude. Traditionally, bioreceptors provide excellent sensitivity and specificity to the biosensor. Increasingly, however, bioreceptor-free biosensors have been developed for a wide range of applications. Without a bioreceptor, maintaining strong specificity and a low limit of detection have become the major challenge. Machine learning (ML) has been introduced to improve the performance of these biosensors, effectively replacing the bioreceptor with modeling to gain specificity. Here, we present how ML has been used to enhance the performance of these bioreceptor-free biosensors. Particularly, we discuss how ML has been used for imaging, Enose and Etongue, and surface-enhanced Raman spectroscopy (SERS) biosensors. Notably, principal component analysis (PCA) combined with support vector machine (SVM) and various artificial neural network (ANN) algorithms have shown outstanding performance in a variety of tasks. We anticipate that ML will continue to improve the performance of bioreceptor-free biosensors, especially with the prospects of sharing trained models and cloud computing for mobile computation. To facilitate this, the biosensing community would benefit from increased contributions to open-access data repositories for biosensor data.
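
As an orientation to the PCA-plus-SVM pattern the review highlights, the following minimal scikit-learn sketch trains such a pipeline on synthetic sensor responses; the data shape and hyperparameters are illustrative assumptions, not values from any study in the review.

    # Minimal PCA + SVM sketch on synthetic "sensor response" data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))      # 200 readings, 64 spectral/feature channels (assumed)
    y = rng.integers(0, 2, size=200)    # two analyte classes (assumed)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))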

RevDate: 2021-08-30

Gupta D, Rani S, Ahmed SH, et al (2021)

Edge Caching Based on Collaborative Filtering for Heterogeneous ICN-IoT Applications.

Sensors (Basel, Switzerland), 21(16):.

The substantial advancements offered by edge computing have indicated serious evolutionary improvements for Internet of Things (IoT) technology. The rigid design philosophy of the traditional network architecture limits its scope to meet future demands. Information-centric networking (ICN), however, is envisioned as a promising architecture to bridge these gaps and maintain IoT networks, commonly referred to as ICN-IoT. The edge-enabled ICN-IoT architecture demands efficient in-network caching techniques to support a better user quality of experience (QoE). In this paper, we propose an enhanced ICN-IoT content caching strategy by enabling artificial intelligence (AI)-based collaborative filtering within the edge cloud to support heterogeneous IoT architectures. This collaborative filtering-based content caching strategy intelligently caches content on edge nodes for traffic management at cloud databases. Evaluations have been conducted to compare the performance of the proposed strategy against various benchmark strategies, such as LCE, LCD, CL4M, and ProbCache. The analytical results demonstrate the better performance of our proposed strategy, with an average gain of 15% in cache hit ratio, a 12% reduction in content retrieval delay, and a 28% reduction in average hop count compared with the best-performing baseline, LCD. We believe that the proposed strategy will contribute an effective solution to related studies in this domain.

RevDate: 2021-08-31
CmpDate: 2021-08-31

Wang Q, H Mu (2021)

Privacy-Preserving and Lightweight Selective Aggregation with Fault-Tolerance for Edge Computing-Enhanced IoT.

Sensors (Basel, Switzerland), 21(16):.

Edge computing has been introduced to the Internet of Things (IoT) to meet the requirements of IoT applications. At the same time, data aggregation is widely used in data processing to reduce the communication overhead and energy consumption in IoT. Most existing schemes aggregate the overall data without filtering. In addition, aggregation schemes also face huge challenges, such as the privacy of the individual IoT device's data or the fault-tolerant and lightweight requirements of the schemes. In this paper, we present a privacy-preserving and lightweight selective aggregation scheme with fault tolerance (PLSA-FT) for edge computing-enhanced IoT. In PLSA-FT, selective aggregation can be achieved by constructing Boolean responses and numerical responses according to specific query conditions of the cloud center. Furthermore, we modified the basic Paillier homomorphic encryption to guarantee data privacy and support fault tolerance of IoT devices' malfunctions. An online/offline signature mechanism is utilized to reduce computation costs. The system characteristic analyses prove that the PLSA-FT scheme achieves confidentiality, privacy preservation, source authentication, integrity verification, fault tolerance, and dynamic membership management. Moreover, performance evaluation results show that PLSA-FT is lightweight with low computation costs and communication overheads.
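
To illustrate the additively homomorphic aggregation that underlies schemes of this kind, here is a minimal sketch using the python-paillier (phe) package; it shows plain Paillier aggregation only and does not reproduce the paper's modified scheme, selective responses, or fault-tolerance mechanism.

    # Minimal sketch of additively homomorphic aggregation with Paillier encryption,
    # using the python-paillier (phe) package. Illustrative only; not PLSA-FT itself.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    device_readings = [3, 7, 2, 5]                       # plaintext values from IoT devices
    ciphertexts = [public_key.encrypt(v) for v in device_readings]

    # The aggregator sums ciphertexts without learning individual readings.
    encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

    print("aggregate:", private_key.decrypt(encrypted_sum))  # -> 17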

RevDate: 2021-08-31
CmpDate: 2021-08-31

Liu Y, Ni Z, Karlsson M, et al (2021)

Methodology for Digital Transformation with Internet of Things and Cloud Computing: A Practical Guideline for Innovation in Small- and Medium-Sized Enterprises.

Sensors (Basel, Switzerland), 21(16):.

Research on the Internet of Things (IoT) and cloud computing has been pervasive in both the academic and industrial worlds. IoT and cloud computing are seen as cornerstones of digital transformation in industry. However, restricted by limited resources and a lack of expertise in information and communication technologies, small- and medium-sized enterprises (SMEs) have difficulty achieving digitalization of their business. In this paper, we propose a reference framework for SMEs to follow as a guideline in the journey of digital transformation. The framework features a three-stage procedure that covers business, technology, and innovation, which can be iterated to drive product and business development. A case study of digital transformation taking place in the vertical plant wall industry is detailed. Furthermore, some solution design principles concluded from real industrial practice are presented. This paper reviews digital transformation practice in the vertical plant wall industry and aims to accelerate the pace of SMEs in the journey of digital transformation.

RevDate: 2021-08-31
CmpDate: 2021-08-31

Pérez-Pons ME, Alonso RS, García O, et al (2021)

Deep Q-Learning and Preference Based Multi-Agent System for Sustainable Agricultural Market.

Sensors (Basel, Switzerland), 21(16):.

Yearly population growth will lead to a significant increase in agricultural production in the coming years. Twenty-first century agricultural producers will be facing the challenge of achieving food security and efficiency. This must be achieved while ensuring sustainable agricultural systems and overcoming the problems posed by climate change, depletion of water resources, and the potential for increased erosion and loss of productivity due to extreme weather conditions. Those environmental consequences will directly affect the price setting process. In view of the price oscillations and the lack of transparent information for buyers, a multi-agent system (MAS) is presented in this article. It supports the making of decisions in the purchase of sustainable agricultural products. The proposed MAS consists of a system that supports decision-making when choosing a supplier on the basis of certain preference-based parameters aimed at measuring the sustainability of a supplier and a deep Q-learning agent for agricultural future market price forecast. Therefore, different agri-environmental indicators (AEIs) have been considered, as well as the use of edge computing technologies to reduce costs of data transfer to the cloud. The presented MAS combines price setting optimizations and user preferences in regards to accessing, filtering, and integrating information. The agents filter and fuse information relevant to a user according to supplier attributes and a dynamic environment. The results presented in this paper allow a user to choose the supplier that best suits their preferences as well as to gain insight on agricultural future markets price oscillations through a deep Q-learning agent.

RevDate: 2021-08-31
CmpDate: 2021-08-31

Ni Z, Liu Y, Karlsson M, et al (2021)

A Sensing System Based on Public Cloud to Monitor Indoor Environment of Historic Buildings.

Sensors (Basel, Switzerland), 21(16):.

Monitoring the indoor environment of historic buildings helps to identify potential risks, provide guidelines for improving regular maintenance, and preserve cultural artifacts. However, most of the existing monitoring systems proposed for historic buildings are not for general digitization purposes that provide data for smart services employing, e.g., artificial intelligence with machine learning. In addition, considering that preserving historic buildings is a long-term process that demands preventive maintenance, a monitoring system requires stable and scalable storage and computing resources. In this paper, a digitalization framework is proposed for smart preservation of historic buildings. A sensing system following the architecture of this framework is implemented by integrating various advanced digitalization techniques, such as Internet of Things, Edge computing, and Cloud computing. The sensing system realizes remote data collection, enables viewing real-time and historical data, and provides the capability for performing real-time analysis to achieve preventive maintenance of historic buildings in future research. Field testing results show that the implemented sensing system has a 2% end-to-end loss rate for collecting data samples and the loss rate can be decreased to 0.3%. The low loss rate indicates that the proposed sensing system has high stability and meets the requirements for long-term monitoring of historic buildings.

RevDate: 2021-09-13
CmpDate: 2021-09-13

Bussola N, Papa B, Melaiu O, et al (2021)

Quantification of the Immune Content in Neuroblastoma: Deep Learning and Topological Data Analysis in Digital Pathology.

International journal of molecular sciences, 22(16):.

We introduce here a novel machine learning (ML) framework to address the issue of the quantitative assessment of the immune content in neuroblastoma (NB) specimens. First, the EUNet, a U-Net with an EfficientNet encoder, is trained to detect lymphocytes on tissue digital slides stained with the CD3 T-cell marker. The training set consists of 3782 images extracted from an original collection of 54 whole slide images (WSIs), manually annotated for a total of 73,751 lymphocytes. Resampling strategies, data augmentation, and transfer learning approaches are adopted to warrant reproducibility and to reduce the risk of overfitting and selection bias. Topological data analysis (TDA) is then used to define activation maps from different layers of the neural network at different stages of the training process, described by persistence diagrams (PD) and Betti curves. TDA is further integrated with the uniform manifold approximation and projection (UMAP) dimensionality reduction and the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm for clustering, by the deep features, the relevant subgroups and structures, across different levels of the neural network. Finally, the recent TwoNN approach is leveraged to study the variation of the intrinsic dimensionality of the U-Net model. As the main task, the proposed pipeline is employed to evaluate the density of lymphocytes over the whole tissue area of the WSIs. The model achieves good results with mean absolute error 3.1 on test set, showing significant agreement between densities estimated by our EUNet model and by trained pathologists, thus indicating the potentialities of a promising new strategy in the quantification of the immune content in NB specimens. Moreover, the UMAP algorithm unveiled interesting patterns compatible with pathological characteristics, also highlighting novel insights into the dynamics of the intrinsic dataset dimensionality at different stages of the training process. All the experiments were run on the Microsoft Azure cloud platform.
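
As a pointer to the UMAP-plus-HDBSCAN step mentioned above, the following minimal sketch clusters arbitrary feature vectors with the umap-learn and hdbscan packages; the data and parameters are placeholders, not the paper's settings.

    # Minimal UMAP + HDBSCAN sketch on stand-in deep features.
    # Requires the `umap-learn` and `hdbscan` packages.
    import numpy as np
    import umap
    import hdbscan

    rng = np.random.default_rng(0)
    deep_features = rng.normal(size=(500, 128))   # stand-in for CNN activations

    embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(deep_features)
    labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(embedding)

    print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))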

RevDate: 2021-09-03
CmpDate: 2021-09-03

Cai X, D Xu (2021)

Application of Edge Computing Technology in Hydrological Spatial Analysis and Ecological Planning.

International journal of environmental research and public health, 18(16):.

The process of rapid urbanization causes many water security issues, such as urban waterlogging, environmental water pollution, and water shortages. It is therefore necessary to integrate a variety of theories, methods, measures, and means to conduct ecological problem diagnosis, ecological function demand assessment, and ecological security pattern planning. Here, EC (Edge Computing) technology is applied to analyze the hydrological spatial structure characteristics and ecological planning methods of waterfront green space. First, various information is collected and scientifically analyzed around the core element of ecological planning: water. Then, in-depth research is conducted on previous hydrological spatial analysis methods to identify their defects. Subsequently, given these defects, EC technology is introduced to design a bottom-up overall architecture of an intelligent ecological planning gateway, which can be divided into field devices, the EC intelligent planning gateway, a transmission system, and a cloud processing platform. Finally, the performance of the overall architecture of the intelligent ecological planning gateway is tested. The study aims to optimize the performance of the hydrological spatial analysis method and ecological planning method in Xianglan town of Jiamusi city. The results show that the system can support flood-control safety planning and the analysis of water-source pollution. In addition, drawing on EC technology and on pollutant types and hydrological characteristics, the system can predict the composition and dosage of treatment agents needed for sludge and pollutant treatment, helping to protect the public health of residents near the water source. Compared with previous hydrological spatial analysis and ecological planning methods, the system is more scientific, efficient, and expandable. The results provide a technical basis for research in related fields.

RevDate: 2021-08-30

Spangler HD, Simancas-Pallares MA, Ginnis J, et al (2021)

A Web-Based Rendering Application for Communicating Dental Conditions.

Healthcare (Basel, Switzerland), 9(8):.

The importance of visual aids in communicating clinical examination findings or proposed treatments in dentistry cannot be overstated. Similarly, communicating dental research results with tooth surface-level precision is impractical without visual representations. Here, we present the development, deployment, and two real-life applications of a web-based data visualization informatics pipeline that converts tooth surface-level information to colorized, three-dimensional renderings. The core of the informatics pipeline focuses on texture (UV) mapping of a pre-existing model of the human primary dentition. The 88 individually segmented tooth surfaces receive independent inputs that are represented in colors and textures according to customizable user specifications. The web implementation SculptorHD, deployed on the Google Cloud Platform, can accommodate manually entered or spreadsheet-formatted tooth surface data and allows the customization of color palettes and thresholds, as well as surface textures (e.g., condition-free, caries lesions, stainless steel, or ceramic crowns). Its current implementation enabled the visualization and interpretation of clinical early childhood caries (ECC) subtypes using latent class analysis-derived caries experience summary data. As a demonstration of its potential clinical utility, the tool was also used to simulate the restorative treatment presentation of a severe ECC case, including the use of stainless steel and ceramic crowns. We expect that this publicly available web-based tool can aid clinicians and investigators deliver precise, visual presentations of dental conditions and proposed treatments. The creation of rapidly adjustable lifelike dental models, integrated to existing electronic health records and responsive to new clinical findings or planned for future work, is likely to boost two-way communication between clinicians and their patients.

RevDate: 2021-08-26

Lacey JV, JL Benbow (2021)

Standards, Inputs, and Outputs: Strategies for improving data-sharing and consortia-based epidemiologic research.

American journal of epidemiology pii:6357823 [Epub ahead of print].

Data sharing improves epidemiology research, but sharing data frustrates epidemiologic researchers. The inefficiencies of current methods and options for data-sharing are increasingly documented and easily understood by any study that has shared its data and any researcher who has received shared data. Temprosa and Moore et al. (Am J Epidemiol. XXXX;XXX(XX):XXXX-XXXX)) describe how the COnsortium of METabolomics Studies (COMETS) developed and deployed a flexible analytic platform to eliminate key pain points in large-scale metabolomics research. COMETS Analytics includes an online tool, but its cloud computing and technology are supporting, rather than the lead, actors in this script. The COMETS team identified the need to standardize diverse and inconsistent metabolomics and covariate data and models across its many participating cohort studies, and then they developed a flexible tool that gave its member studies choices about how they wanted to meet the consortium's analytic requirements. Different specialties will have different specific research needs and will likely continue to use and develop an array of diverse analytic and technical solutions for their projects. COMETS Analytics shows how important and enabling the upstream attention to data standards and data consistency are to producing high-quality metabolomics, consortium-based, and large-scale epidemiology research.

RevDate: 2021-08-27

Edu AS, Agoyi M, D Agozie (2021)

Digital security vulnerabilities and threats implications for financial institutions deploying digital technology platforms and application: FMEA and FTOPSIS analysis.

PeerJ. Computer science, 7:e658.

Digital disruptions have led to the integration of applications, platforms, and infrastructure. They assist in business operations, promoting open digital collaborations, and perhaps even the integration of the Internet of Things (IoTs), Big Data Analytics, and Cloud Computing to support data sourcing, data analytics, and storage synchronously on a single platform. Notwithstanding the benefits derived from digital technology integration (including IoTs, Big Data Analytics, and Cloud Computing), digital vulnerabilities and threats have become a more significant concern for users. We addressed these challenges from an information systems perspective and have noted that more research is needed identifying potential vulnerabilities and threats affecting the integration of IoTs, BDA and CC for data management. We conducted a step-by-step analysis of the potential vulnerabilities and threats affecting the integration of IoTs, Big Data Analytics, and Cloud Computing for data management. We combined multi-dimensional analysis, Failure Mode Effect Analysis, and Fuzzy Technique for Order of Preference by Similarity for Ideal Solution to evaluate and rank the potential vulnerabilities and threats. We surveyed 234 security experts from the banking industry with adequate knowledge in IoTs, Big Data Analytics, and Cloud Computing. Based on the closeness of the coefficients, we determined that insufficient use of backup electric generators, firewall protection failures, and no information security audits are high-ranking vulnerabilities and threats affecting integration. This study is an extension of discussions on the integration of digital applications and platforms for data management and the pervasive vulnerabilities and threats arising from that. A detailed review and classification of these threats and vulnerabilities are vital for sustaining businesses' digital integration.

RevDate: 2021-08-28

Mohd Romlay MR, Mohd Ibrahim A, Toha SF, et al (2021)

Novel CE-CBCE feature extraction method for object classification using a low-density LiDAR point cloud.

PloS one, 16(8):e0256665.

Low-end LiDAR sensors provide an alternative for depth measurement and object recognition on lightweight devices. However, due to their low computing capacity, complicated algorithms cannot be run on the device, and the sparse information further limits the features available for extraction. Therefore, a classification method is required that can accept sparse input while providing ample leverage for the classification process to accurately differentiate objects within limited computing capability. To achieve reliable feature extraction from a sparse LiDAR point cloud, this paper proposes a novel Clustered Extraction and Centroid-Based Clustered Extraction (CE-CBCE) method for feature extraction, followed by a convolutional neural network (CNN) object classifier. The integration of the CE-CBCE and CNN methods enables us to utilize lightweight actuated LiDAR input and provides a low-computation means of classification while maintaining accurate detection. Based on genuine LiDAR data, the final result shows a reliable accuracy of 97% for the proposed method.

RevDate: 2021-08-26

Zhao J, Yu L, Liu H, et al (2021)

Towards an open and synergistic framework for mapping global land cover.

PeerJ, 9:e11877.

Global land-cover datasets are key sources of information for understanding the complex interactions between human activities and global change. They are also among the most critical variables for climate change studies. Over time, the spatial resolution of land cover maps has increased from the kilometer scale to the 10-m scale. Single-type historical land cover datasets, including for forests, water, and impervious surfaces, have also been developed in recent years. In this study, we present an open and synergistic framework to produce a global land cover dataset that combines supervised land cover classification and aggregation of existing multiple thematic land cover maps with the Google Earth Engine (GEE) cloud computing platform. On the basis of this method of classification and mosaicking, we derived a global land cover dataset for 6 years over a time span of 25 years. The overall accuracies of the six maps were around 75% and the accuracy for change area detection was over 70%. Our product also showed good similarity with the FAO and existing land cover maps.
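
For orientation, a supervised land-cover classification on the Google Earth Engine Python API typically looks like the minimal sketch below; the image collection, band names, label property, and training-asset path are placeholders/assumptions rather than the workflow used in the study.

    # Minimal GEE sketch: build a yearly composite and classify it with a random forest.
    # Asset IDs, bands, and the labelled training collection are illustrative assumptions.
    import ee

    ee.Initialize()

    composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                 .filterDate("2020-01-01", "2020-12-31")
                 .median())

    bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
    training_points = ee.FeatureCollection("users/example/training_points")  # hypothetical asset

    samples = composite.select(bands).sampleRegions(
        collection=training_points, properties=["landcover"], scale=30)

    classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
        features=samples, classProperty="landcover", inputProperties=bands)

    classified = composite.select(bands).classify(classifier)
    print(classified.getInfo()["bands"])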

RevDate: 2021-08-27
CmpDate: 2021-08-25

Reddy S, Hung LH, Sala-Torra O, et al (2021)

A graphical, interactive and GPU-enabled workflow to process long-read sequencing data.

BMC genomics, 22(1):626.

BACKGROUND: Long-read sequencing has great promise in enabling portable, rapid molecular-assisted cancer diagnoses. A key challenge in democratizing long-read sequencing technology in the biomedical and clinical community is the lack of graphical bioinformatics software tools which can efficiently process the raw nanopore reads, support graphical output and interactive visualizations for interpretations of results. Another obstacle is that high performance software tools for long-read sequencing data analyses often leverage graphics processing units (GPU), which is challenging and time-consuming to configure, especially on the cloud.

RESULTS: We present a graphical cloud-enabled workflow for fast, interactive analysis of nanopore sequencing data using GPUs. Users customize parameters, monitor execution and visualize results through an accessible graphical interface. The workflow and its components are completely containerized to ensure reproducibility and facilitate installation of the GPU-enabled software. We also provide an Amazon Machine Image (AMI) with all software and drivers pre-installed for GPU computing on the cloud. Most importantly, we demonstrate the potential of applying our software tools to reduce the turnaround time of cancer diagnostics by generating blood cancer (NB4, K562, ME1, 238 MV4;11) cell line Nanopore data using the Flongle adapter. We observe a 29x speedup and a 93x reduction in costs for the rate-limiting basecalling step in the analysis of blood cancer cell line data.

CONCLUSIONS: Our interactive and efficient software tools will make analyses of Nanopore data using GPU and cloud computing accessible to biomedical and clinical scientists, thus facilitating the adoption of cost effective, fast, portable and real-time long-read sequencing.

RevDate: 2021-08-26

Zhao Y, Sazlina SG, Rokhani FZ, et al (2021)

The expectations and acceptability of a smart nursing home model among Chinese elderly people: A mixed methods study protocol.

PloS one, 16(8):e0255865.

Nursing homes integrated with smart information such as the Internet of Things, cloud computing, artificial intelligence, and digital health could improve not only the quality of care but also benefit the residents and health professionals by providing effective care and efficient medical services. However, a clear concept of a smart nursing home, the expectations and acceptability from the perspectives of the elderly people and their family members are still unclear. In addition, instruments to measure the expectations and acceptability of a smart nursing home are also lacking. The study aims to explore and determine the levels of these expectations, acceptability and the associated sociodemographic factors. This exploratory sequential mixed methods study comprises a qualitative study which will be conducted through a semi-structured interview to explore the expectations and acceptability of a smart nursing home among Chinese elderly people and their family members (Phase I). Next, a questionnaire will be developed and validated based on the results of a qualitative study in Phase I and a preceding scoping review on smart nursing homes by the same authors (Phase II). Lastly, a nationwide survey will be carried out to examine the levels of expectations and acceptability, and the associated sociodemographic factors with the different categories of expectations and acceptability (Phase III). With a better understanding of the Chinese elderly people's expectations and acceptability of smart technologies in nursing homes, a feasible smart nursing home model that incorporates appropriate technologies, integrates needed medical services and business concepts could be formulated and tested as a solution for the rapidly ageing societies in many developed and developing countries.

RevDate: 2021-09-22

Tahmasebi A, Qu E, Sevrukov A, et al (2021)

Assessment of Axillary Lymph Nodes for Metastasis on Ultrasound Using Artificial Intelligence.

Ultrasonic imaging, 43(6):329-336.

The purpose of this study was to evaluate an artificial intelligence (AI) system for the classification of axillary lymph nodes on ultrasound compared to radiologists. Ultrasound images of 317 axillary lymph nodes from patients referred for ultrasound guided fine needle aspiration or core needle biopsy and corresponding pathology findings were collected. Lymph nodes were classified into benign and malignant groups with histopathological result serving as the reference. Google Cloud AutoML Vision (Mountain View, CA) was used for AI image classification. Three experienced radiologists also classified the images and gave a level of suspicion score (1-5). To test the accuracy of AI, an external testing dataset of 64 images from 64 independent patients was evaluated by three AI models and the three readers. The diagnostic performance of AI and the humans were then quantified using receiver operating characteristics curves. In the complete set of 317 images, AutoML achieved a sensitivity of 77.1%, positive predictive value (PPV) of 77.1%, and an area under the precision recall curve of 0.78, while the three radiologists showed a sensitivity of 87.8% ± 8.5%, specificity of 50.3% ± 16.4%, PPV of 61.1% ± 5.4%, negative predictive value (NPV) of 84.1% ± 6.6%, and accuracy of 67.7% ± 5.7%. In the three external independent test sets, AI and human readers achieved sensitivity of 74.0% ± 0.14% versus 89.9% ± 0.06% (p = .25), specificity of 64.4% ± 0.11% versus 50.1 ± 0.20% (p = .22), PPV of 68.3% ± 0.04% versus 65.4 ± 0.07% (p = .50), NPV of 72.6% ± 0.11% versus 82.1% ± 0.08% (p = .33), and accuracy of 69.5% ± 0.06% versus 70.1% ± 0.07% (p = .90), respectively. These preliminary results indicate AI has comparable performance to trained radiologists and could be used to predict the presence of metastasis in ultrasound images of axillary lymph nodes.

RevDate: 2021-08-22

Khashan E, Eldesouky A, S Elghamrawy (2021)

An adaptive spark-based framework for querying large-scale NoSQL and relational databases.

PloS one, 16(8):e0255562.

The growing popularity of big data analysis and cloud computing has created new big data management standards. Programmers may have to interact with a number of heterogeneous data stores, both SQL and NoSQL, depending on the information they are responsible for. Interacting with heterogeneous data models via numerous APIs and query languages imposes challenging tasks on multi-data processing developers. Indeed, complex queries over homogeneous data structures cannot currently be performed in a declarative manner in single data storage applications and therefore require additional development effort. Many models have been presented to address complex queries via multistore applications; some of them implement a complex, unified, and fast model, while others are not efficient enough for this type of complex database query. This paper provides an automated, fast, and easy unified architecture to solve simple and complex SQL and NoSQL queries over heterogeneous data stores (CQNS). The proposed framework can be used in cloud environments or for any big data application to automatically help developers manage basic and complicated database queries. CQNS consists of three layers: a matching selector layer, a processing layer, and a query execution layer. The matching selector layer is the heart of this architecture, in which user queries are examined to see whether they match queries stored in a single engine within the architecture library; a proposed algorithm then directs each query to the right SQL or NoSQL database engine. Furthermore, CQNS deals with many NoSQL databases, such as MongoDB, Cassandra, Riak, CouchDB, and Neo4j. This paper presents a Spark framework that can handle both SQL and NoSQL databases. Four benchmark scenario datasets are used to evaluate the proposed CQNS for querying different NoSQL databases in terms of optimization process performance and query execution time. The results show that CQNS achieves the best latency and throughput among the compared systems.
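
To illustrate the general idea of querying relational and NoSQL stores from one Spark session, here is a minimal PySpark sketch; the connector package, connection strings, and table/collection names are assumptions, and this is not the CQNS implementation.

    # Minimal PySpark sketch: one session reading a JDBC (relational) source and a
    # MongoDB (NoSQL) source, then joining them declaratively. All endpoints are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("polyglot-query-sketch")
             .config("spark.jars.packages",
                     "org.mongodb.spark:mongo-spark-connector_2.12:10.1.1")
             .getOrCreate())

    orders = (spark.read.format("jdbc")
              .option("url", "jdbc:postgresql://db-host:5432/shop")      # hypothetical database
              .option("dbtable", "orders")
              .option("user", "reader").option("password", "secret")
              .load())

    customers = (spark.read.format("mongodb")
                 .option("connection.uri", "mongodb://mongo-host:27017")  # hypothetical URI
                 .option("database", "shop").option("collection", "customers")
                 .load())

    # Join across the two stores with a single declarative query.
    orders.join(customers, "customer_id").groupBy("country").count().show()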

RevDate: 2021-08-20

Miao Y, Hao Y, Chen M, et al (2021)

Intelligent Task Caching in Edge Cloud via Bandit Learning.

IEEE transactions on network science and engineering, 8(1):.

Task caching, based on the edge cloud, aims to meet the latency requirements of computation-intensive and data-intensive tasks (such as augmented reality). However, current task caching strategies are generally based on the unrealistic assumption of knowing the pattern of user task requests and ignore the fact that a task request pattern is user specific (e.g., mobility and personalized task demand). Moreover, they disregard the impact of task size and computing amount on the caching strategy. To investigate these issues, in this paper, we first formalize the task caching problem as a non-linear integer programming problem to minimize task latency. We then design a novel intelligent task caching algorithm based on a multi-armed bandit algorithm, called M-adaptive upper confidence bound (M-AUCB). The proposed caching strategy can not only learn the task patterns of mobile device requests online, but can also dynamically adjust the caching strategy to incorporate the size and computing amount of each task. Moreover, we prove that the M-AUCB algorithm achieves a sublinear regret bound. The results show that, compared with other task caching schemes, the M-AUCB algorithm reduces average task latency by at least 14.8%.
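
For context, M-AUCB builds on the upper-confidence-bound family of bandit algorithms. The sketch below shows plain UCB1, where each arm stands for a candidate task to cache and the reward is an assumed latency saving; it is not the paper's M-AUCB algorithm.

    # Plain UCB1 sketch: arms are candidate tasks to cache, rewards are assumed latency savings.
    import math
    import random

    def ucb1(n_arms: int, rounds: int, pull):
        counts = [0] * n_arms
        values = [0.0] * n_arms
        for t in range(1, rounds + 1):
            if t <= n_arms:                      # play each arm once first
                arm = t - 1
            else:
                arm = max(range(n_arms),
                          key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
            reward = pull(arm)
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]   # running mean
        return values, counts

    # Toy environment: arm i yields a Bernoulli latency-saving reward.
    probs = [0.2, 0.5, 0.8]
    values, counts = ucb1(3, 2000, lambda a: 1.0 if random.random() < probs[a] else 0.0)
    print(values, counts)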

RevDate: 2021-09-15

Fox CB, Israelsen-Augenstein M, Jones S, et al (2021)

An Evaluation of Expedited Transcription Methods for School-Age Children's Narrative Language: Automatic Speech Recognition and Real-Time Transcription.

Journal of speech, language, and hearing research : JSLHR, 64(9):3533-3548.

Purpose This study examined the accuracy and potential clinical utility of two expedited transcription methods for narrative language samples elicited from school-age children (7;5-11;10 [years;months]) with developmental language disorder. Transcription methods included real-time transcription produced by speech-language pathologists (SLPs) and trained transcribers (TTs) as well as Google Cloud Speech automatic speech recognition. Method The accuracy of each transcription method was evaluated against a gold-standard reference corpus. Clinical utility was examined by determining the reliability of scores calculated from the transcripts produced by each method on several language sample analysis (LSA) measures. Participants included seven certified SLPs and seven TTs. Each participant was asked to produce a set of six transcripts in real time, out of a total 42 language samples. The same 42 samples were transcribed using Google Cloud Speech. Transcription accuracy was evaluated through word error rate. Reliability of LSA scores was determined using correlation analysis. Results Results indicated that Google Cloud Speech was significantly more accurate than real-time transcription in transcribing narrative samples and was not impacted by speech rate of the narrator. In contrast, SLP and TT transcription accuracy decreased as a function of increasing speech rate. LSA metrics generated from Google Cloud Speech transcripts were also more reliably calculated. Conclusions Automatic speech recognition showed greater accuracy and clinical utility as an expedited transcription method than real-time transcription. Though there is room for improvement in the accuracy of speech recognition for the purpose of clinical transcription, it produced highly reliable scores on several commonly used LSA metrics. Supplemental Material https://doi.org/10.23641/asha.15167355.
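
Transcription accuracy in studies like this is reported as word error rate (WER), i.e., word-level edit distance divided by the length of the reference transcript. A minimal, self-contained Python sketch of the standard computation (not code from the study) follows.

    # Standard word error rate (WER) via word-level edit distance.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # Dynamic-programming edit distance over words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    print(word_error_rate("the boy went to the park", "the boy want to park"))  # 2 errors / 6 words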

RevDate: 2021-08-21

Edwards T, Jones CB, Perkins SE, et al (2021)

Passive citizen science: The role of social media in wildlife observations.

PloS one, 16(8):e0255416.

Citizen science plays an important role in observing the natural environment. While conventional citizen science consists of organized campaigns to observe a particular phenomenon or species there are also many ad hoc observations of the environment in social media. These data constitute a valuable resource for 'passive citizen science'-the use of social media that are unconnected to any particular citizen science program, but represent an untapped dataset of ecological value. We explore the value of passive citizen science, by evaluating species distributions using the photo sharing site Flickr. The data are evaluated relative to those submitted to the National Biodiversity Network (NBN) Atlas, the largest collection of species distribution data in the UK. Our study focuses on the 1500 best represented species on NBN, and common invasive species within UK, and compares the spatial and temporal distribution with NBN data. We also introduce an innovative image verification technique that uses the Google Cloud Vision API in combination with species taxonomic data to determine the likelihood that a mention of a species on Flickr represents a given species. The spatial and temporal analyses for our case studies suggest that the Flickr dataset best reflects the NBN dataset when considering a purely spatial distribution with no time constraints. The best represented species on Flickr in comparison to NBN are diurnal garden birds as around 70% of the Flickr posts for them are valid observations relative to the NBN. Passive citizen science could offer a rich source of observation data for certain taxonomic groups, and/or as a repository for dedicated projects. Our novel method of validating Flickr records is suited to verifying more extensive collections, including less well-known species, and when used in combination with citizen science projects could offer a platform for accurate identification of species and their location.
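
For readers who want to see what the image-verification step looks like in practice, below is a minimal sketch of label detection with the Google Cloud Vision API; the image path is a placeholder, credentials must already be configured, and the species-matching logic against taxonomic data is not reproduced.

    # Minimal Google Cloud Vision label-detection sketch (image path is hypothetical).
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("flickr_photo.jpg", "rb") as f:            # hypothetical local image
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")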

RevDate: 2021-08-23

Lv C, Lin W, B Zhao (2021)

Approximate Intrinsic Voxel Structure for Point Cloud Simplification.

IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 30:7241-7255.

A point cloud as an information-intensive 3D representation usually requires a large amount of transmission, storage and computing resources, which seriously hinder its usage in many emerging fields. In this paper, we propose a novel point cloud simplification method, Approximate Intrinsic Voxel Structure (AIVS), to meet the diverse demands in real-world application scenarios. The method includes point cloud pre-processing (denoising and down-sampling), AIVS-based realization for isotropic simplification and flexible simplification with intrinsic control of point distance. To demonstrate the effectiveness of the proposed AIVS-based method, we conducted extensive experiments by comparing it with several relevant point cloud simplification methods on three public datasets, including Stanford, SHREC, and RGB-D scene models. The experimental results indicate that AIVS has great advantages over peers in terms of moving least squares (MLS) surface approximation quality, curvature-sensitive sampling, sharp-feature keeping and processing speed. The source code of the proposed method is publicly available. (https://github.com/vvvwo/AIVS-project).

RevDate: 2021-08-17

Markus A, Biro M, Kecskemeti G, et al (2021)

Actuator behaviour modelling in IoT-Fog-Cloud simulation.

PeerJ. Computer science, 7:e651 pii:cs-651.

The inevitable evolution of information technology has led to the creation of IoT-Fog-Cloud systems, which combine the Internet of Things (IoT), Cloud Computing and Fog Computing. IoT systems are composed of possibly up to billions of smart devices, sensors and actuators connected through the Internet, and these components continuously generate large amounts of data. Cloud and fog services assist the data processing and storage needs of IoT devices. The behaviour of these devices can change dynamically (e.g. properties of data generation or device states). We refer to systems allowing behavioural changes in physical position (i.e. geolocation), as the Internet of Mobile Things (IoMT). The investigation and detailed analysis of such complex systems can be fostered by simulation solutions. The currently available, related simulation tools are lacking a generic actuator model including mobility management. In this paper, we present an extension of the DISSECT-CF-Fog simulator to support the analysis of arbitrary actuator events and mobility capabilities of IoT devices in IoT-Fog-Cloud systems. The main contributions of our work are: (i) a generic actuator model and its implementation in DISSECT-CF-Fog, and (ii) the evaluation of its use through logistics and healthcare scenarios. Our results show that we can successfully model IoMT systems and behavioural changes of actuators in IoT-Fog-Cloud systems in general, and analyse their management issues in terms of usage cost and execution time.

RevDate: 2021-08-17

M VK, Venkatachalam K, P P, et al (2021)

Secure biometric authentication with de-duplication on distributed cloud storage.

PeerJ. Computer science, 7:e569 pii:cs-569.

Cloud computing is one of the evolving fields of technology, which allows storage, access of data, programs, and their execution over the internet with offering a variety of information related services. With cloud information services, it is essential for information to be saved securely and to be distributed safely across numerous users. Cloud information storage has suffered from issues related to information integrity, data security, and information access by unauthenticated users. The distribution and storage of data among several users are highly scalable and cost-efficient but results in data redundancy and security issues. In this article, a biometric authentication scheme is proposed for the requested users to give access permission in a cloud-distributed environment and, at the same time, alleviate data redundancy. To achieve this, a cryptographic technique is used by service providers to generate the bio-key for authentication, which will be accessible only to authenticated users. A Gabor filter with distributed security and encryption using XOR operations is used to generate the proposed bio-key (biometric generated key) and avoid data deduplication in the cloud, ensuring avoidance of data redundancy and security. The proposed method is compared with existing algorithms, such as convergent encryption (CE), leakage resilient (LR), randomized convergent encryption (RCE), secure de-duplication scheme (SDS), to evaluate the de-duplication performance. Our comparative analysis shows that our proposed scheme results in smaller computation and communication costs than existing schemes.

RevDate: 2021-09-15

Bloom JD (2021)

Recovery of deleted deep sequencing data sheds more light on the early Wuhan SARS-CoV-2 epidemic.

Molecular biology and evolution [Epub ahead of print].

The origin and early spread of SARS-CoV-2 remains shrouded in mystery. Here I identify a data set containing SARS-CoV-2 sequences from early in the Wuhan epidemic that has been deleted from the NIH's Sequence Read Archive. I recover the deleted files from the Google Cloud, and reconstruct partial sequences of 13 early epidemic viruses. Phylogenetic analysis of these sequences in the context of carefully annotated existing data further supports the idea that the Huanan Seafood Market sequences are not fully representative of the viruses in Wuhan early in the epidemic. Instead, the progenitor of currently known SARS-CoV-2 sequences likely contained three mutations relative to the market viruses that made it more similar to SARS-CoV-2's bat coronavirus relatives.

RevDate: 2021-08-17

Honorato RV, Koukos PI, Jiménez-García B, et al (2021)

Structural Biology in the Clouds: The WeNMR-EOSC Ecosystem.

Frontiers in molecular biosciences, 8:729513.

Structural biology aims at characterizing the structural and dynamic properties of biological macromolecules at atomic details. Gaining insight into three dimensional structures of biomolecules and their interactions is critical for understanding the vast majority of cellular processes, with direct applications in health and food sciences. Since 2010, the WeNMR project (www.wenmr.eu) has implemented numerous web-based services to facilitate the use of advanced computational tools by researchers in the field, using the high throughput computing infrastructure provided by EGI. These services have been further developed in subsequent initiatives under H2020 projects and are now operating as Thematic Services in the European Open Science Cloud portal (www.eosc-portal.eu), sending >12 millions of jobs and using around 4,000 CPU-years per year. Here we review 10 years of successful e-infrastructure solutions serving a large worldwide community of over 23,000 users to date, providing them with user-friendly, web-based solutions that run complex workflows in structural biology. The current set of active WeNMR portals are described, together with the complex backend machinery that allows distributed computing resources to be harvested efficiently.

RevDate: 2021-08-16

Aguirre Montero A, JA López-Sánchez (2021)

Intersection of Data Science and Smart Destinations: A Systematic Review.

Frontiers in psychology, 12:712610.

This systematic review adopts a formal and structured approach to review the intersection of data science and smart tourism destinations in terms of components found in previous research. The study period corresponds to 1995-2021 focusing the analysis mainly on the last years (2015-2021), identifying and characterizing the current trends on this research topic. The review comprises documentary research based on bibliometric and conceptual analysis, using the VOSviewer and SciMAT software to analyze articles from the Web of Science database. There is growing interest in this research topic, with more than 300 articles published annually. Data science technologies on which current smart destinations research is based include big data, smart data, data analytics, social media, cloud computing, the internet of things (IoT), smart card data, geographic information system (GIS) technologies, open data, artificial intelligence, and machine learning. Critical research areas for data science techniques and technologies in smart destinations are public tourism marketing, mobility-accessibility, and sustainability. Data analysis techniques and technologies face unprecedented challenges and opportunities post-coronavirus disease-2019 (COVID-19) to build on the huge amount of data and a new tourism model that is more sustainable, smarter, and safer than those previously implemented.

RevDate: 2021-08-17

Nour B, Mastorakis S, A Mtibaa (2020)

Compute-Less Networking: Perspectives, Challenges, and Opportunities.

IEEE network, 34(6):259-265.

Delay-sensitive applications have been driving the move away from cloud computing, which cannot meet their low-latency requirements. Edge computing and programmable switches have been among the first steps toward pushing computation closer to end-users in order to reduce cost, latency, and overall resource utilization. This article presents the "compute-less" paradigm, which builds on top of the well known edge computing paradigm through a set of communication and computation optimization mechanisms (e.g.,, in-network computing, task clustering and aggregation, computation reuse). The main objective of the compute-less paradigm is to reduce the migration of computation and the usage of network and computing resources, while maintaining high Quality of Experience for end-users. We discuss the new perspectives, challenges, limitations, and opportunities of this compute-less paradigm.

RevDate: 2021-08-18

Szamosfalvi B, Heung M, L Yessayan (2021)

Technology Innovations in Continuous Kidney Replacement Therapy: The Clinician's Perspective.

Advances in chronic kidney disease, 28(1):3-12.

Continuous kidney replacement therapy (CKRT) has improved remarkably since its first implementation as continuous arteriovenous hemofiltration in the 1970s. However, when looking at the latest generation of CKRT machines, one could argue that clinical deployment of breakthrough innovations by device manufacturers has slowed in the last decade. Simultaneously, there has been a steady accumulation of clinical knowledge using CKRT as well as a multitude of therapeutic and diagnostic innovations in the dialysis and broader intensive care unit technology fields adaptable to CKRT. These include multiple different anticlotting measures; cloud-computing for optimized treatment prescribing and delivered therapy data collection and analysis; novel blood purification techniques aimed at improving the severe multiorgan dysfunction syndrome; and real-time sensing of blood and/or filter effluent composition. The authors present a view of how CKRT devices and programs could be reimagined incorporating these innovations to achieve specific measurable clinical outcomes with personalized care and improved simplicity, safety, and efficacy of CKRT therapy.

RevDate: 2021-08-12

Ronquillo JG, WT Lester (2021)

Practical Aspects of Implementing and Applying Health Care Cloud Computing Services and Informatics to Cancer Clinical Trial Data.

JCO clinical cancer informatics, 5:826-832.

PURPOSE: Cloud computing has led to dramatic growth in the volume, variety, and velocity of cancer data. However, cloud platforms and services present new challenges for cancer research, particularly in understanding the practical tradeoffs between cloud performance, cost, and complexity. The goal of this study was to describe the practical challenges when using a cloud-based service to improve the cancer clinical trial matching process.

METHODS: We collected information for all interventional cancer clinical trials from ClinicalTrials.gov and used the Google Cloud Healthcare Natural Language Application Programming Interface (API) to analyze clinical trial Title and Eligibility Criteria text. An informatics pipeline leveraging interoperability standards summarized the distribution of cancer clinical trials, genes, laboratory tests, and medications extracted from cloud-based entity analysis.
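
As a rough illustration of what a cloud entity-analysis call looks like, the sketch below uses the general-purpose Google Cloud Natural Language API as a stand-in; the study itself used the separate Healthcare Natural Language API, whose client and clinically oriented entity types differ, so this shows only the overall shape of such a pipeline.

    # Illustrative stand-in: entity extraction with the general Google Cloud Natural
    # Language API (not the Healthcare Natural Language API used in the cited study).
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()

    criteria_text = "Patients with EGFR-mutated non-small cell lung cancer receiving erlotinib."
    document = language_v1.Document(
        content=criteria_text, type_=language_v1.Document.Type.PLAIN_TEXT)

    response = client.analyze_entities(document=document)
    for entity in response.entities:
        print(entity.name, language_v1.Entity.Type(entity.type_).name)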

RESULTS: There were a total of 38,851 cancer-related clinical trials found in this study, with the distribution of cancer categories extracted from Title text significantly different than in ClinicalTrials.gov (P < .001). Cloud-based entity analysis of clinical trial criteria identified a total of 949 genes, 1,782 laboratory tests, 2,086 medications, and 4,902 National Cancer Institute Thesaurus terms, with estimated detection accuracies ranging from 12.8% to 89.9%. A total of 77,702 API calls processed an estimated 167,179 text records, which took a total of 1,979 processing-minutes (33.0 processing-hours), or approximately 1.5 seconds per API call.

CONCLUSION: Current general-purpose cloud health care tools-like the Google service in this study-should not be used for automated clinical trial matching unless they can perform effective extraction and classification of the clinical, genetic, and medication concepts central to precision oncology research. A strong understanding of the practical aspects of cloud computing will help researchers effectively navigate the vast data ecosystems in cancer research.

RevDate: 2021-08-30

Paul G, Abele ND, K Kluth (2021)

A Review and Qualitative Meta-Analysis of Digital Human Modeling and Cyber-Physical-Systems in Ergonomics 4.0.

IISE transactions on occupational ergonomics and human factors pii:10.1080/24725838.2021.1966130 [Epub ahead of print].

Occupational Applications: Founded in an empirical case study and theoretical work, this paper reviews the scientific literature to define the role of Digital Human Modeling (DHM), Digital Twin (DT), and Cyber-Physical Systems (CPS) to inform the emerging concept of Ergonomics 4.0. We find that DHM evolved into DT is a core element in Ergonomics 4.0. A solid understanding and agreement on the nature of Ergonomics 4.0 is essential for the inclusion of ergonomic values and considerations in the larger conceptual framework of Industry 4.0. In this context, we invite Ergonomists from various disciplines to broaden their understanding and application of DHM and DT.

RevDate: 2021-08-12

Koppad S, B A, Gkoutos GV, et al (2021)

Cloud Computing Enabled Big Multi-Omics Data Analytics.

Bioinformatics and biology insights, 15:11779322211035921.

High-throughput experiments enable researchers to explore complex multifactorial diseases through large-scale analysis of omics data. Challenges for such high-dimensional data sets include storage, analyses, and sharing. Recent innovations in computational technologies and approaches, especially in cloud computing, offer a promising, low-cost, and highly flexible solution in the bioinformatics domain. Cloud computing is rapidly proving increasingly useful in molecular modeling, omics data analytics (eg, RNA sequencing, metabolomics, or proteomics data sets), and for the integration, analysis, and interpretation of phenotypic data. We review the adoption of advanced cloud-based and big data technologies for processing and analyzing omics data and provide insights into state-of-the-art cloud bioinformatics applications.

RevDate: 2021-08-13

Chaudhuri S, Han H, Monaghan C, et al (2021)

Real-time prediction of intradialytic relative blood volume: a proof-of-concept for integrated cloud computing infrastructure.

BMC nephrology, 22(1):274.

BACKGROUND: Inadequate refilling from extravascular compartments during hemodialysis can lead to intradialytic symptoms, such as hypotension, nausea, vomiting, and cramping/myalgia. Relative blood volume (RBV) plays an important role in adapting the ultrafiltration rate, which in turn has a positive effect on intradialytic symptoms. It has been clinically challenging to identify changes in RBV in real time to proactively intervene and reduce potential negative consequences of volume depletion. Leveraging advanced technologies to process large volumes of dialysis and machine data in real time and developing prediction models using machine learning (ML) is critical in identifying these signals.

METHOD: We conducted a proof-of-concept analysis to retrospectively assess near real-time dialysis treatment data from in-center patients in six clinics using Optical Sensing Device (OSD), during December 2018 to August 2019. The goal of this analysis was to use real-time OSD data to predict if a patient's relative blood volume (RBV) decreases at a rate of at least -6.5% per hour within the next 15 min during a dialysis treatment, based on 10-second windows of data in the previous 15 min. A dashboard application was constructed to demonstrate how reporting structures may be developed to alert clinicians in real time of at-risk cases. Data was derived from three sources: (1) OSDs, (2) hemodialysis machines, and (3) patient electronic health records.

RESULTS: Treatment data from 616 in-center dialysis patients in the six clinics was curated into a big data store and fed into a Machine Learning (ML) model developed and deployed within the cloud. The threshold for classifying observations as positive or negative was set at 0.08. Precision for the model at this threshold was 0.33 and recall was 0.94. The area under the receiver operating curve (AUROC) for the ML model was 0.89 using test data.

CONCLUSIONS: The findings from our proof-of-concept analysis demonstrate the design of a cloud-based framework that can be used for making real-time predictions of events during dialysis treatments. Making real-time predictions has the potential to assist clinicians at the point of care during hemodialysis.
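
The abstract above reports precision and recall at a fixed probability threshold of 0.08 together with an AUROC. As a minimal illustrative sketch (not the authors' pipeline), the following Python snippet shows how such threshold-dependent metrics can be computed from hypothetical predicted probabilities with scikit-learn:

```python
# Minimal sketch: threshold-dependent precision/recall and AUROC
# for a binary classifier, using hypothetical data (not the study's).
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                          # hypothetical labels
y_prob = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)   # hypothetical scores

threshold = 0.08                                 # threshold quoted in the abstract
y_pred = (y_prob >= threshold).astype(int)

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("AUROC:    ", roc_auc_score(y_true, y_prob))
```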

RevDate: 2021-08-13

Ismail L, H Materwala (2021)

ESCOVE: Energy-SLA-Aware Edge-Cloud Computation Offloading in Vehicular Networks.

Sensors (Basel, Switzerland), 21(15): pii:s21155233.

The vehicular network is an emerging technology in the Intelligent Smart Transportation era. The network provides mechanisms for running different applications, such as accident prevention, publishing and consuming services, and traffic flow management. In such scenarios, edge and cloud computing come into the picture to offload computation from vehicles that have limited processing capabilities. Optimizing the energy consumption of the edge and cloud servers becomes crucial. However, existing research efforts focus on either vehicle or edge energy optimization and do not account for vehicular applications' quality of service. In this paper, we address this void by proposing a novel offloading algorithm, ESCOVE, which optimizes the energy of the edge-cloud computing platform. The proposed algorithm respects the service-level agreement (SLA) in terms of latency, processing, and total execution times. The experimental results show that ESCOVE is a promising approach in energy savings while preserving SLAs compared with the state-of-the-art approach.

RevDate: 2021-08-17
CmpDate: 2021-08-11

Gahm NA, Rueden CT, Evans EL, et al (2021)

New Extensibility and Scripting Tools in the ImageJ Ecosystem.

Current protocols, 1(8):e204.

ImageJ provides a framework for image processing across scientific domains while being fully open source. Over the years ImageJ has been substantially extended to support novel applications in scientific imaging as they emerge, particularly in the area of biological microscopy, with functionality made more accessible via the Fiji distribution of ImageJ. Within this software ecosystem, work has been done to extend the accessibility of ImageJ to utilize scripting, macros, and plugins in a variety of programming scenarios, e.g., from Groovy and Python, and in Jupyter notebooks and cloud computing environments. We provide five protocols that demonstrate the extensibility of ImageJ for various workflows in image processing. We focus first on Fluorescence Lifetime Imaging Microscopy (FLIM) data, since this requires significant processing to provide quantitative insights into the microenvironments of cells. Second, we show how ImageJ can now be utilized for common image processing techniques, specifically image deconvolution and inversion, while highlighting the new, built-in features of ImageJ, particularly its capacity to run completely headless and the Ops matching feature that selects the optimal algorithm for a given function and data input, thereby enabling processing speedup. Collectively, these protocols can be used as a basis for automating biological image processing workflows. © 2021 Wiley Periodicals LLC. Basic Protocol 1: Using PyImageJ for FLIM data processing Alternate Protocol: Groovy FLIMJ in Jupyter Notebooks Basic Protocol 2: Using ImageJ Ops for image deconvolution Support Protocol 1: Using ImageJ Ops matching feature for image inversion Support Protocol 2: Headless ImageJ deconvolution.
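
As a rough illustration of the Python scripting route mentioned above, here is a minimal sketch assuming the pyimagej package is installed and a local image file named 'sample.tif' exists; both the file name and the exact workflow are assumptions for illustration, not taken from the protocols themselves.

```python
# Minimal sketch of scripting ImageJ from Python via pyimagej.
# Assumes `pip install pyimagej` and a local file 'sample.tif'
# (both are assumptions for illustration, not part of the protocols above).
import imagej

ij = imagej.init()                      # start an ImageJ2 gateway (headless by default)
img = ij.io().open('sample.tif')        # load an image through ImageJ2 I/O
arr = ij.py.from_java(img)              # convert to a NumPy-backed array for Python use
print(arr.shape)                        # inspect image dimensions in Python
```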

RevDate: 2021-08-10

Su P, Chen Y, M Lu (2021)

Smart city information processing under internet of things and cloud computing.

The Journal of supercomputing [Epub ahead of print].

This study explores smart city information (SCI) processing technology based on the Internet of Things (IoT) and cloud computing, to promote the construction of smart cities toward effective sharing and interconnection. An SCI system is constructed to address the information islands that arise in the smart construction of various fields in smart cities. The smart environment monitoring, smart transportation, and smart epidemic prevention functions at the application layer of the SCI system are designed separately. A multi-objective optimization algorithm for cloud computing virtual machine resource allocation (the CC-VMRA method) is proposed, and the application of IoT and cloud computing technology in the SCI system is further analysed and simulated for performance verification. The results show that the multi-objective optimization algorithm in the CC-VMRA method can greatly reduce the number of physical servers in the SCI system (fewer than 20), with a variance not higher than 0.0024, enabling the server cluster to achieve better load balancing. In addition, the packet loss rate of the Zigbee protocol used by the IoT gateway in the SCI system is far below the 0.1% target, and the delay is less than 10 ms. Therefore, the SCI system constructed in this study shows low latency and high utilization, and can provide an experimental reference for the later construction of smart cities.

RevDate: 2021-08-09

Prakash A, Mahoney KE, BC Orsburn (2021)

Cloud Computing Based Immunopeptidomics Utilizing Community Curated Variant Libraries Simplifies and Improves Neo-Antigen Discovery in Metastatic Melanoma.

Cancers, 13(15):.

Unique peptide neo-antigens presented on the cell surface are attractive targets for researchers in nearly all areas of personalized medicine. Cells presenting peptides with mutated or other non-canonical sequences can be utilized for both targeted therapies and diagnostics. Today's state-of-the-art pipelines utilize complementary proteogenomic approaches where RNA or ribosomal sequencing data helps to create libraries from which tandem mass spectrometry data can be compared. In this study, we present an alternative approach whereby cloud computing is utilized to power neo-antigen searches against community curated databases containing more than 7 million human sequence variants. Using these expansive databases of high-quality sequences as a reference, we reanalyze the original data from two previously reported studies to identify neo-antigen targets in metastatic melanoma. Using our approach, we identify 79 percent of the non-canonical peptides reported by previous genomic analyses of these files. Furthermore, we report 18-fold more non-canonical peptides than previously reported. The novel neo-antigens we report herein can be corroborated by secondary analyses such as high predicted binding affinity, when analyzed by well-established tools such as NetMHC. Finally, we report 738 non-canonical peptides shared by at least five patient samples, and 3258 shared across the two studies. This illustrates the depth of data that is present, but typically missed by lower statistical power proteogenomic approaches. This large list of shared peptides across the two studies, their annotation, non-canonical origin, as well as MS/MS spectra from the two studies are made available on a web portal for community analysis.

RevDate: 2021-08-06

Narayanan KL, Krishnan RS, Son LH, et al (2021)

Fuzzy Guided Autonomous Nursing Robot through Wireless Beacon Network.

Multimedia tools and applications [Epub ahead of print].

Robotics is one of the most rapidly emerging technologies today, used in a variety of applications ranging from complex rocket technology to monitoring of crops in agriculture. Robots can be exceptionally useful in a smart hospital environment provided that they are equipped with improved vision capabilities for detection and avoidance of obstacles present in their path, thus allowing robots to perform their tasks without any disturbance. In the particular case of Autonomous Nursing Robots, the essential issues are effective robot path planning for the delivery of medicines to patients, measuring patient body parameters through sensors, and interacting with and informing the patient, by means of voice-based modules, about the doctor's visiting schedule, his/her body parameter details, etc. This paper presents an approach to a complete Autonomous Nursing Robot which supports all the aforementioned tasks. We present a new Autonomous Nursing Robot system capable of operating in a smart hospital environment area. The objective of the system is to identify the patient room, perform robot path planning for the delivery of medicines to a patient, and measure the patient's body parameters, through a wireless BLE (Bluetooth Low Energy) beacon receiver and BLE beacon transmitters at the respective patient rooms. Assuming that a wireless beacon is kept at the patient room, the robot follows the beacon's signal, identifies the respective room and delivers the needed medicine to the patient. A new fuzzy controller system which consists of three ultrasonic sensors and one camera is developed to detect the optimal robot path and to avoid collisions with stationary and moving obstacles. The fuzzy controller effectively detects obstacles in the robot's vicinity and makes proper decisions for avoiding them. The navigation of the robot is implemented on a BLE tag module by using the AOA (Angle of Arrival) method. The robot uses sensors to measure the patient body parameters and updates these data to the hospital patient database system in a private cloud mode. It also makes use of Google Assistant to interact with the patients. The robotic system was implemented on the Raspberry Pi using Matlab 2018b. The system performance was evaluated on a PC with an Intel Core i5 processor, while solar power was used to power the system. Several sensors, namely an HC-SR04 ultrasonic sensor, a Logitech HD 720p image sensor, a temperature sensor and a heart rate sensor, are used together with a camera to generate datasets for testing the proposed system. In particular, the system was tested on operations taking place in the context of a private hospital in Tirunelveli, Tamilnadu, India. A detailed comparison is performed, through performance metrics such as Correlation, Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE), against the related works of Deepu et al., Huh and Seo, Chinmayi et al., Alli et al., Xu, Ran et al., and Lee et al. The experimental system validation showed that the fuzzy controller achieves very high accuracy in obstacle detection and avoidance, with a very low computational time for taking directional decisions. Moreover, the experimental results demonstrated that the robotic system achieves superior accuracy in detecting/avoiding obstacles compared to other systems of similar purpose presented in the related works.
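
The fuzzy obstacle-avoidance controller is not specified in detail in the abstract above; the sketch below is a generic, hand-rolled example of fuzzy-style decision making from three ultrasonic distance readings, with membership functions and rule weights chosen purely for illustration rather than taken from the paper.

```python
# Illustrative fuzzy-style steering decision from three ultrasonic sensors.
# Membership functions and rules are hypothetical, not the paper's design.
def near(d, lo=0.0, hi=1.0):
    """Degree to which a distance (in metres) is 'near' (1 = very near, 0 = far)."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def decide(left, front, right):
    """Return a steering command based on fuzzified sensor readings."""
    n_left, n_front, n_right = near(left), near(front), near(right)
    rules = {
        "turn_right": min(n_left, 1 - n_right),   # obstacle on the left side
        "turn_left": min(n_right, 1 - n_left),    # obstacle on the right side
        "stop": n_front,                          # obstacle straight ahead
        "go_straight": 1 - max(n_left, n_front, n_right),
    }
    return max(rules, key=rules.get)              # pick the strongest-firing rule

print(decide(left=0.3, front=2.0, right=1.5))     # -> 'turn_right'
```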

RevDate: 2021-08-04

Antaki F, Coussa RG, Kahwati G, et al (2021)

Accuracy of automated machine learning in classifying retinal pathologies from ultra-widefield pseudocolour fundus images.

The British journal of ophthalmology pii:bjophthalmol-2021-319030 [Epub ahead of print].

AIMS: Automated machine learning (AutoML) is a novel tool in artificial intelligence (AI). This study assessed the discriminative performance of AutoML in differentiating retinal vein occlusion (RVO), retinitis pigmentosa (RP) and retinal detachment (RD) from normal fundi using ultra-widefield (UWF) pseudocolour fundus images.

METHODS: Two ophthalmologists without coding experience carried out AutoML model design using a publicly available image data set (2137 labelled images). The data set was reviewed for low-quality and mislabeled images and then uploaded to the Google Cloud AutoML Vision platform for training and testing. We designed multiple binary models to differentiate RVO, RP and RD from normal fundi and compared them to bespoke models obtained from the literature. We then devised a multiclass model to detect RVO, RP and RD. Saliency maps were generated to assess the interpretability of the model.

RESULTS: The AutoML models demonstrated high diagnostic properties in the binary classification tasks that were generally comparable to bespoke deep-learning models (area under the precision-recall curve (AUPRC) 0.921-1, sensitivity 84.91%-89.77%, specificity 78.72%-100%). The multiclass AutoML model had an AUPRC of 0.876, a sensitivity of 77.93% and a positive predictive value of 82.59%. The per-label sensitivity and specificity, respectively, were normal fundi (91.49%, 86.75%), RVO (83.02%, 92.50%), RP (72.00%, 100%) and RD (79.55%, 96.80%).

CONCLUSION: AutoML models created by ophthalmologists without coding experience can detect RVO, RP and RD in UWF images with very good diagnostic accuracy. The performance was comparable to bespoke deep-learning models derived by AI experts for RVO and RP but not for RD.
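
For readers wishing to reproduce per-label sensitivity and specificity of the kind reported above, the sketch below shows one conventional way to derive them from a multiclass confusion matrix with scikit-learn; the labels and predictions here are hypothetical, not the study's data.

```python
# Per-label sensitivity and specificity from a multiclass confusion matrix.
# Labels and predictions are hypothetical, for illustration only.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["normal", "RVO", "RP", "RD"]
y_true = ["normal", "RVO", "RP", "RD", "normal", "RVO", "RD", "RP"]
y_pred = ["normal", "RVO", "RP", "RD", "RVO",    "RVO", "RD", "normal"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
for i, label in enumerate(labels):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp          # missed cases of this label
    fp = cm[:, i].sum() - tp          # other labels predicted as this one
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    print(f"{label}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```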

RevDate: 2021-08-04

Tajalli SZ, Kavousi-Fard A, Mardaneh M, et al (2021)

Uncertainty-Aware Management of Smart Grids Using Cloud-Based LSTM-Prediction Interval.

IEEE transactions on cybernetics, PP: [Epub ahead of print].

This article introduces an uncertainty-aware cloud-fog-based framework for power management of smart grids using a multiagent-based system. The power management is a social welfare optimization problem. A multiagent-based algorithm is suggested to solve this problem, in which agents are defined as volunteering consumers and dispatchable generators. In the proposed method, every consumer can voluntarily put a price on its power demand at each interval of operation to benefit from the equal opportunity of contributing to the power management process provided for all generation and consumption units. In addition, the uncertainty analysis using a deep learning method is also applied in a distributive way with the local calculation of prediction intervals for sources with stochastic nature in the system, such as loads, small wind turbines (WTs), and rooftop photovoltaics (PVs). Using the predicted ranges of load demand and stochastic generation outputs, a range for power consumption/generation is also provided for each agent called "preparation range" to demonstrate the predicted boundary, where the accepted power consumption/generation of an agent might occur, considering the uncertain sources. Besides, fog computing is deployed as a critical infrastructure for fast calculation and providing local storage for reasonable pricing. Cloud services are also proposed for virtual applications as efficient databases and computation units. The performance of the proposed framework is examined on two smart grid test systems and compared with other well-known methods. The results prove the capability of the proposed method to obtain the optimal outcomes in a short time for any scale of grid.
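
The abstract does not detail how the prediction intervals are produced; one common generic approach is quantile regression with a pinball loss on LSTM outputs. The Keras sketch below illustrates that general idea under assumed input shapes and layer sizes, and is not the authors' implementation.

```python
# Generic LSTM quantile-regression sketch for prediction intervals
# (assumed shapes and layer sizes; this is not the paper's architecture).
import numpy as np
import tensorflow as tf

def pinball_loss(tau):
    """Quantile (pinball) loss for the tau-th quantile."""
    def loss(y_true, y_pred):
        e = y_true - y_pred
        return tf.reduce_mean(tf.maximum(tau * e, (tau - 1.0) * e))
    return loss

timesteps, features = 24, 3            # hypothetical: 24 past intervals, 3 signals
inputs = tf.keras.Input(shape=(timesteps, features))
h = tf.keras.layers.LSTM(32)(inputs)
lower = tf.keras.layers.Dense(1, name="q10")(h)    # lower bound (10th percentile)
upper = tf.keras.layers.Dense(1, name="q90")(h)    # upper bound (90th percentile)
model = tf.keras.Model(inputs, [lower, upper])
model.compile(optimizer="adam",
              loss={"q10": pinball_loss(0.1), "q90": pinball_loss(0.9)})

# Train on synthetic data just to show the call signature.
X = np.random.rand(128, timesteps, features).astype("float32")
y = np.random.rand(128, 1).astype("float32")
model.fit(X, {"q10": y, "q90": y}, epochs=1, verbose=0)
```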

RevDate: 2021-08-26

Marques G, Leswing K, Robertson T, et al (2021)

De Novo Design of Molecules with Low Hole Reorganization Energy Based on a Quarter-Million Molecule DFT Screen.

The journal of physical chemistry. A, 125(33):7331-7343.

Materials exhibiting higher mobilities than conventional organic semiconducting materials such as fullerenes and fused thiophenes are in high demand for applications in printed electronics. To discover new molecules in the heteroacene family that might show improved hole mobility, three de novo design methods were applied. Machine learning (ML) models were generated based on previously calculated hole reorganization energies of a quarter million examples of heteroacenes, where the energies were calculated by applying density functional theory (DFT) and a massive cloud computing environment. The three generative methods applied were (1) the continuous space method, where molecular structures are converted into continuous variables by applying the variational autoencoder/decoder technique; (2) the method based on reinforcement learning of SMILES strings (the REINVENT method); and (3) the junction tree variational autoencoder method that directly generates molecular graphs. Among the three methods, the second and third methods succeeded in obtaining chemical structures whose DFT-calculated hole reorganization energy was lower than the lowest energy in the training dataset. This suggests that an extrapolative materials design protocol can be developed by applying generative modeling to a quantitative structure-property relationship (QSPR) utility function.

RevDate: 2021-08-03
CmpDate: 2021-08-03

Du Z, H Miao (2021)

Research on Edge Service Composition Method Based on BAS Algorithm.

Computational intelligence and neuroscience, 2021:9931689.

Edge services move data processing, application execution, and the implementation of some functional services from central cloud servers to network edge servers. Composed edge services can effectively reduce task computation in the cloud, shorten the transmission distance of the data being processed, quickly decompose the tasks of a service request, and select the optimal edge service combination to serve users. BAS is an efficient intelligent optimization algorithm that achieves efficient optimization while requiring neither the specific form of the objective function nor gradient information. This paper designs an edge service composition model based on edge computing and proposes an edge service composition method using the BAS optimization algorithm. The proposed method has obvious advantages in service composition efficiency compared with composition methods based on the PSO or WPA heuristic algorithms. Compared with cloud service composition methods, the proposed method offers shorter service response time, lower cost, and higher quality of user experience.
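
For context on the optimizer mentioned above, here is a minimal sketch of a beetle-antennae-search-style update, assuming that is what BAS refers to here; the objective function and constants are placeholders rather than anything taken from the paper.

```python
# Minimal beetle-antennae-search-style optimizer (assuming BAS denotes this
# heuristic). The objective function and constants are illustrative only.
import numpy as np

def bas_minimize(f, x0, steps=200, d=0.5, step=1.0, decay=0.95):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        direction = np.random.randn(x.size)
        direction /= np.linalg.norm(direction)        # random antenna direction
        x_left, x_right = x + d * direction, x - d * direction
        # Move away from the antenna that senses the worse objective value.
        x = x - step * direction * np.sign(f(x_left) - f(x_right))
        step *= decay                                  # shrink the step over time
        d = max(d * decay, 0.01)                       # shrink the antenna length
    return x, f(x)

# Example: minimize a simple quadratic (a stand-in for a composition cost).
best_x, best_cost = bas_minimize(lambda x: np.sum((x - 3.0) ** 2), x0=[0.0, 0.0])
print(best_x, best_cost)
```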

RevDate: 2021-09-24
CmpDate: 2021-09-24

Wang Y, Murlidaran S, DA Pearlman (2021)

Quantum simulations of SARS-CoV-2 main protease Mpro enable high-quality scoring of diverse ligands.

Journal of computer-aided molecular design, 35(9):963-971.

The COVID-19 pandemic has led to unprecedented efforts to identify drugs that can reduce its associated morbidity/mortality rate. Computational chemistry approaches hold the potential for triaging potential candidates far more quickly than their experimental counterparts. These methods have been widely used to search for small molecules that can inhibit critical proteins involved in the SARS-CoV-2 replication cycle. An important target is the SARS-CoV-2 main protease Mpro, an enzyme that cleaves the viral polyproteins into individual proteins required for viral replication and transcription. Unfortunately, standard computational screening methods face difficulties in ranking diverse ligands to a receptor due to disparate ligand scaffolds and varying charge states. Here, we describe full density functional quantum mechanical (DFT) simulations of Mpro in complex with various ligands to obtain absolute ligand binding energies. Our calculations are enabled by a new cloud-native parallel DFT implementation running on computational resources from Amazon Web Services (AWS). The results we obtain are promising: the approach is quite capable of scoring a very diverse set of existing drug compounds for their affinities to Mpro and suggests that the DFT approach is potentially more broadly applicable to repurposing screens against this target. In addition, each DFT simulation required only ~ 1 h (wall clock time) per ligand. The fast turnaround time raises the practical possibility of a broad application of large-scale quantum mechanics in the drug discovery pipeline at stages where ligand diversity is essential.

RevDate: 2021-07-31

Li X, Ren S, F Gu (2021)

Medical Internet of Things to Realize Elderly Stroke Prevention and Nursing Management.

Journal of healthcare engineering, 2021:9989602.

Stroke is a major disease that seriously endangers the lives and health of middle-aged and elderly people in our country, and the implementation of secondary prevention urgently needs to be improved. The application of IoT technology in home health monitoring and telemedicine, as well as the popularization of cloud computing, contributes to the early identification of ischemic stroke and provides intelligent, humanized, and preventive medical and health services for patients at high risk of stroke. This article clarifies the networking structure and networking objects of the rehabilitation-system Internet of Things and the functions of each part, and establishes an overall system architecture based on smart medical care. The design and optimization of the mechanical part of the stroke rehabilitation robot are carried out, together with kinematic and dynamic analyses. According to the functions of different types of stroke rehabilitation robots, strategies are given for the use of lower limb rehabilitation robots; standardized codes are used to identify system objects, and RFID technology is used to automatically identify users and devices. Combined with the use of the Internet and the GSM mobile communication network, a network database of the system's networked objects is constructed and, on this basis, information management software based on a smart medical rehabilitation system that serves both doctors and patients is established to realize the system's Internet of Things architecture. In addition, this article describes how rehabilitation strategies are generated in the system, along with the design of the resource scheduling method, and the theoretical algorithm for rehabilitation strategy generation is presented and verified. This research summarizes the application background, advantages, and past practice of the Internet of Things in stroke medical care, develops and applies a medical collaborative cloud computing system for the systematic intervention of stroke, and realizes module functions such as information sharing, regional monitoring, and collaborative consultation within the base.

RevDate: 2021-07-31

Mrozek D, Stępień K, Grzesik P, et al (2021)

A Large-Scale and Serverless Computational Approach for Improving Quality of NGS Data Supporting Big Multi-Omics Data Analyses.

Frontiers in genetics, 12:699280.

Various types of analyses performed over multi-omics data are driven today by next-generation sequencing (NGS) techniques that produce large volumes of DNA/RNA sequences. Although many tools allow for parallel processing of NGS data in a Big Data distributed environment, they do not facilitate the improvement of the quality of NGS data at large scale in a simple declarative manner. Meanwhile, large sequencing projects and routine DNA/RNA sequencing associated with molecular profiling of diseases for personalized treatment require both good quality data and appropriate infrastructure for efficient storing and processing of the data. To address these problems, we adapt the concept of the Data Lake for storing and processing big NGS data. We also propose a dedicated library that allows cleaning the DNA/RNA sequences obtained with single-read and paired-end sequencing techniques. To accommodate the growth of NGS data, our solution is largely scalable on the Cloud and may rapidly and flexibly adjust to the amount of data that should be processed. Moreover, to simplify the utilization of the data cleaning methods and the implementation of other phases of data analysis workflows, our library extends the declarative U-SQL query language, providing a set of capabilities for data extraction, processing, and storing. The results of our experiments prove that the whole solution supports requirements for ample storage and highly parallel, scalable processing that accompanies NGS-based multi-omics data analyses.

RevDate: 2021-09-03
CmpDate: 2021-09-03

Ashammakhi N, Unluturk BD, Kaarela O, et al (2021)

The Cells and the Implant Interact With the Biological System Via the Internet and Cloud Computing as the New Mediator.

The Journal of craniofacial surgery, 32(5):1655-1657.

RevDate: 2021-08-09

Niemann M, Lachmann N, Geneugelijk K, et al (2021)

Computational Eurotransplant kidney allocation simulations demonstrate the feasibility and benefit of T-cell epitope matching.

PLoS computational biology, 17(7):e1009248.

The EuroTransplant Kidney Allocation System (ETKAS) aims at allocating organs to patients on the waiting list fairly whilst optimizing HLA match grades. ETKAS currently considers the number of HLA-A, -B, -DR mismatches. Evidently, epitope matching is biologically and clinically more relevant. Here, we executed ETKAS-based computer simulations to evaluate the impact of epitope matching on allocation and compared the strategies. A virtual population of 400,000 individuals was generated using the National Marrow Donor Program (NMDP) haplotype frequency dataset of 2011. Using this population, a waiting list of 10,400 patients was constructed and maintained during simulation, matching the 2015 Eurotransplant Annual Report characteristics. Unacceptable antigens were assigned randomly relative to their frequency using HLAMatchmaker. Over 22,600 kidneys were allocated in 10 years in triplicate using Markov Chain Monte Carlo simulations on 32-CPU-core cloud-computing instances. T-cell epitopes were calculated using the www.pirche.com portal. Waiting list effects were evaluated against ETKAS for five epitope matching scenarios. Baseline simulations of ETKAS slightly overestimated reported average HLA match grades. The best balanced scenario maintained prioritisation of HLA A-B-DR fully matched donors while replacing the HLA match grade by the PIRCHE-II score and exchanging the HLA mismatch probability (MMP) by the epitope MMP. This setup showed no considerable impact on kidney exchange rates and waiting time. PIRCHE-II scores improved, whereas the average HLA match grade diminished slightly, yet leading to an improved estimated graft survival. We conclude that epitope-based matching in deceased donor kidney allocation is feasible while maintaining equal balances on the waiting list.

RevDate: 2021-07-28

Aslam B, Javed AR, Chakraborty C, et al (2021)

Blockchain and ANFIS empowered IoMT application for privacy preserved contact tracing in COVID-19 pandemic.

Personal and ubiquitous computing [Epub ahead of print].

The life-threatening novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of COVID-19, has engulfed the world and caused health and economic challenges. To control the spread of COVID-19, a mechanism is required to enforce physical distancing between people. This paper proposes a Blockchain-based framework that preserves patients' anonymity while tracing their contacts with the help of Bluetooth-enabled smartphones. We use a smartphone application to interact with the proposed blockchain framework for contact tracing of the general public using Bluetooth and to store the obtained data over the cloud, which is accessible to health departments and government agencies so they can perform necessary and timely actions (e.g., quarantining infected people who are moving around). Thus, the proposed framework helps people carry on their regular business and day-to-day activities with a controlled mechanism that keeps them safe from infected and exposed people. The smartphone application can quickly check a user's COVID status by analyzing the reported symptoms and determines (based on the given symptoms) whether the person is infected or not. As a result, the proposed Adaptive Neuro-Fuzzy Inference System (ANFIS) predicts the COVID status, and K-Nearest Neighbor (KNN) enhances the accuracy rate to 95.9% compared to state-of-the-art results.

RevDate: 2021-07-27

Silva Junior D, Pacitti E, Paes A, et al (2021)

Provenance-and machine learning-based recommendation of parameter values in scientific workflows.

PeerJ. Computer science, 7:e606.

Scientific Workflows (SWfs) have revolutionized how scientists in various domains of science conduct their experiments. The management of SWfs is performed by complex tools that provide support for workflow composition, monitoring, execution, capturing, and storage of the data generated during execution. In some cases, they also provide components to ease the visualization and analysis of the generated data. During the workflow's composition phase, programs must be selected to perform the activities defined in the workflow specification. These programs often require additional parameters that serve to adjust the program's behavior according to the experiment's goals. Consequently, workflows commonly have many parameters to be manually configured, in many cases encompassing more than one hundred. Choosing wrong parameter values can crash workflow executions or produce undesired results. As the execution of data- and compute-intensive workflows is commonly performed in a high-performance computing environment (e.g., a cluster, a supercomputer, or a public cloud), an unsuccessful execution represents a waste of time and resources. In this article, we present FReeP (Feature Recommender from Preferences), a parameter value recommendation method designed to suggest values for workflow parameters, taking into account past user preferences. FReeP is based on machine learning techniques, particularly preference learning. FReeP is composed of three algorithms, where two of them aim at recommending the value for one parameter at a time, and the third makes recommendations for n parameters at once. The experimental results obtained with provenance data from two broadly used workflows showed FReeP's usefulness in the recommendation of values for one parameter. Furthermore, the results indicate the potential of FReeP to recommend values for n parameters in scientific workflows.

RevDate: 2021-07-27

Skarlat O, S Schulte (2021)

FogFrame: a framework for IoT application execution in the fog.

PeerJ. Computer science, 7:e588.

Recently, a multitude of conceptual architectures and theoretical foundations for fog computing have been proposed. Despite this, there is still a lack of concrete frameworks to set up real-world fog landscapes. In this work, we design and implement the fog computing framework FogFrame, a system able to manage and monitor edge and cloud resources in fog landscapes and to execute Internet of Things (IoT) applications. FogFrame provides communication and interaction as well as application management within a fog landscape, namely, decentralized service placement, deployment and execution. For service placement, we formalize a system model, define an objective function and constraints, and solve the problem implementing a greedy algorithm and a genetic algorithm. The framework is evaluated with regard to Quality of Service parameters of IoT applications and the utilization of fog resources using a real-world operational testbed. The evaluation shows that the service placement is adapted according to the demand and the available resources in the fog landscape. The greedy placement leads to the maximum utilization of edge devices, keeping as many services as possible at the edge, while the placement based on the genetic algorithm keeps devices from overloads by balancing between the cloud and edge. When comparing edge and cloud deployment, service deployment time at the edge takes 14% of the deployment time in the cloud. If fog resources are utilized at maximum capacity and a new application request arrives that needs certain sensor equipment, service deployment becomes impossible, and the application needs to be delegated to other fog resources. The genetic algorithm allows new applications to be better accommodated and keeps the utilization of edge devices at about 50% CPU. During the experiments, the framework successfully reacts to runtime events: (i) services are recovered when devices disappear from the fog landscape; (ii) cloud resources and highly utilized devices are released by migrating services to new devices; and (iii) in case of overloads, services are migrated in order to release resources.
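
As a rough illustration of the greedy placement idea described above (keep services at the edge while capacity allows, otherwise fall back to the cloud), here is a small hypothetical sketch; the device names, capacities, and service demands are invented and do not come from FogFrame itself.

```python
# Hypothetical greedy service placement: prefer edge devices, fall back to cloud.
# Device names, capacities, and service demands are invented for illustration.
edge_devices = {"edge-1": 4.0, "edge-2": 2.0}   # free CPU capacity per device
services = [("sensing", 1.5), ("filtering", 2.0), ("analytics", 3.0)]

placement = {}
for name, demand in services:
    # Pick the edge device with the most remaining capacity that still fits.
    candidates = [(cap, dev) for dev, cap in edge_devices.items() if cap >= demand]
    if candidates:
        _, dev = max(candidates)
        edge_devices[dev] -= demand
        placement[name] = dev
    else:
        placement[name] = "cloud"                # delegate to the cloud

print(placement)  # e.g. {'sensing': 'edge-1', 'filtering': 'edge-1', 'analytics': 'cloud'}
```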

RevDate: 2021-07-27
CmpDate: 2021-07-27

Sauber AM, Awad A, Shawish AF, et al (2021)

A Novel Hadoop Security Model for Addressing Malicious Collusive Workers.

Computational intelligence and neuroscience, 2021:5753948.

With the daily increase of data production and collection, Hadoop is a platform for processing big data on a distributed system. A master node globally manages running jobs, whereas worker nodes process partitions of the data locally. Hadoop uses MapReduce as an effective computing model. However, Hadoop experiences a high level of security vulnerability over hybrid and public clouds. Specifically, several workers can fake results without actually processing their portions of the data. Several redundancy-based approaches have been proposed to counteract this risk. A replication mechanism is used to duplicate all or some of the tasks over multiple workers (nodes). A drawback of such approaches is that they generate a high overhead over the cluster. Additionally, malicious workers can behave well for a long period of time and attack later. This paper presents a novel model to enhance the security of the cloud environment against untrusted workers. A new component called malicious workers' trap (MWT) is developed to run on the master node to detect malicious (noncollusive and collusive) workers as they convert and attack the system. An implementation to test the proposed model and to analyze the performance of the system shows that the proposed model can accurately detect malicious workers with minor processing overhead compared to vanilla MapReduce and the Verifiable MapReduce (V-MR) model [1]. In addition, MWT maintains a balance between the security and usability of the Hadoop cluster.

RevDate: 2021-07-27

Tariq MU, Poulin M, AA Abonamah (2021)

Achieving Operational Excellence Through Artificial Intelligence: Driving Forces and Barriers.

Frontiers in psychology, 12:686624.

This paper presents an in-depth literature review on the driving forces and barriers for achieving operational excellence through artificial intelligence (AI). Artificial intelligence is a technological concept spanning operational management, philosophy, humanities, statistics, mathematics, computer sciences, and social sciences. AI refers to machines mimicking human behavior in terms of cognitive functions. The evolution of new technological procedures and advancements in producing intelligence for machines creates a positive impact on decisions, operations, strategies, and management incorporated in the production process of goods and services. Businesses develop various methods and solutions to extract meaningful information, such as big data, automatic production capabilities, and systematization for business improvement. The progress in organizational competitiveness is apparent through improvements in firms' decisions, resulting in increased operational efficiencies. Innovation with AI has enabled small businesses to reduce operating expenses and increase revenues. The focused literature review reveals that the driving forces for achieving operational excellence through AI are improvements in the computing abilities of machines, development of data-based AI, advancements in deep learning, cloud computing, data management, and integration of AI in operations. The barriers are mainly cultural constraints, fear of the unknown, lack of employee skills, and strategic planning for adopting AI. The current paper presents an analysis of articles focused on AI adoption in production and operations. We selected articles published between 2015 and 2020. Our study contributes to the literature reviews on operational excellence, artificial intelligence, driving forces for AI, and AI barriers in achieving operational excellence.

RevDate: 2021-07-27

Sharma SK, SS Ahmed (2021)

IoT-based analysis for controlling & spreading prediction of COVID-19 in Saudi Arabia.

Soft computing [Epub ahead of print].

Presently, the 2019 novel coronavirus outbreak (COVID-19) is a major threat to public health. Mathematical epidemic models can be utilized to forecast the course of an epidemic and cultivate approaches for controlling it. This paper utilizes real data on the spread of COVID-19 in Saudi Arabia for mathematical modeling and complex analyses. It introduces the Susceptible, Exposed, Infectious, Recovered, Undetectable, and Deceased (SEIRUD) model and a machine learning algorithm to predict and control COVID-19 in Saudi Arabia. The COVID-19 crisis has spurred the use of many technologies, such as cloud computing, edge computing, IoT, and artificial intelligence, and the use of sensor devices has increased enormously. Similarly, several developments in solving the COVID-19 crisis have been used by IoT applications. The new technology relies on IoT variables and the roles of symptoms, using wearable sensors to forecast cases of COVID-19. The working model involves wearable devices, occupational therapy, condition control, testing of cases, suspicious cases, and IoT elements. Mathematical modeling is useful for understanding the fundamental principles of the transmission of COVID-19 and providing guidance for possible predictions. The suggested method predicts whether COVID-19 will expand or die out in the population over the long term. The mathematical study results and the related simulation are described here as a way of forecasting the progress and the possible end of the epidemic under three scenarios: 'No Action', 'Lockdowns', and 'New Medicine'. The lockdown scenario slows the spread and delays the epidemic peak by minimizing infection and flattening the distribution of infections across areas. This study presents an ideal protocol that can help the Saudi population break down the spread of COVID-19 in an accurate and timely way. The simulation findings show that the suggested model achieves an accuracy ratio of 89.3%, a prediction ratio of 88.7%, a precision ratio of 87.7%, a recall ratio of 86.4%, and an F1 score of 90.9% compared to other existing methods.
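
The SEIRUD formulation itself is not given in the abstract; the sketch below shows a standard SEIR compartment model integrated with SciPy as a simplified stand-in, with illustrative parameters that are not fitted to the Saudi Arabian data.

```python
# Simplified SEIR compartment model (a stand-in for the paper's SEIRUD model;
# parameters are illustrative, not fitted to the Saudi Arabian data).
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N                 # new exposures
    dE = beta * S * I / N - sigma * E      # incubation
    dI = sigma * E - gamma * I             # onset and recovery
    dR = gamma * I
    return dS, dE, dI, dR

N = 1_000_000
y0 = (N - 10, 10, 0, 0)                    # initial S, E, I, R
t = np.linspace(0, 180, 181)               # days
sol = odeint(seir, y0, t, args=(0.4, 1 / 5.2, 1 / 10, N))
print("peak infectious count:", int(sol[:, 2].max()))
```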

RevDate: 2021-07-29
CmpDate: 2021-07-27

Huč A, Šalej J, M Trebar (2021)

Analysis of Machine Learning Algorithms for Anomaly Detection on Edge Devices.

Sensors (Basel, Switzerland), 21(14):.

The Internet of Things (IoT) consists of small devices or a network of sensors, which permanently generate huge amounts of data. Usually, they have limited resources, either computing power or memory, which means that raw data are transferred to central systems or the cloud for analysis. Lately, the idea of moving intelligence to the IoT is becoming feasible, with machine learning (ML) moved to edge devices. The aim of this study is to provide an experimental analysis of processing a large imbalanced dataset (DS2OS), split into a training dataset (80%) and a test dataset (20%). The training dataset was reduced by randomly selecting a smaller number of samples to create new datasets Di (i = 1, 2, 5, 10, 15, 20, 40, 60, 80%). Afterwards, they were used with several machine learning algorithms to identify the size at which the performance metrics show saturation and classification results stop improving, with an F1 score equal to 0.95 or higher, which happened at 20% of the training dataset. Further on, two solutions for the reduction of the number of samples to provide a balanced dataset are given. In the first, datasets DRi consist of all anomalous samples in seven classes and a reduced majority class ('NL') with i = 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20 percent of randomly selected samples. In the second, datasets DCi are generated from the representative samples determined with clustering from the training dataset. All three dataset reduction methods showed comparable performance results. Further evaluation of training times and memory usage on a Raspberry Pi 4 shows the possibility of running ML algorithms with limited-size datasets on edge devices.
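
The dataset-size saturation experiment described above can be illustrated generically: train the same classifier on growing fractions of the training data and watch the F1 score plateau. The sketch below uses synthetic data and a random forest purely to show the procedure; it does not reproduce the DS2OS dataset or the paper's models.

```python
# Generic illustration of the dataset-size saturation experiment:
# train on growing fractions of the data and track the weighted F1 score.
# Synthetic data and a random forest stand in for DS2OS and the paper's models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

for frac in (0.01, 0.05, 0.2, 0.6, 1.0):
    n = max(int(len(X_train) * frac), 10)
    clf = RandomForestClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    score = f1_score(y_test, clf.predict(X_test), average="weighted")
    print(f"{frac:>4.0%} of training data -> F1 = {score:.3f}")
```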

RevDate: 2021-07-29
CmpDate: 2021-07-27

Yar H, Imran AS, Khan ZA, et al (2021)

Towards Smart Home Automation Using IoT-Enabled Edge-Computing Paradigm.

Sensors (Basel, Switzerland), 21(14):.

Smart home applications are ubiquitous and have gained popularity due to the overwhelming use of Internet of Things (IoT)-based technology. The revolution in technologies has made homes more convenient, efficient, and even more secure. Advancement in smart home technology is necessary due to the scarcity of intelligent home applications that cater to several aspects of the home simultaneously, i.e., automation, security, safety, and reducing energy consumption using less bandwidth, computation, and cost. Our research work provides a solution to these problems by deploying a smart home automation system with the applications mentioned above over a resource-constrained Raspberry Pi (RPI) device. The RPI is used as a central controlling unit, which provides a cost-effective platform for interconnecting a variety of devices and various sensors in a home via the Internet. We propose a cost-effective integrated smart home system based on the IoT and Edge-Computing paradigm. The proposed system provides remote and automatic control of home appliances, ensuring security and safety. Additionally, the proposed solution uses the edge-computing paradigm to store sensitive data in a local cloud to preserve the customer's privacy. Moreover, visual and scalar sensor-generated data are processed and held on the edge device (RPI) to reduce bandwidth, computation, and storage cost. In comparison with state-of-the-art solutions, the proposed system is 5% faster in detecting motion, and 5 ms and 4 ms faster in switching a relay on and off, respectively. It is also 6% more efficient than the existing solutions with respect to energy consumption.

RevDate: 2021-07-29
CmpDate: 2021-07-27

Kosasih DI, Lee BG, Lim H, et al (2021)

An Unsupervised Learning-Based Spatial Co-Location Detection System from Low-Power Consumption Sensor.

Sensors (Basel, Switzerland), 21(14):.

Spatial co-location detection is the task of inferring the co-location of two or more objects in geographic space. Mobile devices, especially smartphones, are commonly employed to accomplish this task with human objects. Previous work focused on analyzing mobile GPS data to accomplish this task. While this approach may guarantee high accuracy from the perspective of the data, it is considered inefficient since knowing the object's absolute geographic location is not required to accomplish this task. This work proposes the implementation of an unsupervised learning-based algorithm, namely a convolutional autoencoder, to infer the co-location of people from data from a low-power-consumption sensor: magnetometer readings. The idea is that if the trained model can also reconstruct the other data with a structural similarity (SSIM) index above 0.5, we can conclude that the observed individuals were co-located. The evaluation of our system indicates that the proposed approach can recognize the spatial co-location of people from magnetometer readings.
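
The decision rule in the abstract thresholds an SSIM index at 0.5; the sketch below shows how such a comparison might be computed with scikit-image on two 2-D arrays standing in for reconstructed magnetometer "images". The data are synthetic and the comparison is only an illustration of the thresholding step, not the authors' full pipeline.

```python
# Illustrative SSIM comparison of two 2-D arrays standing in for
# reconstructed magnetometer "images" (synthetic data, not the study's).
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
base = rng.random((64, 64))
similar = base + rng.normal(scale=0.05, size=base.shape)      # co-located case
different = rng.random((64, 64))                              # non-co-located case

for name, other in [("similar", similar), ("different", different)]:
    score = structural_similarity(base, other,
                                  data_range=other.max() - other.min())
    verdict = "co-located" if score > 0.5 else "not co-located"
    print(name, "SSIM =", round(score, 3), "->", verdict)
```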

RevDate: 2021-07-29
CmpDate: 2021-07-27

Alhasnawi BN, Jasim BH, Rahman ZSA, et al (2021)

A Novel Robust Smart Energy Management and Demand Reduction for Smart Homes Based on Internet of Energy.

Sensors (Basel, Switzerland), 21(14):.

In residential energy management (REM), Time of Use (ToU) scheduling of devices based on user-defined preferences is an essential task performed by the home energy management controller. This paper devises a robust REM technique capable of monitoring and controlling residential loads within a smart home. A new distributed multi-agent framework based on the cloud-layer computing architecture is developed for real-time microgrid economic dispatch and monitoring. A Time of Use (ToU) pricing model based on the grey wolf optimizer (GWO) and artificial bee colony (ABC) optimization algorithms is proposed to define the rates for shoulder-peak and on-peak hours, and the results illustrate the effectiveness of the proposed GWO- and ABC-based ToU pricing scheme. A Raspberry Pi 3 based model of a well-known test grid topology is modified to support real-time communication with the open-source IoE platform Node-RED, used for cloud computing. A two-level communication system connects the microgrid system, implemented on the Raspberry Pi 3, to the cloud server: the local communication level utilizes TCP/IP, and MQTT is used as the protocol for the global communication level. The results demonstrate and validate the effectiveness of the proposed technique, as well as its capability to track load changes through real-time interactions and its fast convergence rate.
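
The global communication level uses MQTT; as a minimal illustration of that protocol choice, here is a hypothetical publish call with the paho-mqtt client. The broker address, topic, and payload are assumptions for illustration and are not specified in the paper.

```python
# Minimal MQTT publish sketch (paho-mqtt 1.x style client construction).
# The broker address, topic, and payload are hypothetical.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)           # assumed cloud-side broker
payload = json.dumps({"node": "pv-1", "power_kw": 2.4, "price": 0.11})
client.publish("microgrid/measurements", payload, qos=1)
client.disconnect()
```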

RevDate: 2021-07-29
CmpDate: 2021-07-27

Stan OP, Enyedi S, Corches C, et al (2021)

Method to Increase Dependability in a Cloud-Fog-Edge Environment.

Sensors (Basel, Switzerland), 21(14):.

Robots can be very different, from humanoids to intelligent self-driving cars or just IoT systems that collect and process local sensors' information. This paper presents a way to increase dependability for information exchange and processing in systems with Cloud-Fog-Edge architectures. In an ideal interconnected world, the recognized and registered robots must be able to communicate with each other if they are close enough, or through the Fog access points, without overloading the Cloud. In essence, the presented work addresses the Edge area and how the devices can communicate in a safe and secure environment using cryptographic methods for structured systems. The presented work emphasizes the importance of security in a system's dependability and offers a communication mechanism for several robots without overburdening the Cloud. This solution is ideal for use where various monitoring and control aspects demand extra degrees of safety. The extra private keys employed by this procedure further increase algorithmic complexity, limiting the probability that the method may be broken by brute force or systemic attacks.

RevDate: 2021-07-29
CmpDate: 2021-07-27

Brescia E, Costantino D, Marzo F, et al (2021)

Automated Multistep Parameter Identification of SPMSMs in Large-Scale Applications Using Cloud Computing Resources.

Sensors (Basel, Switzerland), 21(14): pii:s21144699.

Parameter identification of permanent magnet synchronous machines (PMSMs) represents a well-established research area. However, parameter estimation of multiple running machines in large-scale applications has not yet been investigated. In this context, a flexible and automated approach is required to minimize complexity, costs, and human interventions without requiring machine information. This paper proposes a novel identification strategy for surface PMSMs (SPMSMs), highly suitable for large-scale systems. A novel multistep approach using measurement data at different operating conditions of the SPMSM is proposed to perform the parameter identification without requiring signal injection, extra sensors, machine information, and human interventions. Thus, the proposed method overcomes numerous issues of the existing parameter identification schemes. An IoT/cloud architecture is designed to implement the proposed multistep procedure and massively perform SPMSM parameter identifications. Finally, hardware-in-the-loop results show the effectiveness of the proposed approach.

RevDate: 2021-08-03

Hanussek M, Bartusch F, J Krüger (2021)

Performance and scaling behavior of bioinformatic applications in virtualization environments to create awareness for the efficient use of compute resources.

PLoS computational biology, 17(7):e1009244.

The large amount of biological data available today makes it necessary to use tools and applications based on sophisticated and efficient algorithms, developed in the area of bioinformatics. Further, access to high-performance computing resources is necessary to achieve results in reasonable time. To speed up applications and utilize available compute resources as efficiently as possible, software developers make use of parallelization mechanisms, like multithreading. Many of the available tools in bioinformatics offer multithreading capabilities, but more compute power is not always helpful. In this study we investigated the behavior of well-known applications in bioinformatics, regarding their performance in terms of scaling, different virtual environments and different datasets, with our benchmarking tool suite BOOTABLE. The tool suite includes the tools BBMap, Bowtie2, BWA, Velvet, IDBA, SPAdes, Clustal Omega, MAFFT, SINA and GROMACS. In addition, we added an application using the machine learning framework TensorFlow. Machine learning is not directly part of bioinformatics but is applied to many biological problems, especially in the context of medical images (X-ray photographs). The mentioned tools have been analyzed in two different virtual environments: a virtual machine environment based on the OpenStack cloud software and a Docker environment. The gained performance values were compared to a bare-metal setup and among each other. The study reveals that the virtual environments used produce an overhead in the range of seven to twenty-five percent compared to the bare-metal environment. The scaling measurements showed that some of the analyzed tools do not benefit from using larger amounts of computing resources, whereas others showed an almost linear scaling behavior. The findings of this study have been generalized as far as possible and should help users to find the best amount of resources for their analysis. Further, the results provide valuable information for resource providers to handle their resources as efficiently as possible and raise the user community's awareness of the efficient usage of computing resources.

RevDate: 2021-07-23
CmpDate: 2021-07-22

Zeng X, Zhang X, Yang S, et al (2021)

Gait-Based Implicit Authentication Using Edge Computing and Deep Learning for Mobile Devices.

Sensors (Basel, Switzerland), 21(13):.

Implicit authentication mechanisms are expected to prevent security and privacy threats for mobile devices using behavior modeling. However, researchers have recently demonstrated that the performance of behavioral biometrics is insufficiently accurate. Furthermore, the unique characteristics of mobile devices, such as limited storage and energy, constrain their capacity for data collection and processing. In this paper, we propose an implicit authentication architecture based on edge computing, coined Edge computing-based mobile Device Implicit Authentication (EDIA), which exploits edge-based gait biometric identification using a deep learning model to authenticate users. The gait data captured by a device's accelerometer and gyroscope sensors are utilized as the input to our optimized model, which consists of a CNN and an LSTM in tandem. In particular, we extract the features of the gait signal in a two-dimensional domain by converting the original signal into an image, which is then input into our network. In addition, to reduce the computational overhead of mobile devices, the model for implicit authentication is generated on the cloud server, and the user authentication process takes place on the edge devices. We evaluate the performance of EDIA under different scenarios; the results show that (i) we achieve a true positive rate of 97.77% and a 2% false positive rate, and (ii) EDIA still reaches high accuracy with a limited dataset size.
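
The CNN-plus-LSTM tandem described above can be sketched generically with Keras, assuming gait signals already converted to short sequences of small images; the shapes and layer sizes below are placeholders, not the EDIA architecture.

```python
# Generic CNN + LSTM tandem for sequences of gait "images"
# (placeholder shapes and layer sizes; not the EDIA architecture).
import tensorflow as tf
from tensorflow.keras import layers

seq_len, img_h, img_w = 10, 32, 32        # hypothetical: 10 images per sample
model = tf.keras.Sequential([
    layers.Input(shape=(seq_len, img_h, img_w, 1)),
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),                        # temporal modeling over the sequence
    layers.Dense(1, activation="sigmoid"),  # genuine user vs impostor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```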

RevDate: 2021-07-23
CmpDate: 2021-07-22

Alwateer M, Almars AM, Areed KN, et al (2021)

Ambient Healthcare Approach with Hybrid Whale Optimization Algorithm and Naïve Bayes Classifier.

Sensors (Basel, Switzerland), 21(13):.

There is a crucial need to process patients' data immediately to make sound decisions rapidly; these data are very large and have excessive numbers of features. Recently, many cloud-based IoT healthcare systems have been proposed in the literature. However, there are still several challenges associated with processing time and overall system efficiency concerning big healthcare data. This paper introduces a novel approach for processing healthcare data and predicting useful information with minimal computational cost. The main objective is to accept several types of data, improve accuracy, and reduce processing time. The proposed approach uses a hybrid algorithm consisting of two phases. The first phase aims to minimize the number of features for big data by using the Whale Optimization Algorithm as a feature selection technique. After that, the second phase performs real-time data classification using a Naïve Bayes classifier. The proposed approach is based on fog computing for better business agility, better security, deeper insights with privacy, and reduced operating cost. The experimental results demonstrate that the proposed approach can reduce the number of dataset features, improve accuracy, and reduce processing time. Accuracy is enhanced by an average rate of 3.6% (3.34 for Diabetes, 2.94 for Heart disease, 3.77 for Heart attack prediction, and 4.15 for Sonar). Besides, it enhances the processing speed by reducing the processing time by an average rate of 8.7% (28.96 for Diabetes, 1.07 for Heart disease, 3.31 for Heart attack prediction, and 1.4 for Sonar).
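
To make the two-phase idea concrete, the sketch below pairs a stand-in binary feature mask (in place of the Whale Optimization Algorithm search, which is not reproduced here) with a Gaussian Naïve Bayes classifier from scikit-learn; the data are synthetic, so this is only an outline of the pipeline shape, not the paper's implementation.

```python
# Two-phase illustration: a binary feature mask (a stand-in for the Whale
# Optimization Algorithm selection phase) followed by Naive Bayes classification.
# Data are synthetic; this is not the paper's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mask = np.zeros(X.shape[1], dtype=bool)
mask[:8] = True                      # hypothetical mask; WOA would search for this

clf = GaussianNB().fit(X_tr[:, mask], y_tr)
print("accuracy with selected features:",
      accuracy_score(y_te, clf.predict(X_te[:, mask])))
```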

RevDate: 2021-07-23

Agapiou A, V Lysandrou (2021)

Observing Thermal Conditions of Historic Buildings through Earth Observation Data and Big Data Engine.

Sensors (Basel, Switzerland), 21(13):.

This study combines satellite observation, cloud platforms, and geographical information systems (GIS) to investigate, at a macro-scale level of observation, the thermal conditions of two historic clusters in Cyprus, namely in the Limassol and Strovolos municipalities. The two case studies have different environmental and climatic conditions: the former site is coastal, the latter inland, and both contain historic buildings with similar building materials and techniques. For the needs of the study, more than 140 Landsat 7 ETM+ and Landsat 8 LDCM images were processed on the Google Earth Engine big data cloud platform to investigate the thermal conditions of the two historic clusters over the period 2013-2020. The multi-temporal thermal analysis included the calibration of all images to provide land surface temperature (LST) products at a 100 m spatial resolution. Moreover, to investigate anomalies related to possible land cover changes of the area, two indices were extracted from the satellite images: the normalised difference vegetation index (NDVI) and the normalised difference built-up index (NDBI). Anticipated results include the macro-scale identification of multi-temporal and diachronic changes and the establishment of change patterns, based on seasonality and location, occurring in large clusters of historic buildings.
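
The two spectral indices mentioned above have standard definitions; the sketch below computes them from hypothetical reflectance arrays (red, near-infrared, and shortwave-infrared bands) with NumPy, independently of the Google Earth Engine workflow used in the study.

```python
# Standard NDVI and NDBI computed from hypothetical reflectance arrays.
# Band arrays are synthetic; the study computes these on Google Earth Engine.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.3, size=(100, 100))    # red band reflectance
nir = rng.uniform(0.1, 0.6, size=(100, 100))     # near-infrared reflectance
swir = rng.uniform(0.05, 0.5, size=(100, 100))   # shortwave-infrared reflectance

ndvi = (nir - red) / (nir + red)     # vegetation index
ndbi = (swir - nir) / (swir + nir)   # built-up index

print("mean NDVI:", ndvi.mean().round(3), "mean NDBI:", ndbi.mean().round(3))
```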

RevDate: 2021-07-23
CmpDate: 2021-07-22

Moon J, Yang M, J Jeong (2021)

A Novel Approach to the Job Shop Scheduling Problem Based on the Deep Q-Network in a Cooperative Multi-Access Edge Computing Ecosystem.

Sensors (Basel, Switzerland), 21(13):.

In this study, based on multi-access edge computing (MEC), we explored the possibility of cooperation among manufacturing processes. We sought to solve the job shop scheduling problem by applying a DQN (deep Q-network), a reinforcement learning model, within this setting. To alleviate the overload of computing resources, an efficient DQN was used for the experiments using transfer learning data. Additionally, we conducted scheduling studies in the edge computing ecosystem of our manufacturing processes without the help of cloud centers. Cloud computing, the environment in which scheduling processing is usually performed, has issues that are sensitive for manufacturing processes in general, such as security issues and communication delay time, and research is being conducted in various fields on alternatives such as edge computing systems that can replace it. We proposed a method of independently performing scheduling at the edge of the network through cooperative scheduling between edge devices within a multi-access edge computing structure. The proposed framework was evaluated, analyzed, and compared with existing frameworks in terms of providing solutions and services.

RevDate: 2021-07-23
CmpDate: 2021-07-22

Chen L, Grimstead I, Bell D, et al (2021)

Estimating Vehicle and Pedestrian Activity from Town and City Traffic Cameras.

Sensors (Basel, Switzerland), 21(13):.

Traffic cameras are a widely available source of open data that offer tremendous value to public authorities by providing real-time statistics to understand and monitor the activity levels of local populations and their responses to policy interventions such as those seen during the COrona VIrus Disease 2019 (COVID-19) pandemic. This paper presents an end-to-end solution based on the Google Cloud Platform with scalable processing capability to deal with large volumes of traffic camera data across the UK in a cost-efficient manner. It describes a deep learning pipeline to detect pedestrians and vehicles and to generate mobility statistics from these detections. It includes novel methods for data cleaning and post-processing using a Structural Similarity Measure (SSIM)-based static mask that improves reliability and accuracy in classifying people and vehicles from traffic camera images. The solution resulted in statistics describing trends in the 'busyness' of various towns and cities in the UK. We validated the time series against Automatic Number Plate Recognition (ANPR) cameras across North East England, showing a close correlation between our statistical output and the ANPR source. Trends were also favorably compared against traffic flow statistics from the UK's Department for Transport. The results of this work have been adopted as an experimental faster indicator of the impact of COVID-19 on the UK economy and society by the Office for National Statistics (ONS).
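
A minimal sketch of one way an SSIM-based static mask can be built with scikit-image follows; the threshold, the sampled frame files, and the masking rule are assumptions for illustration, not the published pipeline.

import numpy as np
from skimage import color, io
from skimage.metrics import structural_similarity

def static_mask(frame_paths, threshold=0.9):
    # Pixels whose local structure barely changes across the sampled frames
    # are flagged as static background; detections landing on them can be
    # discarded as likely false positives (posters, road markings, parked objects).
    frames = [color.rgb2gray(io.imread(p)) for p in frame_paths]
    ssim_maps = []
    for prev, curr in zip(frames, frames[1:]):
        # full=True returns the per-pixel SSIM map alongside the mean score
        _, ssim_map = structural_similarity(prev, curr, full=True, data_range=1.0)
        ssim_maps.append(ssim_map)
    return np.mean(ssim_maps, axis=0) > threshold

# mask = static_mask(['cam_0800.jpg', 'cam_0900.jpg', 'cam_1000.jpg'])
# A detection whose bounding box lies mostly inside `mask` would be filtered out.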


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the Human Genome Project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.

Support this website:
Order from Amazon
We will earn a commission.

This is a must read book for anyone with an interest in invasion biology. The full title of the book lays out the author's premise — The New Wild: Why Invasive Species Will Be Nature's Salvation. Not only is species movement not bad for ecosystems, it is the way that ecosystems respond to perturbation — it is the way ecosystems heal. Even if you are one of those who is absolutely convinced that invasive species are actually "a blight, pollution, an epidemic, or a cancer on nature", you should read this book to clarify your own thinking. True scientific understanding never comes from just interacting with those with whom you already agree. R. Robbins

963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)