QUERY RUN: 30 Jun 2022 at 01:37
HITS: 2522

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 30 Jun 2022 at 01:37.

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: cloud[TIAB] and (computing[TIAB] or "amazon web services"[TIAB] or google[TIAB] or "microsoft azure"[TIAB]) NOT pmcbook NOT ispreviousversion
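For readers who want to reproduce or refresh this bibliography, the same query can be submitted programmatically through NCBI's E-utilities. The sketch below (Python, standard library only) sends the query string above, with the Boolean operators normalized to uppercase, to the esearch endpoint and prints the current hit count; the count will have drifted from the 2522 hits recorded in the snapshot above.

# Minimal sketch: re-running this bibliography's PubMed query via the
# NCBI E-utilities esearch endpoint.
import urllib.parse
import urllib.request
import json

QUERY = ('cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) '
         'NOT pmcbook NOT ispreviousversion')

url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": QUERY,
                                 "retmode": "json", "retmax": 0}))
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print("Total hits:", result["count"])  # was 2522 on 30 Jun 2022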

Citations: The Papers (from PubMed®)


RevDate: 2022-06-27

Wu Z, Xuan S, Xie J, et al (2022)

How to ensure the confidentiality of electronic medical records on the cloud: A technical perspective.

Computers in biology and medicine, 147:105726 pii:S0010-4825(22)00504-2 [Epub ahead of print].

From a technical perspective, this paper proposes an effective solution for the confidential management of electronic medical records (EMR) on the cloud. The basic idea is to deploy a trusted local server between the untrusted cloud and each trusted client of a medical information management system, responsible for running an EMR cloud hierarchical storage model and an EMR cloud segmentation query model. (1) The EMR cloud hierarchical storage model stores light EMR data items (such as patient basic information) on the local server, while encrypting heavy EMR data items (such as patient medical images) and storing them on the cloud, to ensure the confidentiality of electronic medical records on the cloud. (2) The EMR cloud segmentation query model performs EMR-related query operations through collaborative interaction between the local server and the cloud server, to ensure the accuracy and efficiency of each EMR query statement. Finally, both theoretical analysis and experimental evaluation demonstrate the effectiveness of the proposed solution for confidentiality management of electronic medical records on the cloud, i.e., it can ensure the confidentiality of electronic medical records on the untrusted cloud without compromising the availability of an existing medical information management system.
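As an illustration of the hierarchical-storage idea (not the authors' implementation), the following Python sketch keeps light EMR items in a trusted local store and encrypts heavy items with a symmetric key before handing them to an untrusted cloud store; store_emr, fetch_emr, and the two stores are hypothetical stand-ins.

# Illustrative sketch of the hierarchical-storage idea under stated
# assumptions: light items stay plaintext on a trusted local server,
# heavy items are encrypted before upload to the untrusted cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept only on the trusted local server
fernet = Fernet(key)

LOCAL_STORE = {}                     # light items: patient basic information
CLOUD_STORE = {}                     # stands in for the untrusted cloud API

def store_emr(patient_id: str, basic_info: dict, medical_image: bytes) -> None:
    LOCAL_STORE[patient_id] = basic_info                     # plaintext, local
    CLOUD_STORE[patient_id] = fernet.encrypt(medical_image)  # ciphertext, cloud

def fetch_emr(patient_id: str) -> tuple:
    # Segmented query: local lookup plus cloud fetch and local decryption.
    return LOCAL_STORE[patient_id], fernet.decrypt(CLOUD_STORE[patient_id])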

RevDate: 2022-06-27

Puneet, Kumar R, M Gupta (2022)

Optical coherence tomography image based eye disease detection using deep convolutional neural network.

Health information science and systems, 10(1):13 pii:182.

Over the past few decades, health care industries and medical practitioners faced many obstacles in diagnosing medical problems due to inadequate technology and limited availability of equipment. In the present era, computer science technologies such as IoT, cloud computing, artificial intelligence and its allied techniques play a crucial role in the identification of medical diseases, especially in the domain of ophthalmology. Despite this, ophthalmologists have to perform various disease diagnosis tasks manually, which is time-consuming, and the chance of error is also very high because some eye-disease abnormalities present the same symptoms. Furthermore, multiple autonomous systems exist to categorize the diseases, but their prediction rates do not reach state-of-the-art accuracy. The proposed approach, implementing the concepts of attention and transfer learning with a deep convolutional neural network, accomplished an accuracy of 97.79% and 95.6% on the training and testing data, respectively. This autonomous model efficiently classifies various ocular disorders, namely Choroidal Neovascularization, Diabetic Macular Edema, and Drusen, from Optical Coherence Tomography images. It may provide a realistic solution for the healthcare sector to reduce the burden on ophthalmologists in the screening of Diabetic Retinopathy.

RevDate: 2022-06-27

Zhang H, M Li (2022)

Integrated Design and Development of Intelligent Scenic Area Rural Tourism Information Service Based on Hybrid Cloud.

Computational and mathematical methods in medicine, 2022:5316304.

Although "Internet+" technologies (big data and cloud computing) have been implemented in many industries, each industry involved in rural tourism economic information services has its own database, and vast economic information resources remain unexploited. Z travel agency has achieved good economic and social benefits through deep value mining and innovative application of its existing enterprise data, via third-party information services for rural tourism enterprises and a mobile context-aware travel recommendation service. This clearly demonstrates that, to maximise the benefits of economic data, rural tourism businesses should focus not only on the application of new technologies and methodologies but also on the core of demand- and data-driven practice, and should thoroughly investigate the potential value of current data. This paper analyzes how rural tourism can be upgraded under a smart tourism platform, with the aim of improving the development of China's rural tourism industry with the help of an integrated smart tourism platform. It proposes a hybrid cloud-based integrated system of smart scenic-area rural tourism information services, which can meet the actual needs of rural tourism, delivers good shared-service effect and platform performance, and promotes the development of rural tourism and the rate of resource utilization.

RevDate: 2022-06-27

Hu Q (2022)

Optimization of Online Course Platform for Piano Preschool Education Based on Internet Cloud Computing System.

Computational intelligence and neuroscience, 2022:6525866.

This article introduces online piano teaching methods and describes the development and implementation of an online course platform for preschool piano education. The system consists of four parts: backend, WeChat, client, and web page. The backend is developed in PHP on the Laravel framework, the WeChat and web components both use JavaScript with React, and the client is developed in Objective-C; the system exposes a RESTful API that serves the client, WeChat, and web components. The client relies on the research group's existing voice sensors to recognize and evaluate students' performances. The role of the client is to show students their homework and demonstrate the activities performed by the teacher. The WeChat component manages student work, user information, and user social-interaction functions. The web page provides score management and data analysis. Based on knowledge of network course design, this article studies the design of a piano preschool-education platform and adds the relevant components of the Internet cloud computing system and voice sensors to the platform, which provides great convenience for students learning piano.

RevDate: 2022-06-27

Liu B, Zhang T, W Hu (2022)

Intelligent Traffic Flow Prediction and Analysis Based on Internet of Things and Big Data.

Computational intelligence and neuroscience, 2022:6420799.

Nowadays, the problem of road traffic safety cannot be ignored. Almost all major cities suffer from a poor traffic environment and low road efficiency, and large-scale, long-lasting traffic congestion occurs almost every day. Transportation has developed rapidly and ever more advanced means of transportation have emerged, but the automobile remains one of the main means of travel. Serious traffic jams occur in almost all cities worldwide; excessive daily traffic flow paralyzes urban transportation systems and greatly inconveniences people's travel. Various countries have taken corresponding measures, i.e., traffic diversion, number restriction, or expanding the scale of the road network, but these measures have had little effect. Traditional intelligent traffic flow forecasting suffers from low accuracy and delay. To address this problem, this paper applies a model combining the Internet of Things and big data to intelligent traffic flow forecasting, analyzes its social benefits, and examines its three-tier network architecture: perception layer, network layer, and application layer. The mode of combining cloud computing and edge computing is also studied. A multi-perspective linear discriminant analysis algorithm, which combines the similarities and differences among data into multiple atomic services, is used to perform intelligent traffic flow prediction based on the combination of Internet of Things and big data. Through the monitoring and extraction of relevant traffic flow data, followed by analysis, processing, storage, and visual display, the approach improves the accuracy and effectiveness of overall traffic flow prediction. A case experiment illustrates traffic flow prediction with the IoT and big data system. The method proposed in this paper can be applied to intelligent transportation services and can predict traffic flow and its stability in real time, so as to relieve traffic congestion, reduce manual intervention, and achieve intelligent traffic management.

RevDate: 2022-06-27

Sladky V, Nejedly P, Mivalt F, et al (2022)

Distributed brain co-processor for tracking spikes, seizures and behaviour during electrical brain stimulation.

Brain communications, 4(3):fcac115 pii:fcac115.

Early implantable epilepsy therapy devices provided open-loop electrical stimulation without brain sensing, computing, or an interface for synchronized behavioural inputs from patients. Recent epilepsy stimulation devices provide brain sensing but have not yet developed analytics for accurately tracking and quantifying behaviour and seizures. Here we describe a distributed brain co-processor providing an intuitive bi-directional interface between patient, implanted neural stimulation and sensing device, and local and distributed computing resources. Automated analysis of continuous streaming electrophysiology is synchronized with patient reports using a handheld device and integrated with distributed cloud computing resources for quantifying seizures, interictal epileptiform spikes and patient symptoms during therapeutic electrical brain stimulation. The classification algorithms for interictal epileptiform spikes and seizures were developed and parameterized using long-term ambulatory data from nine humans and eight canines with epilepsy, and then implemented prospectively in out-of-sample testing in two pet canines and four humans with drug-resistant epilepsy living in their natural environments. Accurate seizure diaries are needed as the primary clinical outcome measure of epilepsy therapy and to guide brain-stimulation optimization. The brain co-processor system described here enables tracking interictal epileptiform spikes, seizures and correlation with patient behavioural reports. In the future, correlation of spikes and seizures with behaviour will allow more detailed investigation of the clinical impact of spikes and seizures on patients.

RevDate: 2022-06-24

Shaukat Z, Farooq QUA, Tu S, et al (2022)

A state-of-the-art technique to perform cloud-based semantic segmentation using deep learning 3D U-Net architecture.

BMC bioinformatics, 23(1):251.

Glioma is the most aggressive and dangerous primary brain tumor, with a survival time of less than 14 months. Tumor segmentation is a necessary task in image processing of gliomas and is important for timely diagnosis and the start of treatment. Semantic segmentation of brain tumor datasets using the 3D U-Net architecture is a core deep learning approach. In this paper, we present a unique cloud-based 3D U-Net method to perform brain tumor segmentation using the BRATS dataset. The system was trained effectively using the Adam optimizer over multiple hyperparameter settings. We obtained an average Dice score of 95%, which makes our method the first cloud-based method to achieve this level of accuracy. The Dice score is calculated using the Sørensen-Dice similarity coefficient. We also performed an extensive literature review of the brain tumor segmentation methods implemented in the last five years to get a state-of-the-art picture of well-known methodologies with a higher Dice score. In comparison to the already implemented architectures, our method ranks on top in terms of accuracy in using a cloud-based 3D U-Net framework for glioma segmentation.
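The Sørensen-Dice similarity coefficient behind the reported 95% score is straightforward to compute; a generic NumPy version for binary segmentation masks (not the authors' code) looks like this:

# Sørensen-Dice coefficient for binary segmentation masks:
# Dice = 2 * |A ∩ B| / (|A| + |B|).
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Example: two overlapping 2D masks
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) = 0.666...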

RevDate: 2022-06-24

Li W, Y Guo (2022)

A Secure Private Cloud Storage Platform for English Education Resources Based on IoT Technology.

Computational and mathematical methods in medicine, 2022:8453470.

The contemporary ubiquitous "cloud" of network knowledge and information resources, as well as ecological pedagogy theory, has enlarged the perspective of teaching research, widened its innovation space, and created practical options for English classroom reform. Cloud education relies on the Internet of Things, cloud computing, and big data, and has a huge impact on the English learning process. The key to the integration of English education resources is the storage of huge amounts of English teaching data. Applying cloud storage technology and methods to the integration of English education resources can effectively conserve schools' educational resources, improve the utilization rate of English education resources, and thus enhance the teaching level of English subjects. In this work, we examine the existing state of English education resource building and teaching administration and offer a way to create a "private cloud" of English education materials. We not only examine the architecture and three-layer modules of cloud computing in depth, but also analyze "private cloud" technology and build the cloud structure of English teaching materials on this foundation. We hope that this paper can help and inspire efforts to solve the problems of uneven distribution, irregular management, and difficult sharing in the construction of English education resources.

RevDate: 2022-06-24

Ud Din MM, Alshammari N, Alanazi SA, et al (2022)

InteliRank: A Four-Pronged Agent for the Intelligent Ranking of Cloud Services Based on End-Users' Feedback.

Sensors (Basel, Switzerland), 22(12): pii:s22124627.

Cloud Computing (CC) provides a combination of technologies that allows the user to use the most resources in the least amount of time and at the least cost. CC semantics play a critical role in ranking heterogeneous data by using the properties of different cloud services and then selecting the optimal cloud service. Despite the efforts made to enable simple access to this CC innovation, when various organizations deliver comparable services at varying cost and execution levels, it is far more difficult to identify the ideal cloud service based on the user's requirements. In this research, we propose a Cloud-Services-Ranking Agent (CSRA) for analyzing cloud services using end-users' feedback, covering Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Software as a Service (SaaS), based on ontology mapping and selection of the optimal service. The proposed CSRA employs Machine-Learning (ML) techniques for ranking cloud services using parameters such as availability, security, reliability, and cost. Here, the Quality of Web Service (QWS) dataset is used, which has seven major cloud service categories, ranked from 0 to 6, to extract the required persuasive features through Sequential Minimal Optimization Regression (SMOreg). The classification outcomes through SMOreg demonstrate an overall accuracy of around 98.71% in identifying optimal cloud services through the identified parameters. The main advantage of SMOreg is that the amount of memory required for SMO is linear. The findings show that our improved model outperforms prevailing techniques such as Multilayer Perceptron (MLP) and Linear Regression (LR) in terms of precision.
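SMOreg is Weka's sequential-minimal-optimization trainer for support-vector regression; a rough Python analogue is scikit-learn's epsilon-SVR. The sketch below trains an RBF-kernel SVR on synthetic placeholder features (availability, security, reliability, cost) rather than the actual QWS dataset, purely to illustrate the ranking step.

# Hedged analogue of the SMOreg ranking step using scikit-learn's SVR.
# Features, weights, and scores are synthetic stand-ins for QWS data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((200, 4))                  # availability, security, reliability, cost
y = X @ np.array([2.0, 1.5, 1.0, -1.0])   # synthetic quality score in lieu of 0-6 ranks

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.1))
model.fit(X, y)
print(model.predict(X[:3]))               # predicted scores used to rank services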

RevDate: 2022-06-24

Liu X, Jin J, F Dong (2022)

Edge-Computing-Based Intelligent IoT: Architectures, Algorithms and Applications.

Sensors (Basel, Switzerland), 22(12): pii:s22124464.

With the rapid growth of the Internet of Things (IoT), 5G networks and beyond, the computing paradigm for intelligent IoT systems is shifting from conventional centralized-cloud computing to distributed edge computing [...].

RevDate: 2022-06-24

Dezfouli B, Y Liu (2022)

Editorial: Special Issue "Edge and Fog Computing for Internet of Things Systems".

Sensors (Basel, Switzerland), 22(12): pii:s22124387.

Employing edge and fog computing for building IoT systems is essential, especially because of the massive amount of data generated by sensing devices, the delay requirements of IoT applications, the high burden of data processing on cloud platforms, and the need to take immediate action against security threats.

RevDate: 2022-06-24

Lakhan A, Morten Groenli T, Majumdar A, et al (2022)

Potent Blockchain-Enabled Socket RPC Internet of Healthcare Things (IoHT) Framework for Medical Enterprises.

Sensors (Basel, Switzerland), 22(12): pii:s22124346.

Present-day intelligent healthcare applications offer digital healthcare services to users in a distributed manner. The Internet of Healthcare Things (IoHT) is the application of the Internet of Things (IoT) in different healthcare settings, with devices that are attached to external fog-cloud networks. Connecting to cloud computing through different mobile applications, IoHT applications include remote healthcare monitoring systems, high blood pressure monitoring, online medical counseling, and others. These applications are designed on a client-server architecture using various standards such as the Common Object Request Broker Architecture (CORBA), service-oriented architecture (SOA), remote method invocation (RMI), and others. However, these applications do not directly support the many healthcare nodes and blockchain technology in the current standard. Thus, this study devises a potent blockchain-enabled socket RPC IoHT framework for medical enterprises (e.g., healthcare applications). The goal is to minimize service costs, blockchain security costs, and data storage costs in distributed mobile cloud networks. Simulation results show that the proposed blockchain-enabled socket RPC reduced the service cost by 40%, the blockchain cost by 49%, and the storage cost by 23% for healthcare applications.

RevDate: 2022-06-24

Liu H, Zhang R, Liu Y, et al (2022)

Unveiling Evolutionary Path of Nanogenerator Technology: A Novel Method Based on Sentence-BERT.

Nanomaterials (Basel, Switzerland), 12(12): pii:nano12122018.

In recent years, nanogenerator technology has developed rapidly alongside the rise of cloud computing, artificial intelligence, and other fields. Quickly identifying the evolutionary path of nanogenerator technology from large amounts of data has therefore attracted much attention, and it is of great significance for grasping technical trends and analyzing technical areas of interest. However, previous studies have some limitations. On the one hand, research on technological evolution has generally relied on bibliometrics, patent analysis, and citations between patents and papers, ignoring the rich semantic information contained therein; on the other hand, the evolution analysis has typically been conducted from a single perspective, making accurate results difficult to obtain. Therefore, this paper proposes a new framework based on Sentence-BERT and phrase mining, using multi-source data such as papers and patents, to unveil the evolutionary path of nanogenerator technology. First, using text vectorization, clustering algorithms, and phrase mining, current technical themes of significant interest to researchers are obtained. Next, the multi-source fused themes are correlated through semantic similarity calculation, and the multi-dimensional technology evolutionary path is demonstrated using a "theme river map". Finally, the paper presents an evolution analysis from the perspectives of frontier research and technology research, so as to discover the development focus of nanogenerators and predict the future application prospects of nanogenerator technology.
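The semantic-similarity step that links themes across papers and patents can be approximated with the sentence-transformers library; the model name below is a common default, not necessarily the one the authors used, and the two theme strings are invented for illustration.

# Sketch of Sentence-BERT theme matching via cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

paper_theme = "triboelectric nanogenerator for self-powered sensing"
patent_theme = "energy harvesting device based on triboelectric effect"

emb = model.encode([paper_theme, patent_theme], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()
print(f"theme similarity: {similarity:.3f}")  # high values link themes across sources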

RevDate: 2022-06-24

Ashraf E, Areed NFF, Salem H, et al (2022)

FIDChain: Federated Intrusion Detection System for Blockchain-Enabled IoT Healthcare Applications.

Healthcare (Basel, Switzerland), 10(6): pii:healthcare10061110.

Recently, there has been considerable growth in internet of things (IoT)-based healthcare applications; however, they suffer from a lack of intrusion detection systems (IDS). Leveraging recent technologies such as machine learning (ML), edge computing, and blockchain can provide suitable and strong security solutions for preserving the privacy of medical data. In this paper, FIDChain IDS is proposed, using lightweight artificial neural networks (ANN) trained in a federated learning (FL) manner to ensure healthcare data privacy preservation, with the advances of blockchain technology providing a distributed ledger for aggregating the local weights and then broadcasting the updated global weights after averaging. This prevents poisoning attacks and provides full transparency and immutability over the distributed system with negligible overhead. Applying the detection model at the edge protects the cloud if an attack happens, as it blocks the data at its gateway, with shorter detection time and lower computing and processing demands because FL deals with smaller sets of data. The ANN and eXtreme Gradient Boosting (XGBoost) models were evaluated using the BoT-IoT dataset. The results show that ANN models have higher accuracy and better performance with the heterogeneity of data in IoT devices, such as intensive care unit (ICU) settings in healthcare systems. Testing FIDChain with different datasets (CSE-CIC-IDS2018, Bot Net IoT, and KDD Cup 99) reveals that the BoT-IoT dataset yields the most stable and accurate results for testing IoT applications, such as those used in healthcare systems.
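The aggregation step described here, local weights averaged and the global weights broadcast back, is the core of federated averaging. A minimal NumPy sketch of that step follows; the blockchain ledger, transport, and training loops are omitted.

# Minimal federated-averaging sketch: average each weight tensor across
# all participating edge nodes to form the new global model.
import numpy as np

def federated_average(local_weights):
    # local_weights: one list of weight tensors per edge node.
    return [np.mean(layer_stack, axis=0)
            for layer_stack in zip(*local_weights)]

# Three edge nodes, each with two weight tensors of the same shapes
node_weights = [[np.random.rand(4, 8), np.random.rand(8)] for _ in range(3)]
global_weights = federated_average(node_weights)
print([w.shape for w in global_weights])  # [(4, 8), (8,)]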

RevDate: 2022-06-23

Aldahwan NS, MS Ramzan (2022)

The Descriptive Data Analysis for the Adoption of Community Cloud in Saudi HEI-Based Factor Adoption.

BioMed research international, 2022:7765204.

Due to its increased reliability, adaptability, scalability, availability, and processing capacity, cloud computing is rapidly becoming a popular trend around the world. One of the major issues with cloud computing is making an informed decision about the adoption of community cloud (CC) computing (ACCC). To date, various technology acceptance theories and models exist to validate perspectives on ACCC at both the organizational and individual levels. However, no experimental studies have provided a comprehensive assessment of the factors of ACCC, specifically in the context of Saudi Higher Education Institutions (HEI). Thus, this research was aimed at exploring the factors of ACCC and their relationship to the experiences of employees. The analysis of the employee context was driven by the success factors of technological, organizational, environmental, human, security, and advantage contexts on community cloud computing adoption in HEI. Data were collected through a questionnaire-based survey with 106 responses. We present findings based on descriptive analysis to identify the significant components that contributed to the effective implementation of ACCC. Security concerns are a significant influencing element in the adoption of community cloud technology.

RevDate: 2022-06-22

Cotur Y, Olenik S, Asfour T, et al (2022)

Bioinspired Stretchable Transducer for Wearable Continuous Monitoring of Respiratory Patterns in Humans and Animals.

Advanced materials (Deerfield Beach, Fla.) [Epub ahead of print].

We report a bio-inspired continuous wearable respiration sensor modeled after the lateral line system of fish, which is used for detecting mechanical disturbances in the water. Despite the clinical importance of monitoring respiratory activity in humans and animals, continuous measurements of breathing patterns and rates are rarely performed in or outside of clinics. This is largely because conventional sensors are too inconvenient or expensive for wearable sensing for most individuals and animals. The bio-inspired air-silicone composite transducer is placed on the chest and measures respiratory activity by continuously measuring the force applied to an air channel embedded inside a silicone-based elastomeric material. The force applied on the surface of the transducer during breathing changes the air pressure inside the channel, which is measured using a commercial pressure sensor and mixed-signal wireless electronics. We extensively characterized the transducer produced in this work and tested it with humans, dogs, and laboratory rats. The bio-inspired air-silicone composite transducer may enable the early detection of a range of disorders that result in altered patterns of respiration. The technology reported can also be combined with artificial intelligence and cloud computing to algorithmically detect illness in humans and animals remotely, reducing unnecessary visits to clinics.

RevDate: 2022-06-22

Pillen D, M Eckard (2022)

The impact of the shift to cloud computing on digital recordkeeping practices at the University of Michigan Bentley historical library.

Archival science pii:9395 [Epub ahead of print].

Cloud-based productivity, collaboration, and storage tools offer increased opportunities for collaboration and potential cost savings over locally hosted solutions and have seen widespread adoption throughout industry, government, and academia over the last decade. While these tools benefit organizations, IT departments, and day-to-day users, they present unique challenges for records managers and archivists. As a review of the relevant literature demonstrates, issues surrounding cloud computing are not limited to the technology (although the implementation and technological issues are numerous) but also include organization management, human behavior, regulation, and records management, making the process of archiving digital information all the more difficult. This paper explores some of the consequences of this shift and its effect on digital recordkeeping at the Bentley Historical Library, whose mission is to "collect the materials for the University of Michigan." After providing context for this problem by discussing relevant literature, two practicing archivists explore the impact of the move toward cloud computing, as well as the various productivity and collaboration tools in use at U-M, throughout the stages of a standard lifecycle model for managing records.

RevDate: 2022-06-22

Mahanty C, Kumar R, SGK Patro (2022)

Internet of Medical Things-Based COVID-19 Detection in CT Images Fused with Fuzzy Ensemble and Transfer Learning Models.

New generation computing pii:176 [Epub ahead of print].

Accurate COVID-19 detection is one of the most difficult research areas in today's healthcare industry for combating the coronavirus pandemic. Because of its low infection miss rate and high sensitivity, chest computed tomography (CT) imaging has been recommended as a viable technique for COVID-19 diagnosis in a number of recent clinical investigations. This article presents an Internet of Medical Things (IoMT)-based platform for improving and speeding up COVID-19 identification. Clinical devices are connected to network resources in the suggested IoMT platform using cloud computing. The method enables patients and healthcare experts to work together in real time to diagnose and treat COVID-19, potentially saving time and effort for both patients and physicians. In this paper, we introduce a technique for classifying chest CT scan images into COVID, pneumonia, and normal classes that uses a Sugeno fuzzy integral ensemble over three transfer learning models, namely SqueezeNet, DenseNet-201, and MobileNetV2. The suggested fuzzy ensemble techniques outperform each individual transfer learning methodology as well as trainable ensemble strategies in terms of accuracy. The suggested MobileNetV2 fused with the Sugeno fuzzy integral ensemble model achieves a 99.15% accuracy rate. In the present research, this framework was utilized to identify COVID-19, but it may also be implemented and used for medical imaging analyses of other disorders.
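For intuition, a Sugeno fuzzy integral fuses the per-class confidences of the three networks by trading each model's score against the fuzzy measure of the set of models agreeing so far. The sketch below uses illustrative fuzzy-measure values, not the ones derived in the paper.

# Hedged sketch of a Sugeno fuzzy integral fusing per-class confidences
# from the three CNNs. The fuzzy-measure values in `g` are illustrative
# inventions, not the measures learned in the paper.
import numpy as np

def sugeno_integral(scores, g):
    # scores[i] = confidence h(x_i) of model i for one class;
    # g maps each subset of models to its fuzzy-measure value.
    order = np.argsort(scores)[::-1]      # models sorted by descending confidence
    value, subset = 0.0, frozenset()
    for i in order:
        subset = subset | {int(i)}        # A_k = top-k most confident models
        value = max(value, min(scores[i], g[subset]))
    return value

# Models: 0 = SqueezeNet, 1 = DenseNet-201, 2 = MobileNetV2
g = {frozenset({0}): 0.40, frozenset({1}): 0.35, frozenset({2}): 0.45,
     frozenset({0, 1}): 0.60, frozenset({0, 2}): 0.75, frozenset({1, 2}): 0.70,
     frozenset({0, 1, 2}): 1.00}

covid_scores = np.array([0.8, 0.6, 0.9])  # three models' confidences for "COVID"
print(sugeno_integral(covid_scores, g))   # fused confidence: 0.75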

RevDate: 2022-06-22

Gupta A, A Singh (2022)

An Intelligent Healthcare Cyber Physical Framework for Encephalitis Diagnosis Based on Information Fusion and Soft-Computing Techniques.

New generation computing pii:175 [Epub ahead of print].

Viral encephalitis is a contagious disease that is life-threatening and considered one of the major health concerns worldwide. It causes inflammation of the brain and, if left untreated, can have persistent effects on the central nervous system. Notably, this paper proposes an intelligent cyber-physical healthcare framework based on the IoT-fog-cloud collaborative network, employing soft-computing technology and information fusion. The proposed framework uses IoT-based sensors, electronic medical records, and user devices for data acquisition. The fog layer, composed of numerous nodes, processes the most specific encephalitis symptom-related data to classify possible encephalitis cases in real time and to issue an alarm when a significant health emergency occurs. Furthermore, the cloud layer involves a multi-step data processing scheme for in-depth data analysis. First, data obtained across multiple data generation sources are fused to obtain a more consistent, accurate, and reliable feature set. Data preprocessing and feature selection techniques are applied to the fused data for dimensionality reduction over the cloud computing platform. An adaptive neuro-fuzzy inference system is applied in the cloud to determine the risk of the disease and classify the results into one of four categories: no risk, probable risk, low risk, and acute risk. Moreover, alerts are generated and sent to the stakeholders based on the risk factor. Finally, the computed results are stored in the cloud database for future use. For validation purposes, various experiments are performed using real-time datasets. The analysis results on the fog and cloud layers show higher performance than existing models. Future research will focus on resource allocation in the cloud layer while considering various security aspects to improve the utility of the proposed work.

RevDate: 2022-06-21

Yue YF, Chen GP, Wang L, et al (2022)

[Dynamic monitoring and evaluation of ecological environment quality in Zhouqu County, Gansu, China based on Google Earth Engine cloud platform].

Ying yong sheng tai xue bao = The journal of applied ecology, 33(6):1608-1614.

Zhouqu County is located in the transition region from the Qinghai-Tibet Plateau to the Qinba Mountains and is an important part of the ecological barrier in the upper reaches of the Yangtze River. In this study, we used the Google Earth Engine cloud processing platform to perform inter-image optimal reconstruction of Landsat surface reflectance images from 1998-2019. We calculated four regional indicators: wetness, greenness, dryness, and heat. The component indicators were coupled by principal component analysis to construct a remote sensing ecological index (RSEI) and to analyze the spatial and temporal variations of ecological environment quality in Zhouqu County. The results showed that the four component indicators contributed more than 70% of the eigenvalues of the coupled RSEI, with an even distribution of the loadings, indicating that the RSEI integrated most of the features of the component indicators. From 1998 to 2019, the RSEI of Zhouqu County ranged from 0.55 to 0.63, showing an increasing trend with a growth rate of 0.04 per decade, and the area of better-grade quality increased by 425.56 km2. The area below 2200 m altitude was dominated by medium and lower ecological environment quality grades, while its area of better-grade quality increased by 16.5%. In the region from 2200 to 3300 m, ecological environment quality was dominated by good grades, increasing to 71.3% in 2019, with the area of medium and lower grades decreasing year by year. The area above 3300 m altitude was dominated by the medium ecological quality grade, and the medium and lower grades there showed a "U"-shaped trend during the study period. The overall trend of ecological environment quality in Zhouqu County was improvement, though with fluctuations. It is necessary to continuously strengthen the protection and management of the ecological environment in order to guarantee the continuous improvement of ecological environment quality.
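The RSEI construction reduces to running PCA over the four standardized indicators and rescaling the first principal component. A minimal sketch with synthetic pixel values follows; note that in practice the dryness and heat indicators load with negative sign, a detail glossed over here.

# Sketch of the RSEI coupling step: PCA over four standardized indicators,
# first principal component rescaled to [0, 1]. Pixel values are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import minmax_scale

rng = np.random.default_rng(1)
pixels = rng.random((10_000, 4))     # columns: wetness, greenness, dryness, heat

pc1 = PCA(n_components=1).fit_transform(minmax_scale(pixels)).ravel()
rsei = minmax_scale(pc1)             # higher RSEI = better ecological quality
print(rsei.mean())                   # county-level means ranged 0.55-0.63 in the study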

RevDate: 2022-06-21

Pradhan C, Padhee SK, Bharti R, et al (2022)

A process-based recovery indicator for anthropogenically disturbed river system.

Scientific reports, 12(1):10390.

The present paper utilizes entropy theory and the Google Earth Engine cloud computing technique to investigate system state and river recovery potential in two large sub-basins of the Mahanadi River, India. The cross-sectional intensity entropy (CIE) is computed for the post-monsoon season (October-March) along the selected reaches. Further, a normalized river recovery indicator (NRRI) is formulated to assess the temporal changes in river health. Finally, NRRI is related to a process-based variable, LFE (low flow exceedance), to comprehend the dominating system dynamics and evolutionary adjustments. The results highlight the existence of both threshold-modulated and filter-dominated systems based on CIE and NRRI variabilities. In addition, the gradual decline in CIE and subsequent stabilization of vegetated landforms can develop an 'event-driven' state, where floods exceeding the low-flow channel have a direct impact on the river recovery trajectory. Finally, this study emphasizes the presence of instream vegetation as an additional degree of freedom, which further controls the hierarchy of energy dissipation and the morphological continuum in the macrochannel settings.

RevDate: 2022-06-20

Bamasag O, Alsaeedi A, Munshi A, et al (2022)

Real-time DDoS flood attack monitoring and detection (RT-AMD) model for cloud computing.

PeerJ. Computer science, 7:e814 pii:cs-814.

In recent years, the advent of cloud computing has transformed the field of computing and information technology. It has enabled customers to rent virtual resources and take advantage of various on-demand services at the lowest cost. Despite its advantages, cloud computing faces several threats; an example is the distributed denial of service (DDoS) attack, which is considered among the most serious. This article presents real-time monitoring and detection of DDoS attacks on the cloud using a machine learning approach. Naïve Bayes, k-nearest neighbor, decision tree, and random forest classifiers were selected to build a predictive model named "Real-Time DDoS flood Attack Monitoring and Detection" (RT-AMD). The DDoS-2020 dataset was constructed with 70,020 records to evaluate RT-AMD's accuracy. DDoS-2020 covers three network/transport-level protocols: TCP, DNS, and ICMP. This article evaluates the proposed model by comparing its accuracy with related works. Our model has shown improved results and reaches real-time attack detection using incremental learning. The model achieved 99.38% accuracy for the random forest in real time in the cloud environment and 99.39% in local testing. RT-AMD was also evaluated on the NSL-KDD dataset, where it achieved 99.30% accuracy in real time in a cloud environment.
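The best-performing classifier in RT-AMD is a random forest; the scikit-learn sketch below shows the train/evaluate step on a synthetic stand-in for the (non-public) DDoS-2020 feature matrix.

# Hedged sketch of the random-forest detection step; features and labels
# are synthetic placeholders, not the DDoS-2020 records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.random((70_020, 10))                  # placeholder flow features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # placeholder attack label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))  # paper reports ~99.4%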

RevDate: 2022-06-20

Osmanoglu M, Demir S, B Tugrul (2022)

Privacy-preserving k-NN interpolation over two encrypted databases.

PeerJ. Computer science, 8:e965 pii:cs-965.

Cloud computing enables users to outsource their databases and computing functionalities to a cloud service provider, avoiding the cost of maintaining private storage and computational resources. It also provides universal access to data, applications, and services without location dependency. While cloud computing provides many benefits, it raises a number of security and privacy concerns. Outsourcing data to a cloud service provider in encrypted form may help to overcome these concerns. However, dealing with encrypted data makes it difficult for cloud service providers to perform some of the operations required in query processing tasks. Among the techniques employed in query processing, the k-nearest neighbor method draws attention due to its simplicity and efficiency, particularly on massive data sets. A number of k-nearest neighbor algorithms for query processing on a single encrypted database have been proposed. However, the performance of k-nearest neighbor algorithms on a single database may create accuracy and reliability problems; collaboration among different cloud service providers yields more accurate and more reliable results in query processing. Considering this, we focus on the k-nearest neighbor (k-NN) problem over two encrypted databases. We introduce a secure two-party k-NN interpolation protocol that enables a query owner to extract the interpolation of the k nearest neighbors of a query point from two different databases outsourced to two different cloud service providers. We also show that our protocol protects the confidentiality of the data and the query point, and hides data access patterns. Furthermore, we conducted a number of experiments to demonstrate the efficiency of our protocol. The results show that the running time of our protocol is linearly dependent on both the number of nearest neighbours and the data size.
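The quantity the protocol computes, stripped of its encryption and two-party interaction (which are the paper's actual contribution), is a weighted interpolation over the k nearest neighbours pooled from the two databases. A plaintext sketch using inverse-distance weighting, one common choice assumed here for illustration, looks like this:

# Plaintext k-NN interpolation over points pooled from two databases.
# Inverse-distance weighting is an illustrative assumption.
import numpy as np

def knn_interpolate(query, points, values, k=3):
    dists = np.linalg.norm(points - query, axis=1)
    nearest = np.argsort(dists)[:k]
    if dists[nearest[0]] == 0:                    # exact hit on a data point
        return float(values[nearest[0]])
    weights = 1.0 / dists[nearest]
    return float(np.average(values[nearest], weights=weights))

db_a = np.random.rand(50, 2); vals_a = np.random.rand(50)
db_b = np.random.rand(50, 2); vals_b = np.random.rand(50)
points = np.vstack([db_a, db_b])                  # union of the two databases
values = np.concatenate([vals_a, vals_b])
print(knn_interpolate(np.array([0.5, 0.5]), points, values))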

RevDate: 2022-06-21
CmpDate: 2022-06-21

Yuan G, Xie F, H Tan (2022)

Construction of Economic Security Early Warning System Based on Cloud Computing and Data Mining.

Computational intelligence and neuroscience, 2022:2080840.

Economic security is a core theoretical issue in economics. In modern economic conditions, the ups and downs caused by economic instability in any economic system will affect the stability of the financial market, bring huge losses to the economy, and affect the development of the whole national economy. Therefore, research on the regularity of economic security and economic fluctuations is an important part of ensuring economic stability and scientific development. Accurate monitoring and forecasting of economic security are an indispensable link in economic system regulation and an important reference for any economic organization's decisions. This article focuses on the construction of an economic security early warning system. It integrates cloud computing and data mining technologies, is supported by a CNN-SVM algorithm, and designs an early warning model that can adaptively evaluate and warn of the economic security state. Experiments show that when the CNN in the model uses the ReLU activation function and the SVM uses the RBF kernel, the prediction accuracy reaches 0.98, the best prediction performance. Verified on the data set, the model outputs a comprehensive economic security early warning index for province Q of 0.893 for 2018 and 0.829 for 2019, consistent with the actual situation.

RevDate: 2022-06-21
CmpDate: 2022-06-21

Yin X, J He (2022)

Construction of Tourism E-Commerce Platform Based on Artificial Intelligence Algorithm.

Computational intelligence and neuroscience, 2022:5558011.

In the late twentieth century, with the rapid development of the Internet, e-commerce emerged rapidly and changed the way people travel around the world. The greatest advantages of e-commerce are the free flow of information and data and the freedom to travel and experience different regions. Tourism is an important part of the development of e-commerce, but the development of tourism e-commerce lags behind. To address this backwardness, this article studies the construction of a tourism e-commerce platform based on an artificial intelligence algorithm. By introducing modern information technology based on a cloud computing platform, big data analysis, K-means clustering, and other key technologies, the article addresses the current state of e-commerce platform development. It also compares the construction methods of traditional and modern cloud platforms and derives construction methods suitable for artificial intelligence tourism. Combined with the actual situation of tourism, the article selects an appropriate networking method, based on an analysis of the advantages, disadvantages, and economics of wired and wireless coverage, to complete the project design. Its purpose is to ensure that the work meets specific construction needs and to build an artificial intelligence-based smart tourism big data analysis model, promoting the development of the tourism e-commerce industry and saving costs and improving efficiency for travel service providers. The article then conducts a demand analysis according to the actual situation of tourism, from the perspectives of tourists, scenic spots, service providers, and tourism administrative agencies. Experiments with the practical application of the artificial intelligence tourism mobile e-commerce platform show that the platform designed in this article can meet customers' needs for shopping for tourism-related commodities: visitors to attractions increased by 3.54%, and the economy of tourist destinations grew by 4.2%.

RevDate: 2022-06-20

Cheng W, Lian W, J Tian (2022)

Building the hospital intelligent twins for all-scenario intelligence health care.

Digital health, 8:20552076221107894 pii:10.1177_20552076221107894.

The COVID-19 pandemic has accelerated a long-term trend of smart hospital development. However, there is no consistent conceptualization of what a smart hospital entails. Few hospitals have genuinely become "smart," primarily because they fail to bring systems together and consider implications from all perspectives. Hospital Intelligent Twins are a new technology integration, powered by IoT, AI, cloud computing, and 5G applications, that creates all-scenario intelligence for health care and hospital management. This communication presents a smart hospital for all-scenario intelligence built by creating hospital Intelligent Twins. Intelligent Twins are widely involved in medical activities. However, resolving medical ethics issues, protecting patient privacy, and reducing the security risks involved are significant challenges for all-scenario intelligence applications. Creating hospital Intelligent Twins can be a worthwhile endeavor for assessing how to better inform evidence-based decision-making and enhance patient satisfaction and outcomes.

RevDate: 2022-06-17

Chen X, Xue Y, Sun Y, et al (2022)

Neuromorphic Photonic Memory Devices Using Ultrafast, Non-volatile Phase-change Materials.

Advanced materials (Deerfield Beach, Fla.) [Epub ahead of print].

The search for ultrafast photonic memory devices is inspired by the ever-increasing number of cloud computing, supercomputing, and artificial intelligence applications, together with the unique advantages of signal processing in the optical domain, such as high speed, large bandwidth, and low energy consumption. By combining silicon photonics with chalcogenide phase-change materials (PCMs), non-volatile integrated photonic memory has been developed with promising potential in photonic integrated circuits and nanophotonic applications. While conventional PCMs suffer from slow crystallization speed, scandium-doped antimony telluride (SST) has recently been developed for ultrafast phase-change random-access memory applications. We demonstrate an ultrafast non-volatile photonic memory based on an SST thin film with a 2-ns write/erase speed, the fastest write/erase speed yet reported in integrated phase-change photonic devices. SST-based photonic memories exhibit multi-level capabilities and good stability at room temperature. By mapping the memory level to the biological synapse weight, an artificial neural network based on photonic memory devices is successfully established for image classification. Additionally, we demonstrate a reflective nanodisplay application using SST with optoelectronic modulation capabilities. Both the optical and electrical changes in SST during the phase transition and the fast switching speed demonstrate its potential for use in photonic computing, neuromorphic computing, nanophotonics, and optoelectronic applications.

RevDate: 2022-06-20
CmpDate: 2022-06-20

Hassan J, Shehzad D, Habib U, et al (2022)

The Rise of Cloud Computing: Data Protection, Privacy, and Open Research Challenges-A Systematic Literature Review (SLR).

Computational intelligence and neuroscience, 2022:8303504.

Cloud computing is a long-standing dream of computing as a utility, where users can store their data remotely in the cloud and enjoy on-demand services and high-quality applications from a shared pool of configurable computing resources. Thus, the privacy and security of data are of utmost importance to all of its users, regardless of the nature of the data being stored. In cloud computing environments, this is especially critical because data are stored in various locations, even around the world, and users do not have any physical access to their sensitive data. Therefore, we need data protection techniques to protect the sensitive data that are outsourced to the cloud. In this paper, we conduct a systematic literature review (SLR) to illustrate the data protection techniques that protect sensitive data outsourced to cloud storage. The main objective of this research is to synthesize, classify, and identify important studies in the field; accordingly, an evidence-based approach is used. Preliminary results are based on answers to four research questions. Out of 493 research articles, 52 studies were selected. These 52 papers use different data protection techniques, which can be divided into two main categories: noncryptographic techniques and cryptographic techniques. Noncryptographic techniques consist of data splitting, data anonymization, and steganographic techniques, whereas cryptographic techniques consist of encryption, searchable encryption, homomorphic encryption, and signcryption. In this work, we compare all of these techniques in terms of data protection accuracy, overhead, and operations on masked data. Finally, we discuss the future research challenges facing the implementation of these techniques.

RevDate: 2022-06-20
CmpDate: 2022-06-20

Chen M (2022)

Integration and Optimization of British and American Literature Information Resources in the Distributed Cloud Computing Environment.

Computational intelligence and neuroscience, 2022:4318962.

Integrating resources is one of the most effective approaches to improving resource usage efficiency and the degree of resource aggregation, and many studies on the integration of information resources are available; search engines are the most well-known example. This article sets out to optimize the integration of British and American literature information resources by employing distributed cloud computing, based on the needs of British and American literature. The research develops a model for the distributed nature of cloud computing and optimizes the method by fitting mathematical models of transmission cost and latency. The article analyzes the weaknesses of the current integration of British and American literature information resources and optimizes the integration accordingly. According to this paper's experiments, the Random algorithm has the longest delay (maximum user-weighted distance), and the NPA-PDP and BWF algorithms have longer delays than the Opt algorithm. The percentage decline varies between 0.17 percent and 1.11 percent across algorithms. This demonstrates that the algorithm presented in this work can be used to integrate and make the most of British and American literature information resources.

RevDate: 2022-06-17
CmpDate: 2022-06-17

Chen Y, W Zhou (2022)

Application of Network Information Technology in Physical Education and Training System under the Background of Big Data.

Computational intelligence and neuroscience, 2022:3081523.

During the last two decades, rapid development in network technology, particularly hardware, has been observed, and the development of software technology has accelerated, resulting in the launch of a variety of novel products with a wide range of applications. Traditional sports training systems, on the other hand, have a single function and complex operation that cannot be fully implemented in colleges and universities, causing China's sports training to stagnate for a long time. The goal of physical education and training is to teach a specific action so that it reaches its maximum potential in a variety of ways. As a result, such a system should collect scientifically sound and trustworthy data to aid relevant staff in completing their training tasks. In the context of big data, network information technology has therefore become the main way to improve physical education systems. By applying cloud computing technology, machine vision technology, and 64-bit machine technology to the physical education training system, the system extracts video data, designs the video teaching process, and constructs a three-dimensional human model to analyze trainees' performance. In this paper, 30 basketball majors at a university were selected as the professional group and 30 computer majors as the control group, and the average reaction times, scores, and expert ratings of the two groups were analyzed. The results show that the professional group scored significantly higher than the amateur group. At the same time, feedback from students using the physical education and training system was compared with that from normal physical education teaching and training. After one week, the students trained with the system had improved their thinking ability, movement accuracy, and judgment, indicating that applying the physical education training system in practice is effective.

RevDate: 2022-06-14

Cheah CG, Chia WY, Lai SF, et al (2022)

Innovation designs of industry 4.0 based solid waste management: Machinery and digital circular economy.

Environmental research pii:S0013-9351(22)00946-X [Epub ahead of print].

The Industrial Revolution 4.0 (IR 4.0) offers the opportunity to improve the efficiency of managing solid waste through digital and machinery applications, effectively eliminating, recovering, and repurposing waste. This research aims to discover and review the potential of current technologies encompassing innovative Industry 4.0 designs for solid waste management. Machinery and processes emphasizing the circular economy were summarized and evaluated. The application of IR 4.0 technologies shows promising opportunities for improving the management and efficiency of solid waste. Machine learning (ML), artificial intelligence (AI), and image recognition can be used to automate the segregation of waste, reducing the risk of exposing labourers to harmful waste. Radio Frequency Identification (RFID) and wireless communications enable traceability of materials to better understand the opportunities in the circular economy. Additionally, the interconnectivity of systems and automatic transfer of data enable more complex systems with a larger solution space than was previously possible, such as centralised cloud computing, which reduces cost by eliminating the need for individual computing systems. Through this comprehensive review, innovative Industry 4.0 machinery and processes for waste management focused on the circular economy are identified, and the critical ones are evaluated briefly. It was found that current research applies Industry 4.0 technologies to individual waste management systems, which lacks the coherence needed to capitalise on technologies such as cloud computing, interconnectivity, and big data on a larger scale. Therefore, a real-world, comprehensive end-to-end integration aimed at optimizing every process within the solid waste management chain should be explored.

RevDate: 2022-06-13

Zhao Y, D Du (2022)

Research Orientation and Development of Social Psychology's Concept of Justice in the Era of Cloud Computing.

Frontiers in psychology, 13:902780.

With the maturation and rapid expansion of social psychology, great progress has been made in integrating social psychology with other disciplines. From the very beginning, social psychology was destined to have a diversified, multidisciplinary research orientation and disciplinary nature, which also makes it difficult to define social psychology within a single disciplinary field or a single research method. With the rapid development of the Internet, the emergence of cloud computing technology not only facilitates the orientation of psychological research but also promotes the emergence and development of new psychological subdisciplines. The purpose of this paper is therefore to study the orientation of social psychology and its current development in the context of the cloud computing era. The paper collects, organizes, and integrates research data on college students' views of justice from the perspective of social psychology using cloud computing technology, and uses empirical research methods to conduct an in-depth study of people's views of justice in social psychology. Data reports from college students on social justice issues were collected through cloud computing technology to make the results more accurate. The experimental results show that nearly 70% of college students pay close attention to social justice issues, clearly reflecting an optimistic trend in people's attention to justice issues in social psychology.

RevDate: 2022-06-10

Chu Z, Guo J, J Guo (2022)

Up-conversion Luminescence System for Quantitative Detection of IL-6.

IEEE transactions on nanobioscience, PP: [Epub ahead of print].

Interleukin-6 (IL-6) is a very important cytokine and an early predictor of survival in febrile patients (e.g., patients with COVID-19). With the worldwide outbreak of COVID-19, the medical detection of interleukin-6 has gradually become more significant, and a method for point-of-care (POCT) diagnosis and monitoring of IL-6 levels in patients is urgently needed. In this work, an up-conversion luminescence system (ULS) based on upconverting nanoparticles (UCNs) for the quantitative detection of IL-6 was designed. The ULS consists of a microcontroller unit (MCU), a transmission device, a laser, an image acquisition module, a Bluetooth module, etc. Through hardware-based acquisition and image-processing algorithms, we obtain a limit of detection (LOD) for IL-6 of 1 ng/mL, with a quantitative range from 1 to 200 ng/mL. The system is handheld, has good detection accuracy, and completes a measurement in 10 minutes. In addition, the system can connect to mobile terminals (smartphones, personal computers, etc.) or 5G cloud servers via Bluetooth and Wi-Fi. Patients and family members can view medical data through mobile terminals, and the data stored on the 5G cloud server can be used for edge computing and big data analysis. The system is suitable for the early diagnosis of infectious diseases such as COVID-19 and has good application prospects.
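
The abstract reports a 1-200 ng/mL quantitative range but does not give the image-processing algorithm. As a hedged illustration of the final quantification step only, the sketch below fits a log-log calibration curve to hypothetical standards and inverts it for unknown samples; all numbers are invented.

```python
import numpy as np

# Hypothetical calibration data: mean luminescence intensity (a.u.)
# measured for known IL-6 standards across the 1-200 ng/mL range.
standards_ng_ml = np.array([1, 5, 20, 50, 100, 200])
intensities = np.array([120, 410, 1350, 2900, 5100, 9200])

# Fit a log-log linear calibration curve: log(I) = a*log(C) + b.
a, b = np.polyfit(np.log(standards_ng_ml), np.log(intensities), 1)

def intensity_to_concentration(intensity):
    """Invert the calibration curve to estimate IL-6 (ng/mL)."""
    return float(np.exp((np.log(intensity) - b) / a))

print(intensity_to_concentration(2000))  # e.g. an unknown sample
```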

RevDate: 2022-06-10

Ito H, Nakamura Y, Takanari K, et al (2022)

"Development of a Novel Scar Screening System with Machine Learning".

Plastic and reconstructive surgery pii:00006534-990000000-00899 [Epub ahead of print].

BACKGROUND: Hypertrophic scars and keloids tend to cause serious functional and cosmetic impediments to patients. However, as these scars are not life-threatening, many patients do not seek proper treatment, so educating physicians and patients about these scars is important. The authors aimed to develop an algorithm for a scar screening system and to compare the accuracy of the system with that of physicians. The algorithm is designed to involve both healthcare providers and patients.

METHODS: Digital images were obtained from Google Images, open-access repositories, and patients in our hospital. After preprocessing, 3,768 images were uploaded to the Google Cloud AutoML Vision platform and labeled with one of four diagnoses: immature scar, mature scar, hypertrophic scar, or keloid. The consensus label for each image was compared with the label provided by physicians.

RESULTS: Across all diagnoses, the average precision (positive predictive value) of the algorithm was 80.7%, the average recall (sensitivity) was 71%, and the area under the curve (AUC) was 0.846. The algorithm made 77 correct diagnoses, for an accuracy of 77%, whereas the average physician accuracy was 68.7%. The Cohen's kappa coefficient of the algorithm was 0.69, versus 0.59 for the physicians.

CONCLUSIONS: We developed a computer vision algorithm that can diagnose four scar types using automated machine learning. Future iterations of this algorithm, with improved accuracy, can be embedded in telehealth and digital imaging platforms used by patients and primary care doctors. A scar screening system with machine learning may be a valuable support tool for physicians and patients.
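
The reported agreement statistics can be reproduced with standard tooling. A minimal sketch using hypothetical labels and scikit-learn (the paper's AutoML platform computes its own metrics):

```python
from sklearn.metrics import cohen_kappa_score, classification_report

# Hypothetical consensus labels vs. model predictions for the four classes.
classes = ["immature", "mature", "hypertrophic", "keloid"]
y_true = ["keloid", "mature", "immature", "hypertrophic", "keloid", "mature"]
y_pred = ["keloid", "mature", "immature", "keloid", "keloid", "immature"]

# Cohen's kappa measures agreement beyond chance (0.69 for the algorithm
# vs. 0.59 for physicians in the study).
print(cohen_kappa_score(y_true, y_pred))
print(classification_report(y_true, y_pred, labels=classes, zero_division=0))
```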

RevDate: 2022-06-10

Hanzelik PP, Kummer A, J Abonyi (2022)

Edge-Computing and Machine-Learning-Based Framework for Software Sensor Development.

Sensors (Basel, Switzerland), 22(11): pii:s22114268.

This research presents a framework that supports the development and operation of machine-learning (ML) algorithms to develop, maintain, and manage the whole lifecycle of software sensor models for complex chemical processes. Our motivation is to take advantage of ML and edge computing to offer innovative solutions to the chemical industry for difficult-to-measure laboratory variables. The purpose of software sensor models is to continuously forecast product quality in order to achieve effective quality control, maintain stable plant production conditions, and support efficient, environmentally friendly, and harmless laboratory work. The literature review shows that quite a few ML models have been developed in recent years to support the quality assurance of different types of materials; however, the problems of continuous operation, maintenance, and version control of these models have not yet been solved. The method uses ML algorithms and takes advantage of cloud services in an enterprise environment. Industry 4.0 technologies such as the Internet of Things (IoT), edge computing, cloud computing, ML, and artificial intelligence (AI) are the core techniques. The article outlines an information system structure and the related methodology based on data from a quality-assurance laboratory. During development, we encountered several challenges arising from the continuous development of ML models and the tuning of their parameters. The article discusses the development, version control, validation, lifecycle, and maintenance of ML models, along with a case study. The developed framework can continuously monitor the performance of the models and increase the amount of data behind them, so that the most accurate, data-driven, and up-to-date models are always available to quality-assurance engineers.
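
The paper's framework is not tied to any one tool, but model version control of the kind described is commonly done with a tracking server. A minimal sketch using MLflow and a stand-in regression model; the run name, parameters, and data are all hypothetical:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a difficult-to-measure laboratory variable.
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="soft-sensor-v1"):
    model = RandomForestRegressor(n_estimators=100).fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)                      # tracked config
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))  # tracked quality
    mlflow.sklearn.log_model(model, "model")                   # versioned artifact
```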

RevDate: 2022-06-10

Lin HY, Tsai TT, Ting PY, et al (2022)

An Improved ID-Based Data Storage Scheme for Fog-Enabled IoT Environments.

Sensors (Basel, Switzerland), 22(11): pii:s22114223.

In a fog-enabled IoT environment, a fog node acts as a proxy between end users and cloud servers to reduce the latency of data transmission and thereby meet the requirements of real-time applications. A data storage scheme utilizing a fog computing architecture allows a user to share cloud data with other users with the assistance of fog nodes. In particular, a fog node holding a re-encryption key from the data owner can convert a cloud ciphertext into one that is decryptable by another designated user. In such a scheme, the proxy should learn nothing about the plaintext during the transmission and re-encryption processes. In 2020, an ID-based data storage scheme utilizing anonymous key generation in fog computing was proposed. Although that protocol is provably secure in the random oracle model, we point out several security flaws inherent in it. Building on that work, we present an improved variant that not only eliminates the security weaknesses but also preserves the functionalities of anonymous key generation and a user revocation mechanism. Additionally, under the Decisional Bilinear Diffie-Hellman (DBDH) assumption, we demonstrate that our enhanced construction is provably secure in the IND-PrID-CPA security notion.
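
The paper's pairing-based construction is not reproduced here. As a hedged toy illustration of the proxy re-encryption idea the abstract describes (a fog node converting a ciphertext with a re-encryption key, without learning the plaintext), here is a classic BBS98-style ElGamal sketch over a small Schnorr group; the parameters are illustrative only and nowhere near secure sizes:

```python
import secrets

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 1019, 509, 4

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):                 # m must be an element of the subgroup
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))    # (m*g^r, g^{a*r})

def decrypt(sk, c):
    c1, c2 = c
    gr = pow(c2, pow(sk, -1, q), p)                 # recover g^r
    return c1 * pow(gr, -1, p) % p

def rekey(sk_a, sk_b):              # delegation key a -> b
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, c):               # the proxy never sees m
    c1, c2 = c
    return (c1, pow(c2, rk, p))     # g^{a*r} -> g^{b*r}

a, pk_a = keygen()
b, pk_b = keygen()
m = pow(g, 42, p)                   # toy message encoded in the subgroup
c_b = reencrypt(rekey(a, b), encrypt(pk_a, m))
assert decrypt(b, c_b) == m
```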

RevDate: 2022-06-10

Bhatia S, Alsuwailam RI, Roy DG, et al (2022)

Improved Multimedia Object Processing for the Internet of Vehicles.

Sensors (Basel, Switzerland), 22(11): pii:s22114133.

The combination of edge computing and deep learning enables intelligent edge devices that can make conditional decisions using comparatively secure and fast machine learning algorithms. An automated car acting as the data-source node of an intelligent Internet of Vehicles (IoV) system is one example. Our motivation is to obtain more accurate and rapid object detection using the intelligent cameras of a smart car. The supervision camera of the smart automobile model utilizes multimedia data for real-time threat detection. The corresponding network combines cooperative multimedia data processing, Internet of Things (IoT) data handling, validation, computation, precise detection, and decision making. These actions face delays when offloading data to the cloud and synchronizing with other nodes. The proposed model follows a cooperative machine learning technique, distributing the computational load by slicing real-time object data among analogous intelligent IoT nodes and parallelizing vision processing between connected edge clusters. As a result, the system increases the computational rate and improves accuracy through responsible resource utilization and active-passive learning. We achieved low latency and higher accuracy for object identification through real-time multimedia data objectification.

RevDate: 2022-06-10

Jiao Z, Zhou F, Wang Q, et al (2022)

RPVC: A Revocable Publicly Verifiable Computation Solution for Edge Computing.

Sensors (Basel, Switzerland), 22(11): pii:s22114012.

With the development of publicly verifiable computation (PVC), users with limited resources prefer to outsource computing tasks to cloud servers. However, existing PVC schemes are mainly designed for cloud computing scenarios, which imposes bandwidth consumption and network delay on IoT devices in edge computing. In addition, dishonest edge servers may reduce resource utilization by returning unreliable results. Therefore, we propose a revocable publicly verifiable computation (RPVC) scheme for edge computing. On the one hand, RPVC ensures that users can verify the correctness of results at a small cost; on the other hand, it can revoke the computing abilities of dishonest edge servers. First, polynomial commitments are employed to reduce proof length and generation time. Then, we improve a revocable group signature with knowledge signatures and subset covering theory, which makes it possible to revoke dishonest edge servers. Finally, theoretical analysis proves that RPVC is correct and secure, and experiments evaluate its efficiency.
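
The RPVC scheme's polynomial commitments and group signatures are not reproduced here. As a generic, hedged illustration of verifying an outsourced computation far more cheaply than redoing it, the sketch below uses Freivalds' probabilistic check for matrix multiplication, a different but classic verifiable-computation technique:

```python
import numpy as np

rng = np.random.default_rng(0)

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that A @ B == C in O(n^2) work per round,
    versus O(n^3) to recompute the product outright."""
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))        # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                           # caught a wrong result
    return True                                    # correct with high probability

A = rng.integers(0, 10, (64, 64))
B = rng.integers(0, 10, (64, 64))
print(freivalds_check(A, B, A @ B))   # True: honest server
bad = A @ B
bad[0, 0] += 1
print(freivalds_check(A, B, bad))     # almost surely False: cheating server
```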

RevDate: 2022-06-09

Loo WK, Hasikin K, Suhaimi A, et al (2022)

Systematic Review on COVID-19 Readmission and Risk Factors: Future of Machine Learning in COVID-19 Readmission Studies.

Frontiers in public health, 10:898254.

In this review, current studies on hospital readmission due to COVID-19 infection were discussed, compared, and evaluated in order to understand current trends and progress in mitigating hospital readmissions due to COVID-19. The Boolean expression ("COVID-19" OR "covid19" OR "covid" OR "coronavirus" OR "Sars-CoV-2") AND ("readmission" OR "re-admission" OR "rehospitalization" OR "rehospitalisation") was used in five databases, namely Web of Science, Medline, Science Direct, Google Scholar, and Scopus. The search yielded 253 articles, which were screened down to 26. Overall, most of the studies focus on readmission rates rather than mortality rates. Reported readmission rates range from a low of 4.2% (Ramos-Martínez et al., Spain) to a high of 19.9% (Donnelly et al., United States). Most of the studies (n = 13) use an inferential statistical approach, while only one uses a machine learning approach. Data sizes range from 79 to 126,137 records. However, there is no specific guide for setting the most suitable data size for a study, and results cannot be compared in terms of accuracy, as all are regional studies that do not involve multi-region data. Logistic regression is prevalent in research on risk factors of readmission after COVID-19 admission, although each study reports different outcomes. From the word cloud, age is the most dominant risk factor for readmission, followed by diabetes, long length of stay, COPD, CKD, liver disease, metastatic disease, and CAD. Several future research directions are proposed, including the use of machine learning in statistical analysis, investigation of dominant risk factors, experimental design of interventions to curb dominant risk factors, and scaling data collection from single-center to multi-center.

RevDate: 2022-06-09

Ghosh S, A Mukherjee (2022)

STROVE: spatial data infrastructure enabled cloud-fog-edge computing framework for combating COVID-19 pandemic.

Innovations in systems and software engineering pii:458 [Epub ahead of print].

The outbreak of the 2019 novel coronavirus (COVID-19) has triggered unprecedented challenges and put the whole world in a parlous condition. The impacts of COVID-19 are a matter of grave concern in terms of fatality rate, socioeconomic conditions, and health infrastructure. Pharmaceutical solutions (vaccines) alone cannot eradicate this pandemic; effective strategies regarding lockdown measures, restricted mobility, and emergency services to users (in brief, a data-driven decision system) are of utmost importance. This necessitates an efficient data analytics framework, a data infrastructure to store and manage pandemic-related information, and a distributed computing platform to support such data-driven operations. In the past few decades, Internet of Things (IoT) devices and applications have emerged significantly in various sectors, including healthcare and time-critical applications. Specifically, health sensors accumulate health-related parameters at different times of day, and movement sensors keep track of the user's mobility traces and help to assist them in varied conditions. Smartphones are equipped with several such sensors, and the ability of low-cost connected sensors to cover large areas makes them the most useful components for combating pandemics such as COVID-19. However, analysing and managing the huge amount of data generated by these sensors is a big challenge. In this paper we propose a unified framework with three major components: (i) a Spatial Data Infrastructure (SDI) to manage, store, analyse, and share spatio-temporal information with stakeholders efficiently; (ii) a Cloud-Fog-Edge-based hierarchical architecture to support preliminary diagnosis and to monitor patients' mobility, health parameters, and activities while they are in quarantine or home-based treatment; and (iii) assistance to users in varied emergency situations leveraging efficient data-driven techniques at low latency and energy consumption. The mobility data analytics, along with the SDI, interpret the movement dynamics of the region and correlate them with COVID-19 hotspots. Further, the Cloud-Fog-Edge-based system architecture provisions healthcare services efficiently and in a timely manner. The proposed framework yields encouraging results in taking decisions based on the COVID-19 context and assisting users effectively, enhancing the accuracy of detecting suspected infected people by ∼24% and reducing delay by ∼55% compared to a cloud-only system.

RevDate: 2022-06-09

Zhang Y, Zhao H, D Peng (2022)

Exploration and Research on Smart Sports Classrooms in Colleges in the Information Age.

Applied bionics and biomechanics, 2022:2970496.

Smart classrooms, made possible by the growing use of Internet information technology in education, are one of the important foundations for realizing smart education and have become a hot direction in the development of educational information innovation. This study proposes ideas and directions for smart sports teaching research in colleges and universities in the information age (IA). The smart classroom is an intelligent and efficient classroom created by "Internet +" thinking and a new generation of information technologies such as big data and cloud computing. This article puts forward exploratory research methods for smart sports classrooms in colleges and universities in the IA, including document retrieval, expert interviews, questionnaire surveys, practical research, and field investigation, which are used in the exploration and study of college smart sports classrooms. According to the findings of this study, 96.34 percent of students have a positive attitude toward the smart sports classroom teaching model, which is favorable to the growth of smart sports classroom teaching.

RevDate: 2022-06-09

Nair R, Zafrullah SN, Vinayasree P, et al (2022)

Blockchain-Based Decentralized Cloud Solutions for Data Transfer.

Computational intelligence and neuroscience, 2022:8209854.

Cloud computing has increased its service area and improved user experience over traditional platforms through virtualization and resource integration, resulting in substantial economic and societal advantages. However, cloud computing faces a significant security and trust dilemma, requiring a trust-enabled transaction environment. The typical cloud trust model is centralized, resulting in high maintenance costs, network congestion, and even single points of failure. Also, due to a lack of openness and traceability, trust rating findings are not universally acknowledged. Blockchain is a novel, decentralised computing paradigm whose unique operational principles and record traceability assure the integrity, undeniability, and security of transaction data, making it ideal for building a distributed and decentralised trust infrastructure. This study addresses the difficulty of transferring data and the related permission policies from the cloud to distributed file systems (DFS). Our aims include moving data files from the cloud to a distributed file system and developing a corresponding cloud policy. In a DFS, no node is privileged, and all data storage depends on content addressing. In our implementation, the data files are moved from Amazon S3 buckets to the InterPlanetary File System (IPFS).
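
A minimal sketch of the S3-to-IPFS migration step, assuming configured AWS credentials, a running local IPFS daemon, and hypothetical bucket and key names (the paper's policy-migration logic is not shown):

```python
import boto3
import ipfshttpclient

# Hypothetical bucket/key/path names for illustration only.
BUCKET, KEY, LOCAL = "my-data-bucket", "records/file.bin", "/tmp/file.bin"

s3 = boto3.client("s3")
s3.download_file(BUCKET, KEY, LOCAL)        # pull the object out of S3

with ipfshttpclient.connect() as client:    # defaults to the local daemon API
    res = client.add(LOCAL)                 # content-addressed storage
    print("IPFS CID:", res["Hash"])         # this hash now addresses the data
```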

RevDate: 2022-06-08

Anderson B, Cameron J, Jefferson U, et al (2022)

Designing a Cloud-Based System for Affordable Cyberinfrastructure to Support Software-Based Research.

Studies in health technology and informatics, 290:489-493.

Interest in cloud-based cyberinfrastructure among higher-education institutions is growing rapidly, driven by the need to realize cost savings and access enhanced computing resources. Through a nonprofit entity, we have created a platform that provides hosting and software support services enabling researchers to responsibly build on cloud technologies. However, there are technical, logistic, and administrative challenges if this platform is to support all types of research. Software-enhanced research is distinctly different from industry applications, typically characterized by lower availability requirements, greater flexibility, and fewer resources for upkeep. We describe a swarm environment specifically designed for research in academic settings and our experience developing an operating model for sustainable cyberinfrastructure. We also present three case studies illustrating the types of applications supported by the cyberinfrastructure and explore techniques that address specific application needs. Our findings demonstrate safer, faster, cheaper cloud services achieved by recognizing the intrinsic properties of academic research environments.

RevDate: 2022-06-08

Ruokolainen J, Haladijan J, Juutinen M, et al (2022)

Mobilemicroservices Architecture for Remote Monitoring of Patients: A Feasibility Study.

Studies in health technology and informatics, 290:200-204.

Recent developments in smart mobile devices (SMDs), wearable sensors, the Internet, mobile networks, and computing power provide new healthcare opportunities that are not restricted geographically. This paper introduces a Mobilemicroservices Architecture (MMA) based on a study of candidate architectures. In MMA, an HTTP-based Mobilemicroservice (MM) is allocated to each SMD sensor. The key benefits are extendibility, scalability, ease of use for the patient, security, and the ability to collect raw data without involving cloud services. Feasibility was investigated in a two-year project in which MMA-based solutions were used to collect motor function data from patients with Parkinson's disease. First, we collected motor function data from 98 patients and healthy controls during their visits to a clinic. Second, we monitored the same subjects in real time for three days in their everyday living environment. These MMA applications represent HTTP-based business-logic computing in which the SMDs' resources are accessible globally.
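
A minimal sketch of one HTTP-based Mobilemicroservice bound to a single sensor, using Flask; the sensor-reading function is a hypothetical stand-in for the device API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def read_accelerometer():
    """Hypothetical stand-in for the SMD's native sensor API."""
    return {"x": 0.01, "y": -0.02, "z": 9.81}

@app.route("/sensors/accelerometer")
def accelerometer():
    # Raw data is served directly from the device over HTTP, so no cloud
    # service has to sit between the patient and the clinician.
    return jsonify(read_accelerometer())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```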

RevDate: 2022-06-07

Khan NJ, Ahamad G, M Naseem (2022)

An IoT/FOG based framework for sports talent identification in COVID-19 like situations.

International journal of information technology : an official journal of Bharati Vidyapeeth's Institute of Computer Applications and Management pii:984 [Epub ahead of print].

COVID-19 crippled all domains of our society. The inevitable lockdowns and social distancing procedures severely hit the process of traditional sports talent identification (TiD). This interrupts the career excellence of athletes and will also affect future talent in the years to come. We explore the effect of COVID-19 on sports talent identification and propose an IoT/Fog-based framework for the TiD process during COVID-19 and COVID-like situations. Our proposed novel six-layer model facilitates remote sports talent identification using the latest information and communication technologies such as IoT, fog, and cloud computing. All stakeholders, such as experts, coaches, players, and institutes, are taken into consideration. The framework is mobile, widely accessible, scalable, cost-effective, secure, platform/location independent, and fast. A brief case study of cricket talent identification using the proposed framework is also provided.

RevDate: 2022-06-07

Li K (2022)

Application of Artificial Intelligence System Based on Wireless Sensor Network in Enterprise Management.

Computational intelligence and neuroscience, 2022:2169521.

As the ability to acquire natural information improves, wireless sensor networks must also transmit the corresponding information they collect. Wireless sensor nodes, as key components of wireless sensors, have great application prospects, and different wireless sensors play a decisive role in the operation of wireless network applications. With the continuous development of wireless sensor networks, existing nodes exhibit limitations such as inflexible structure, low variability, and low versatility. The learning capabilities and neural networks obtained by different artificial intelligence expert systems also differ: on the one hand, such systems can meet users' needs for information systems to a certain extent; on the other hand, they can help accelerate the development of computer science. At present, the new generation of information technology industry is listed among the country's seven emerging strategic industries. Cloud computing technology has gradually extended to important corporate governance capabilities in information technology, and its intelligent application is replacing traditional enterprise management technology. Efficiency management and risk management can improve the quality and business capabilities of the entire enterprise, allow system applications to be improved according to the actual situation of the enterprise, and support the healthy and sustainable development of the enterprise, thereby promoting the sustainable development of the computer technology industry.

RevDate: 2022-06-07

Yang M, Gao C, J Han (2022)

Edge Computing Deployment Algorithm and Sports Training Data Mining Based on Software Defined Network.

Computational intelligence and neuroscience, 2022:8056360.

Wireless sensor networks collect data from various areas through specific network nodes and upload it to the decision-making layer for analysis and processing; they have therefore become the perception layer of the Internet of Things and have achieved much in monitoring and prevention. At this stage, the main problem is the power supply of sensor nodes, so energy storage and transmission in wireless sensor networks are urgent issues. Mobile edge computing provides a new type of technology for today's edge networks, enabling them to process resource-intensive data blocks and give timely feedback to managers. Compared with traditional cloud computing services, its transmission is more efficient, and it will be widely used across industries in the future. Among these, education and related industries urgently need in-depth information, which in turn promotes the rapid development of data mining over sensor networks. This article focuses on data mining technology, explaining its meaning and main methods, and mines sports training requirements in terms of requirement collection and analysis, algorithm design and optimization, and requirement results and realization. The system monitors training status and gives the trainer reasonable suggestions. By processing the training data mining results and proofreading the standardized training data in the database, a personalized program suitable for athletes can be formulated, reducing sports injuries caused by training without a trainer's guidance and opening new doors for training modes. Therefore, this paper studies sensor network technology, an edge computing deployment algorithm, and sports training data mining.

RevDate: 2022-06-07

Zhong M, Ali M, Faqir K, et al (2022)

China Pakistan Economic Corridor Digital Transformation.

Frontiers in psychology, 13:887848.

The China-Pakistan Economic Corridor (CPEC) vision and mission are to improve the living standards of the people of Pakistan and China through bilateral investments, trade, cultural exchanges, and economic activities. To achieve this envisioned dream, Pakistan established the China-Pakistan Economic Corridor Authority (CPECA) to further its completion, but Covid-19 slowed it down. This situation compelled the digitalization of CPEC. This article reviews the best practices and success stories of various digitalization and e-governance programs and, in this light, advises the adoption of the Ajman Digital Governance (ADG) model as a theoretical framework for CPEC digitalization. It concludes that the Pakistani government should set up a CPEC Digitalization and Transformation Center (DTC) at the CPECA office to attract more investors and businesses.

RevDate: 2022-06-07

Butt UA, Amin R, Aldabbas H, et al (2022)

Cloud-based email phishing attack using machine and deep learning algorithm.

Complex & intelligent systems pii:760 [Epub ahead of print].

Cloud computing refers to the on-demand availability of computer system assets, specifically data storage and processing power, without direct input from the client. Emails are commonly used to send and receive data for individuals or groups, and financial data, credit reports, and other sensitive information are often sent via the Internet. Phishing is a fraudster's technique for obtaining sensitive data from users by appearing to come from trusted sources; a phished email can misdirect the recipient into revealing secret data. The central problem addressed here is email phishing attacks during the sending and receiving of email, where an attacker sends spam via email and harvests the victim's data once the message is opened and read. This has become a major problem in recent years. This paper uses legitimate and phishing datasets of different sizes, detects new emails, and applies different features and algorithms for classification. A modified dataset was created after evaluating existing approaches. We created a feature-extracted comma-separated values (CSV) file and label file, and applied the support vector machine (SVM), Naive Bayes (NB), and long short-term memory (LSTM) algorithms. The experiments treat the recognition of a phished email as a classification problem. According to the comparison and implementation, SVM, NB, and LSTM all detect email phishing attacks accurately, achieving highest accuracies of 99.62%, 97%, and 98%, respectively.
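
A hedged sketch of the classification setup for two of the three classifiers, using scikit-learn with TF-IDF features on a tiny invented corpus (the paper's feature CSV and LSTM model are not reproduced):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny hypothetical corpus; 1 = phishing, 0 = legitimate.
emails = [
    "Verify your account now to avoid suspension",
    "Quarterly report attached for your review",
    "Your password expires today, click this link",
    "Lunch meeting moved to 1pm tomorrow",
]
labels = [1, 0, 1, 0]

for clf in (LinearSVC(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)   # features + classifier
    model.fit(emails, labels)
    print(type(clf).__name__,
          model.predict(["Click here to verify your password"]))
```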

RevDate: 2022-06-06

Kumar RR, Tomar A, Shameem M, et al (2022)

OPTCLOUD: An Optimal Cloud Service Selection Framework Using QoS Correlation Lens.

Computational intelligence and neuroscience, 2022:2019485.

Cloud computing has grown as a computing paradigm over the last few years. Due to the explosive increase in the number of cloud services, QoS (quality of service) becomes an important factor in service filtering, and comparing the functionality of cloud services with different performance metrics is a nontrivial problem. Optimal cloud service selection is therefore quite challenging and extremely important for users. In existing approaches to cloud service selection, preferences are supplied by the user in quantitative form; given fuzziness and subjectivity, it is difficult for users to express clear preferences. Moreover, many QoS attributes are not independent but interrelated, so the existing weighted-summation method cannot accommodate correlations among QoS attributes and produces inaccurate results. To resolve this problem, we propose a cloud service framework that takes the user's preferences and chooses the optimal cloud service based on the user's QoS constraints. We propose a cloud service selection algorithm based on principal component analysis (PCA) and the best-worst method (BWM), which eliminates the correlations between QoS attributes and provides the best cloud services with the best QoS values for users. Finally, a numerical example validates the effectiveness and feasibility of the proposed methodology.
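
A minimal sketch of the PCA step that decorrelates QoS attributes before scoring, on an invented QoS matrix; the BWM-derived weights are replaced by a stand-in vector:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical QoS matrix: rows = candidate cloud services,
# columns = correlated attributes (latency, throughput, cost, availability).
qos = np.array([
    [120, 450, 0.10, 99.9],
    [ 80, 700, 0.15, 99.5],
    [200, 300, 0.05, 99.0],
    [ 95, 650, 0.12, 99.8],
])

# Standardize, then project onto uncorrelated principal components.
z = StandardScaler().fit_transform(qos)
scores = PCA(n_components=2).fit_transform(z)

# Rank services by a weighted sum of component scores; in the paper the
# weights come from the best-worst method (BWM), here just a stand-in.
weights = np.array([0.7, 0.3])
print(np.argsort(-(scores @ weights)))   # service indices, best to worst
```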

RevDate: 2022-06-07

Shao D, Kellogg G, Mahony S, et al (2020)

PEGR: a management platform for ChIP-based next generation sequencing pipelines.

PEARC20: Practice and Experience in Advanced Research Computing 2020: Catch the Wave, July 27-31, 2020, Portland, OR (virtual conference), 2020:285-292.

There has been a rapid development in genome sequencing, including high-throughput next generation sequencing (NGS) technologies, automation in biological experiments, new bioinformatics tools and utilization of high-performance computing and cloud computing. ChIP-based NGS technologies, e.g. ChIP-seq and ChIP-exo, are widely used to detect the binding sites of DNA-interacting proteins in the genome and help us to have a deeper mechanistic understanding of genomic regulation. As sequencing data is generated at an unprecedented pace from the ChIP-based NGS pipelines, there is an urgent need for a metadata management system. To meet this need, we developed the Platform for Eukaryotic Genomic Regulation (PEGR), a web service platform that logs metadata for samples and sequencing experiments, manages the data processing workflows, and provides reporting and visualization. PEGR links together people, samples, protocols, DNA sequencers and bioinformatics computation. With the help of PEGR, scientists can have a more integrated understanding of the sequencing data and better understand the scientific mechanisms of genomic regulation. In this paper, we present the architecture and the major functionalities of PEGR. We also share our experience in developing this application and discuss the future directions.

RevDate: 2022-06-03

Ma S, ZP Liu (2022)

Machine learning potential era of zeolite simulation.

Chemical science, 13(18):5055-5068 pii:d2sc01225a.

Zeolites, owing to their great variety and structural complexity and their wide applications in chemistry, have long been a hot topic in chemical research. This perspective first presents a short retrospective of theoretical investigations of zeolites, from classical force fields to quantum mechanics calculations and on to the latest machine learning (ML) potential simulations. ML potentials, as the next-generation technique for atomic simulation, open new avenues to simulate and interpret zeolite systems and thus hold great promise for finally predicting the structure-functionality relation of zeolites. Recent advances using ML potentials are then summarized from two main aspects: the origin of zeolite stability and the mechanisms of zeolite-related catalytic reactions. We also discuss possible scenarios for ML potential applications aiming to provide instantaneous and easy access to zeolite properties; these advanced applications can now be accomplished by combining cloud-computing-based techniques with ML potential-based atomic simulations. The future development of ML potentials for zeolites is finally outlined with respect to improving calculation accuracy, expanding application scope, and constructing zeolite-related datasets.

RevDate: 2022-06-02

Francini S, G Chirici (2022)

A Sentinel-2 derived dataset of forest disturbances occurred in Italy between 2017 and 2020.

Data in brief, 42:108297 pii:S2352-3409(22)00499-1.

Forests absorb 30% of human emissions associated with fossil fuel burning; for this reason, monitoring forest disturbances is needed for assessing the greenhouse gas balance. However, in several countries, information regarding the spatio-temporal distribution of forest disturbances is missing. Remote sensing data, and the new Sentinel-2 satellite missions in particular, represent a game-changer in this area. Here we provide a spatially explicit dataset (10-meter resolution) of Italian forest disturbances and their magnitude from 2017 to 2020, constructed using Sentinel-2 level-1C imagery and the Google Earth Engine (GEE) implementation of the 3I3D algorithm. For each year between 2017 and 2020, we provide three datasets: (i) a change-magnitude map (values between 0 and 255), (ii) a categorical map of forest disturbances, and (iii) a categorical map, obtained by stratification of the previous maps, that can be used to estimate the areas of several different forest disturbances. The data represent the state of the art for Mediterranean ecosystems in terms of omission and commission errors. They support greenhouse gas balance assessment, forest sustainability assessment, and decision-makers in forest management; they help forest companies monitor harvesting activity over space and time; and, supported by reference data, they can be used to obtain the national estimates of forest harvesting and disturbance that Italy is called upon to provide.
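
A hedged sketch of assembling the kind of annual Sentinel-2 level-1C stack the dataset is built from, using the Earth Engine Python API; the 3I3D change-detection algorithm itself is not reproduced, and the cloud-cover threshold is an assumption:

```python
import ee

ee.Initialize()

# Country boundary from the FAO GAUL dataset hosted in Earth Engine.
italy = ee.FeatureCollection("FAO/GAUL/2015/level0") \
          .filter(ee.Filter.eq("ADM0_NAME", "Italy"))

# One year of Sentinel-2 level-1C scenes over Italy, lightly cloud-filtered.
s2 = (ee.ImageCollection("COPERNICUS/S2")
        .filterBounds(italy)
        .filterDate("2019-01-01", "2020-01-01")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 30)))

print(s2.size().getInfo())   # number of usable scenes for the year
```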

RevDate: 2022-06-01

Sakshuwong S, Weir H, Raucci U, et al (2022)

Bringing chemical structures to life with augmented reality, machine learning, and quantum chemistry.

The Journal of chemical physics, 156(20):204801.

Visualizing 3D molecular structures is crucial to understanding and predicting their chemical behavior; however, static 2D hand-drawn skeletal structures remain the preferred method of chemical communication. Here, we combine cutting-edge technologies in augmented reality (AR), machine learning, and computational chemistry to develop MolAR, an open-source mobile application for visualizing molecules in AR directly from their hand-drawn chemical structures. Users can also visualize any molecule or protein directly from its name or Protein Data Bank ID and compute chemical properties in real time via quantum chemistry cloud computing. MolAR provides an easily accessible platform for the scientific community to visualize and interact with 3D molecular structures in an immersive and engaging way.

RevDate: 2022-06-01

Sauber AM, El-Kafrawy PM, Shawish AF, et al (2021)

A New Secure Model for Data Protection over Cloud Computing.

Computational intelligence and neuroscience, 2021:8113253.

The main goal of any cloud data storage model is to provide easy access to data without risking its security; security is therefore a major consideration in any cloud data storage model. In this paper, we propose a secure data protection model over the cloud. The proposed model addresses several cloud security issues, including protecting data from violations and from users with fake authorized identities, which adversely affect cloud security. The paper surveys the issues and challenges of cloud computing that impair the security and privacy of data, and presents the threats and attacks that affect data residing in the cloud. Our proposed model provides the benefits and effectiveness of security in cloud computing, such as enhanced encryption of data in the cloud, and offers secure, scalable data sharing for users on the cloud. The model achieves core security functions over cloud computing, namely identification and authentication, authorization, and encryption, and it protects the system from any fake data owner who enters malicious information that could undermine the main goal of cloud services. We use a one-time password (OTP) as a login and uploading technique to protect users and data owners from fake, unauthorized access to the cloud. We implement our model in a simulation called Next Generation Secure Cloud Server (NG-Cloud). The results show improved protection of end users and data owners from fake users and fake data owners in the cloud.
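
The paper does not specify which OTP construction NG-Cloud uses. As one common choice, here is a self-contained sketch of RFC 4226 HOTP using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (a common OTP construction;
    assumed here for illustration, not necessarily the paper's variant)."""
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client share the secret; each login consumes one counter value.
print(hotp(b"shared-secret", counter=1))
```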

RevDate: 2022-06-01

Algani YMA, Boopalan DK, Elangovan DG, et al (2022)

Autonomous Service for Managing Real Time Notification in Detection of Covid-19 Virus.

Computers & electrical engineering : an international journal pii:S0045-7906(22)00370-6 [Epub ahead of print].

The most prominent public health issue in medicine today is the rapid spread of viral disease, the seriousness of which lies in its fast-spreading nature. The main aim of this study is to propose a framework for the early detection and forecasting of COVID-19 infection among people, so that precautionary measures can be taken to prevent the spread of the disease across the world. The proposed framework has four stages: collection of the necessary data, classification of the collected information, mining and extraction, and finally decision modelling. Since the frequency of infection is often predictive, the probabilistic examination is measured as a degree of membership characterised by the related fever measure. The predictions are then realised using a temporal RNN. The model provides effective outcomes in classification efficiency, reliability, and prediction viability.

RevDate: 2022-06-01

Rudrapati R (2022)

Using industrial 4.0 technologies to combat the COVID-19 pandemic.

Annals of medicine and surgery (2012), 78:103811.

The COVID-19 (coronavirus) pandemic has led to a surge in demand for healthcare devices, precautions, and medicines, along with advanced information technology. Controlling the coronavirus to prevent the deaths of innocent people has become a global mission. The fourth industrial revolution (I4.0) is a new approach to thinking that is proposed across a wide range of industries and services to achieve greater success and quality of life. Several initiatives associated with Industry 4.0 are expected to make a difference in the fight against COVID-19. Implementing I4.0 components effectively could reduce barriers between patients and healthcare workers and improve communication between them. The present study reviews the components of I4.0 and the related tools used to combat the coronavirus, highlighting the benefits of each I4.0 component for controlling the spread of COVID-19. The study concludes that I4.0 technologies could provide an effective solution to local as well as global medical crises in an innovative way.

RevDate: 2022-05-31

Wang C, M Zhang (2022)

The road to change: Broadband China strategy and enterprise digitization.

PloS one, 17(5):e0269133 pii:PONE-D-21-39394.

The digitization of a company necessitates not only the effort of the company itself but also state backing of network infrastructure. In this study, we applied the difference-in-differences method to examine the impact of the Broadband China Strategy on corporate digitalization and its heterogeneity, using data from Chinese listed firms from 2010 to 2020. The results show that improvements in network infrastructure play a vital role in promoting company digitization, though the effect is highly heterogeneous due to differences in market demand and endowments: non-state-owned firms, businesses in the eastern region, and technology-intensive businesses have profited the most. Among the five types of digitization, artificial intelligence and cloud computing are the top priorities for enterprises. Our findings add to the literature on the spillover effects of broadband construction and the factors affecting enterprise digitalization.
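
A minimal sketch of the difference-in-differences estimate on an invented firm-year panel, using statsmodels; the coefficient on the interaction term is the DiD effect:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: `treated` marks firms in Broadband China pilot cities,
# `post` marks years after the policy took effect. All values are invented.
df = pd.DataFrame({
    "digitization": [1.0, 1.1, 1.2, 2.0, 0.9, 1.0, 1.1, 1.2],
    "treated":      [1, 1, 1, 1, 0, 0, 0, 0],
    "post":         [0, 0, 1, 1, 0, 0, 1, 1],
})

# `treated * post` expands to treated + post + treated:post;
# the treated:post coefficient is the difference-in-differences estimate.
model = smf.ols("digitization ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```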

RevDate: 2022-05-31

Martín A, D Camacho (2022)

Recent advances on effective and efficient deep learning-based solutions.

This editorial briefly analyses, describes, and summarizes a set of selected papers published in a special issue focused on deep learning methods and architectures and their application to several domains and research areas. The selected and published articles cover two basic aspects of deep learning (DL) methods: the efficiency of the models and the effectiveness of the architectures. These papers revolve around different interesting application domains such as health (e.g., cancer, polyps, melanoma, mental health), wearable technologies, solar irradiance, social networks, cloud computing, wind turbines, object detection, music, and electricity, among others. This editorial provides a short description of each published article and a brief analysis of its main contributions.

RevDate: 2022-05-31

Yan EG, NH Arzt (2022)

A Commentary on Process Improvements to Reduce Manual Tasks and Paper at Covid-19 Mass Vaccination Points of Dispensing in California.

Journal of medical systems, 46(7):47.

My Turn is software used to manage several Covid-19 mass vaccination campaigns in California. The objective of this article is to describe the use of My Turn at two points of dispensing in California and to comment on process improvements to reduce manual tasks and paper across six identified vaccination processes: registration, scheduling, administration, documentation, follow-up, and the digital vaccine record. We reviewed publicly available documents about My Turn and patients vaccinated at the George R. Moscone Convention Center in San Francisco and the Oakland Coliseum Community Vaccination Clinic. For publicly available documents on My Turn, we examined videos of My Turn on YouTube and documentation from EZIZ, the website for the California Vaccines for Children Program. For patients, we examined publicly available vaccination record cards on Instagram and Google. At the George R. Moscone Convention Center, 329,608 vaccine doses were given; at the Oakland Coliseum Community Vaccination Clinic, more than 500,000 vaccine doses were administered. My Turn can be used to reduce manual tasks and paper when mass vaccinating patients against Covid-19.

RevDate: 2022-05-31

Rahmani MKI, Shuaib M, Alam S, et al (2022)

Blockchain-Based Trust Management Framework for Cloud Computing-Based Internet of Medical Things (IoMT): A Systematic Review.

Computational intelligence and neuroscience, 2022:9766844.

The internet of medical things (IoMT) is a smart medical device structure that includes apps, health services, and systems; these medical devices and applications are linked to healthcare systems via the internet. Because IoT devices lack computational power, the collected data can be processed and analyzed in the cloud by more computationally intensive tools. Cloud computing in IoMT is also used to store IoT data as part of a collaborative effort, and it has provided new avenues for delivering services with better user experience, scalability, and resource utilization compared with traditional platforms. However, cloud platforms are susceptible to several security breaches, as recent and past incidents make evident. Trust management is a crucial feature for providing secure and reliable service to users, but traditional trust management protocols in cloud computing are centralized and subject to single-point failure. Blockchain has emerged as a possible fit for domains that require trust and reliability, and different researchers have presented various blockchain-based trust management approaches. This study reviews the trust challenges in cloud computing and analyzes how blockchain technology addresses these challenges using blockchain-based trust management frameworks. The challenges considered are centralization, huge overhead, trust evidence, low adaptivity, and inaccuracy. The systematic review was performed in six stages: identifying the research question, research methods, screening the related articles, abstract and keyword examination, data retrieval, and mapping processing. Atlas.ti software was used to analyze the relevant articles based on keywords; a total of 70 codes and 262 quotations were compiled, and the quotations were categorized using manual coding. Finally, ten (10) solutions under the two broad categories of decentralization and security were retrieved: three (03) fell in the security category, and the remaining seven (07) in the decentralization category.

RevDate: 2022-05-31

Ni Q (2022)

Deep Neural Network Model Construction for Digital Human Resource Management with Human-Job Matching.

Computational intelligence and neuroscience, 2022:1418020.

This article applies deep neural network technology, combined with digital HRM knowledge, to the systematic study of human-job matching. Using intelligent digital means such as 5G communication, cloud computing, big data, neural networks, and user portraits, the article proposes a corresponding digital transformation strategy for HRM, and further puts forward supporting measures for enhancing HRM thinking and establishing an HRM culture to ensure the smooth implementation of the strategy. The system uses charts for data visualization and the Flask framework for the back end, storing data in CSV files, MySQL, and configuration files. It matches job applicants to jobs with a deep learning algorithm and intelligently recommends jobs to job seekers, providing practical help to applicants. The job recommendation algorithm partly adopts a bidirectional long short-term memory network (Bi-LSTM) and the word-level person-job matching neural network APJFNN, built with an attention mechanism. By embedding the text representation of job demand information into a shared representation space, a joint embedded convolutional neural network (JE-CNN) for job-matching analysis is designed and implemented, and quantitative analysis measures the degree of matching between candidate and job.
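
A hedged sketch of the Bi-LSTM building block used in APJFNN-style matching, in PyTorch; the dimensions are hypothetical, and mean pooling stands in for the paper's attention mechanism:

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Encode a token sequence into a single vector with a Bi-LSTM."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))   # (batch, seq, 2*hidden)
        return out.mean(dim=1)                      # pooled sequence vector

enc = BiLSTMEncoder()
resume = torch.randint(0, 10000, (1, 20))           # fake resume token ids
job_ad = torch.randint(0, 10000, (1, 30))           # fake job-ad token ids
score = torch.cosine_similarity(enc(resume), enc(job_ad))
print(score.item())                                 # crude match score
```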

RevDate: 2022-05-28

Umoren O, Singh R, Pervez Z, et al (2022)

Securing Fog Computing with a Decentralised User Authentication Approach Based on Blockchain.

Sensors (Basel, Switzerland), 22(10): pii:s22103956.

Low-cost sensors are widely used in IoT in place of high-cost devices, but they have limitations in the accuracy, quality, and reliability of the data they collect. Fog computing offers solutions to those limitations; nevertheless, owing to its intrinsically distributed architecture, it faces challenges in securing fog devices, secure authentication, and privacy. Blockchain technology has been utilised to offer solutions to the authentication and security challenges in fog systems. This paper proposes an authentication system that utilises the characteristics and advantages of blockchain and smart contracts to authenticate users securely. The implemented system uses the email address, username, Ethereum address, password, and data from a biometric reader to register and authenticate users. Experiments showed that the proposed method is secure and achieves performance improvements over existing methods. Comparison with the state of the art showed that the proposed authentication system consumed up to 30% fewer resources in transaction and execution cost, with an increase of up to 30% in miner fees.

RevDate: 2022-05-28

Wu TY, Meng Q, Kumari S, et al (2022)

Rotating behind Security: A Lightweight Authentication Protocol Based on IoT-Enabled Cloud Computing Environments.

Sensors (Basel, Switzerland), 22(10): pii:s22103858.

With the rapid development of Internet of Things (IoT) technology, numerous IoT devices are used on a daily basis. The rise of cloud computing plays a crucial role in relieving the resource constraints of IoT devices and in promoting resource sharing, whereby users can access IoT services in various environments. However, this complex and open wireless network environment poses security and privacy challenges, so designing a secure authentication protocol is crucial to protecting user privacy in IoT services. In this paper, a lightweight authentication protocol is designed for IoT-enabled cloud computing environments. A real-or-random model and the automatic verification tool ProVerif were used to conduct a formal security analysis, and security was further demonstrated through informal analysis. Finally, security and performance comparisons confirm that our protocol is relatively secure and performs well.

RevDate: 2022-05-28

Alnaim AK, Alwakeel AM, EB Fernandez (2022)

Towards a Security Reference Architecture for NFV.

Sensors (Basel, Switzerland), 22(10): pii:s22103750.

Network function virtualization (NFV) is an emerging technology that is becoming increasingly important due to its many advantages. NFV transforms legacy hardware-based network infrastructure into software-based virtualized networks. This transformation increases the flexibility and scalability of networks, at the same time reducing the time for the creation of new networks. However, the attack surface of the network increases, which requires the definition of a clear map of where attacks may happen. ETSI standards precisely define many security aspects of this architecture, but these publications are very long and provide many details which are not of interest to software architects. We start by conducting threat analysis of some of the NFV use cases. The use cases serve as scenarios where the threats to the architecture can be enumerated. Representing threats as misuse cases that describe the modus operandi of attackers, we can find countermeasures to them in the form of security patterns, and we can build a security reference architecture (SRA). Until now, only imprecise models of NFV architectures existed; by making them more detailed and precise it is possible to handle not only security but also safety and reliability, although we do not explore those aspects. Because security is a global property that requires a holistic approach, we strongly believe that architectural models are fundamental to produce secure networks and allow us to build networks which are secure by design. The resulting SRA defines a roadmap to implement secure concrete architectures.

RevDate: 2022-05-28

Makarichev V, Lukin V, Illiashenko O, et al (2022)

Digital Image Representation by Atomic Functions: The Compression and Protection of Data for Edge Computing in IoT Systems.

Sensors (Basel, Switzerland), 22(10): pii:s22103751.

Digital images are used in various technological, financial, economic, and social processes. Huge datasets of high-resolution images require protected storage and low resource-intensive processing, especially when applying edge computing (EC) in the design of Internet of Things (IoT) systems for industrial domains such as autonomous transport systems. For this reason, developing an image representation that provides compression and protection features, combined with the ability to perform low-complexity analysis, is relevant for EC-based systems. Security and privacy issues are also important for image processing in IoT and cloud architectures. To solve this problem, we propose applying the discrete atomic transform (DAT), which is based on a special class of atomic functions generalizing the well-known up-function of V.A. Rvachev. A lossless image compression algorithm based on DAT is developed, and its performance is studied for different DAT structures. This algorithm, which combines low computational complexity, efficient lossless compression, and reliable protection features with a convenient image representation, is the main contribution of the paper. It is shown that a sufficient reduction in memory expense can be obtained. Additionally, the dependence of compression efficiency, measured by compression ratio (CR), on the applied DAT structure is investigated; varying the DAT structure produces only minor variation in CR, and the possibility of applying this feature to data protection and security assurance is grounded and discussed. In addition, a file structure for storing the compressed and protected data is proposed, and its properties are considered. A multi-level structure for the application of atomic functions in image processing and protection for EC in IoT systems is suggested and analyzed.

RevDate: 2022-05-28

Hossain MD, Sultana T, Hossain MA, et al (2022)

Dynamic Task Offloading for Cloud-Assisted Vehicular Edge Computing Networks: A Non-Cooperative Game Theoretic Approach.

Sensors (Basel, Switzerland), 22(10): pii:s22103678.

Vehicular edge computing (VEC) is one of the prominent ideas for enhancing the computation and storage capabilities of vehicular networks (VNs) through task offloading. In VEC, resource-constrained vehicles offload their computing tasks to local road-side units (RSUs) for rapid computation. However, due to the high mobility of vehicles and overload problems, VEC faces many challenges in determining, in real time, where an offloaded task should be processed, which degrades vehicular performance. To deal with these challenges, an efficient dynamic task offloading approach based on a non-cooperative game (NGTO) is proposed in this study. In the NGTO approach, each vehicle chooses its own strategy for whether a task is offloaded to a multi-access edge computing (MEC) server or a cloud server so as to maximize its benefits. Our proposed strategy dynamically adjusts the task-offloading probability to acquire the maximum utility for each vehicle, and a best-response offloading strategy algorithm is used so that the task-offloading game reaches a unique and stable equilibrium. Numerous simulation experiments confirm that our proposed scheme fulfills the performance guarantees, reducing response time and task-failure rate by almost 47.6% and 54.6%, respectively, compared with the local RSU computing (LRC) scheme; by approximately 32.6% and 39.7% compared with a random offloading scheme; and by approximately 26.5% and 28.4% compared with a collaborative offloading scheme.
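
A toy illustration of best-response dynamics for the offloading probabilities the NGTO approach adjusts; the cost model and all numbers below are invented, not the paper's utility functions:

```python
import random

random.seed(1)

N = 5                        # vehicles
p = [0.5] * N                # probability of offloading to the local MEC server
grid = [x / 20 for x in range(21)]   # candidate strategies in [0, 1]

def cost(i, strategy):
    mec_delay = 1.0 + 0.5 * sum(strategy)   # MEC congestion grows with load
    cloud_delay = 3.0                        # remote cloud: fixed, higher delay
    return strategy[i] * mec_delay + (1 - strategy[i]) * cloud_delay

# Each vehicle repeatedly best-responds to the others' current strategies;
# the profile settles near a stable (Nash-like) equilibrium.
for _ in range(200):
    i = random.randrange(N)
    p[i] = min(grid, key=lambda c: cost(i, p[:i] + [c] + p[i + 1:]))

print([round(x, 2) for x in p])   # near-symmetric stable offloading probabilities
```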

RevDate: 2022-05-28

Sepulveda F, Thangraj JS, J Pulliam (2022)

The Edge of Exploration: An Edge Storage and Computing Framework for Ambient Noise Seismic Interferometry Using Internet of Things Based Sensor Networks.

Sensors (Basel, Switzerland), 22(10): pii:s22103615.

Recent technological advances have reduced the complexity and cost of developing sensor networks for remote environmental monitoring. However, the challenges of acquiring, transmitting, storing, and processing remote environmental data remain significant. The transmission of large volumes of sensor data to a centralized location (i.e., the cloud) burdens network resources, introduces latency and jitter, and can ultimately impact user experience. Edge computing has emerged as a paradigm in which substantial storage and computing resources are located at the "edge" of the network. In this paper, we present an edge storage and computing framework leveraging commercially available components organized in a tiered architecture and arranged in a hub-and-spoke topology. The framework includes a popular distributed database to support the acquisition, transmission, storage, and processing of Internet-of-Things-based sensor network data in a field setting. We present details regarding the architecture, distributed database, embedded systems, and topology used to implement an edge-based solution. Lastly, a real-world case study (i.e., seismic) is presented that leverages the edge storage and computing framework to acquire, transmit, store, and process millions of samples of data per hour.

RevDate: 2022-05-28

Silva P, Dahlke DV, Smith ML, et al (2022)

An Idealized Clinicogenomic Registry to Engage Underrepresented Populations Using Innovative Technology.

Journal of personalized medicine, 12(5): pii:jpm12050713.

Current best practices in tumor registries provide a glimpse into a limited time frame over the natural history of disease, usually a narrow window around diagnosis and biopsy. This creates challenges meeting public health and healthcare reimbursement policies that increasingly require robust documentation of long-term clinical trajectories, quality of life, and health economics outcomes. These challenges are amplified for underrepresented minority (URM) and other disadvantaged populations, who tend to view the institution of clinical research with skepticism. Participation gaps leave such populations underrepresented in clinical research and, importantly, in policy decisions about treatment choices and reimbursement, thus further augmenting health, social, and economic disparities. Cloud computing, mobile computing, digital ledgers, tokenization, and artificial intelligence technologies are powerful tools that promise to enhance longitudinal patient engagement across the natural history of disease. These tools also promise to enhance engagement by giving participants agency over their data and addressing a major impediment to research participation. This will only occur if these tools are available for use with all patients. Distributed ledger technologies (specifically blockchain) converge these tools and offer a significant element of trust that can be used to engage URM populations more substantively in clinical research. This is a crucial step toward linking composite cohorts for training and optimization of the artificial intelligence tools for enhancing public health in the future. The parameters of an idealized clinical genomic registry are presented.

RevDate: 2022-05-28

Li J, Gong J, Guldmann JM, et al (2022)

Simulation of Land-Use Spatiotemporal Changes under Ecological Quality Constraints: The Case of the Wuhan Urban Agglomeration Area, China, over 2020-2030.

International journal of environmental research and public health, 19(10): pii:ijerph19106095.

Human activities coupled with land-use change pose a threat to the regional ecological environment. Therefore, it is essential to determine the future land-use structure and spatial layout for ecological protection and sustainable development. Land use simulations based on traditional scenarios do not fully consider ecological protection, leading to urban sprawl. Timely and dynamic monitoring of ecological status and change is vital to managing and protecting urban ecology and sustainable development. Remote sensing indices, including greenness, humidity, dryness, and heat, are calculated annually. This method compensates for data loss and difficulty in stitching remote sensing ecological indices over large-scale areas and long time-series. Herein, a framework is developed by integrating the four above-mentioned indices for a rapid, large-scale prediction of land use/cover that incorporates the protection of high ecological quality zone (HEQZ) land. The Google Earth Engine (GEE) platform is used to build a comprehensive HEQZ map of the Wuhan Urban Agglomeration Area (WUAA). Two scenarios are considered: Ecological protection (EP) based on HEQZ and natural growth (NG) without spatial ecological constraints. Land use/cover in the WUAA is predicted over 2020-2030, using the patch-generating land use simulation (PLUS) model. The results show that: (1) the HEQZ area covers 21,456 km2, accounting for 24% of the WUAA, and is mainly distributed in the Xianning, Huangshi, and Xiantao regions. Construction land has the highest growth rate (5.2%) under the NG scenario. The cropland area decreases by 3.2%, followed by woodlands (0.62%). (2) By delineating the HEQZ, woodlands, rivers, lakes, and wetlands are well protected; construction land displays a downward trend based on the EP scenario with the HEQZ, and the simulated construction land in 2030 is located outside the HEQZ. (3) Image processing based on GEE cloud computing can ameliorate the difficulties of remote sensing data (i.e., missing data, cloudiness, chromatic aberration, and time inconsistency). The results of this study can provide essential scientific guidance for territorial spatial planning under the premise of ecological security.

RevDate: 2022-05-27

Gutz SE, Stipancic KL, Yunusova Y, et al (2022)

Validity of Off-the-Shelf Automatic Speech Recognition for Assessing Speech Intelligibility and Speech Severity in Speakers With Amyotrophic Lateral Sclerosis.

Journal of speech, language, and hearing research : JSLHR [Epub ahead of print].

PURPOSE: There is increasing interest in using automatic speech recognition (ASR) systems to evaluate impairment severity or speech intelligibility in speakers with dysarthria. We assessed the clinical validity of one currently available off-the-shelf (OTS) ASR system (i.e., a Google Cloud ASR API) for indexing sentence-level speech intelligibility and impairment severity in individuals with amyotrophic lateral sclerosis (ALS), and we provided guidance for potential users of such systems in research and clinic.

METHOD: Using speech samples collected from 52 individuals with ALS and 20 healthy control speakers, we compared word recognition rate (WRR) from the commercially available Google Cloud ASR API (Machine WRR) to clinician-provided judgments of impairment severity, as well as sentence intelligibility (Human WRR). We assessed the internal reliability of Machine and Human WRR by comparing the standard deviation of WRR across sentences to the minimally detectable change (MDC), a clinical benchmark that indicates whether results are within measurement error. We also evaluated Machine and Human WRR diagnostic accuracy for classifying speakers into clinically established categories.

RESULTS: Human WRR achieved better accuracy than Machine WRR when indexing speech severity, and, although related, Human and Machine WRR were not strongly correlated. When the speech signal was mixed with noise (noise-augmented ASR) to reduce a ceiling effect, Machine WRR performance improved. Internal reliability metrics were worse for Machine than Human WRR, particularly for typical and mildly impaired severity groups, although sentence length significantly impacted both Machine and Human WRRs.

CONCLUSIONS: Results indicated that the OTS ASR system was inadequate for early detection of speech impairment and grading overall speech severity. While Machine and Human WRR were correlated, ASR should not be used as a one-to-one proxy for transcription speech intelligibility or clinician severity ratings. Overall, findings suggested that the tested OTS ASR system, Google Cloud ASR, has limited utility for grading clinical speech impairment in speakers with ALS.
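For readers unfamiliar with the pipeline, the two ingredients (an off-the-shelf cloud transcript and a WRR score) can be sketched as follows. This is a minimal illustration assuming a local 16 kHz WAV file and a simple bag-of-words WRR; the study's exact scoring protocol may differ:

```python
# Hedged sketch: transcribe audio with the Google Cloud Speech-to-Text API
# and score a word recognition rate (WRR) against a reference transcript.
# The file path, audio settings, and WRR definition are illustrative.
from google.cloud import speech

def transcribe(path: str) -> str:
    client = speech.SpeechClient()
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def wrr(reference: str, hypothesis: str) -> float:
    # Bag-of-words overlap; simpler than an alignment-based WRR.
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    hits = sum(min(ref.count(w), hyp.count(w)) for w in set(ref))
    return hits / len(ref) if ref else 0.0

# Usage (assumes credentials and a sample file):
# print(wrr("the patient walked today", transcribe("sample.wav")))
```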

RevDate: 2022-05-27

Christley S, Stervbo U, Cowell LG, et al (2022)

Immune Repertoire Analysis on High-Performance Computing Using VDJServer V1: A Method by the AIRR Community.

Methods in molecular biology (Clifton, N.J.), 2453:439-446.

AIRR-seq data sets are usually large and require specialized analysis methods and software tools. A typical Illumina MiSeq sequencing run generates 20-30 million 2 × 300 bp paired-end sequence reads, which roughly corresponds to 15 GB of sequence data to be processed. Other platforms like NextSeq, which is useful in projects where the full V gene is not needed, create about 400 million 2 × 150 bp paired-end reads. Because of the size of the data sets, the analysis can be computationally expensive, particularly the early analysis steps like preprocessing and gene annotation that process the majority of the sequence data. A standard desktop PC may take 3-5 days of constant processing for a single MiSeq run, so dedicated high-performance computational resources may be required. VDJServer provides free access to high-performance computing (HPC) at the Texas Advanced Computing Center (TACC) through a graphical user interface (Christley et al. Front Immunol 9:976, 2018). VDJServer is a cloud-based analysis portal for immune repertoire sequence data that provides access to a suite of tools for a complete analysis workflow, including modules for preprocessing and quality control of sequence reads, V(D)J gene assignment, repertoire characterization, and repertoire comparison. Furthermore, VDJServer has parallelized execution for tools such as IgBLAST, so more compute resources are utilized as the size of the input data grows. Analysis that takes days on a desktop PC might take only a few hours on VDJServer. VDJServer is a free, publicly available, and open-source licensed resource. Here, we describe the workflow for performing immune repertoire analysis on VDJServer's high-performance computing.

RevDate: 2022-05-26

Choi IK, Abeysinghe E, Coulter E, et al (2020)

TopPIC Gateway: A Web Gateway for Top-Down Mass Spectrometry Data Interpretation.

PEARC20 : Practice and Experience in Advanced Research Computing 2020 : Catch the wave : July 27-31, 2020, Portland, Or Virtual Conference. Practice and Experience in Advanced Research Computing (Conference) (2020 : Online), 2020:461-464.

Top-down mass spectrometry-based proteomics has become the method of choice for identifying and quantifying intact proteoforms in biological samples. We present a web-based gateway for the TopPIC suite, a widely used software suite consisting of four software tools for top-down mass spectrometry data interpretation: TopFD, TopPIC, TopMG, and TopDiff. The gateway enables the community to use a heterogeneous collection of computing resources, including high-performance computing clusters at Indiana University and virtual clusters on XSEDE's Jetstream Cloud resource, for top-down mass spectral data analysis using the TopPIC suite. The gateway will be a useful resource for proteomics researchers and students who have limited access to high-performance computing resources or who are not familiar with interacting with server-side supercomputers.

RevDate: 2022-05-24

Yang Q (2022)

Analysis of English Cultural Teaching Model Based on Machine Learning.

Computational intelligence and neuroscience, 2022:7126758.

According to world population figures, nearly five billion people use mobile phones in their daily lives, a figure that has increased by 20% in the last twelve months compared to the previous report. One survey of the amount of data consumed monthly by mobile phones reported 45 exabytes of data collected from a single user within a month. In today's world, data consumption and data analytics are considered among the most important necessities for e-commerce companies. With such data collected from a person, it is possible to predict that person's future signature or activity. If 45 terabytes of data can be stored for a single user, the corresponding calculation for five billion users is far more difficult, and such volumes are hard for traditional computer systems, let alone manual effort, to handle. Studying and understanding concepts from machine learning and artificial intelligence likewise requires a sizable collection of data to make predictions from a person's activity. This article explains the roles of faculty and students, as well as the requirements for academic evaluation. Even before the pandemic, most people had no idea about the online teaching model; only after direct (offline) classes became impossible were people forced into the online world of teaching. Nearly 60% of countries are trying to convert their education systems to such online models, which improve communication between students and teachers and also enable different schemes for students. Big data can be considered one of the technological revolutions in information technology that became popular in the wake of cloud computing. A support vector machine (SVM) is proposed for analyzing English culture teaching and is compared with traditional fuzzy logic. The results show the proposed model achieves an accuracy of 98%, which is 5% higher than the existing algorithm.
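For orientation, the SVM component of such a comparison reduces to a few lines with scikit-learn. This is a minimal sketch on synthetic data, since the paper's teaching-evaluation corpus is not public:

```python
# Hedged sketch: a generic SVM classifier of the kind compared against
# fuzzy logic above; synthetic data stands in for the study's corpus.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)     # RBF-kernel SVM
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```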

RevDate: 2022-05-24

Li X (2022)

5G Converged Network Resource Allocation Strategy Based on Reinforcement Learning in Edge Cloud Computing Environment.

Computational intelligence and neuroscience, 2022:6174708.

To address the problem that the computing power and resources of Mobile Edge Computing (MEC) servers have difficulty processing long-period, intensive task data, this study proposes a 5G converged network resource allocation strategy based on reinforcement learning in an edge cloud computing environment. In order to solve the problem of insufficient local computing power, the proposed strategy offloads some tasks to the edge of the network. First, we build a multi-MEC-server, multi-user mobile edge system and design optimization objectives to minimize the average response time of system tasks and the total energy consumption. Then, the task offloading and resource allocation process is modeled as a Markov decision process. Furthermore, a deep Q-network is used to find the optimal resource allocation scheme. Finally, the proposed strategy is analyzed experimentally on the TensorFlow learning framework. Experimental results show that when the number of users is 110, the final energy consumption is about 2500 J, which effectively reduces task delay and improves the utilization of resources.
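The paper trains a deep Q-network; the underlying update rule is easier to see in a tabular stand-in. This is a minimal sketch on a toy offloading MDP, where states, rewards, and transition dynamics are illustrative assumptions:

```python
# Hedged sketch: tabular Q-learning on a toy offloading MDP. The paper
# uses a deep Q-network; this stand-in shows the same Bellman update.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2          # queue-length buckets; 0=local, 1=offload
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def step(state, action):
    # Toy dynamics: offloading drains the queue but costs transmission energy.
    nxt = max(0, state - 2) if action == 1 else min(n_states - 1, state + 1)
    reward = -(0.5 * nxt + (0.3 if action == 1 else 0.0))  # delay + energy
    return nxt, reward

s = 5
for _ in range(10_000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("greedy action per queue state:", Q.argmax(axis=1))
```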

RevDate: 2022-05-24

Li J (2022)

Study on Integration and Application of Artificial Intelligence and Wireless Network in Piano Music Teaching.

Computational intelligence and neuroscience, 2022:8745833.

Until 2019, most people had never faced a situation that would change their lives. Most universities now conduct classes for their students with the help of virtual classrooms, indicating massive technological growth, and this development took little time to reach students and teachers. Within five to six months of successful projects, most application producers had launched official sites to conduct online classes and tests for students. The introduction of virtual classes is not the only example of technological advancement; cloud computing, artificial intelligence, and deep learning have collaborated to produce appropriate, refined, and less error-prone results in all such fields of teaching. These technological advancements have enabled design models built on wireless networks, particularly for music-related courses. The Quality-Learning (Q-Learning) Algorithm (QLA) is the pillar of this research for improving the implementation of artificial intelligence in music teaching. The proposed algorithm aids in improving the accuracy of the music, its frequency, and its wavelength. The proposed QLA is compared with the existing K-Nearest Neighbour (KNN) algorithm, and the results show that QLA achieves 99.23% accuracy in intelligent piano music teaching through wireless network mode.

RevDate: 2022-05-23

Lewsey MG, Yi C, Berkowitz O, et al (2022)

scCloudMine: A cloud-based app for visualization, comparison, and exploration of single-cell transcriptomic data.

Plant communications pii:S2590-3462(22)00049-9 [Epub ahead of print].

scCloudMine is a cloud-based application for visualization, comparison, and exploration of single-cell transcriptome data. It does not require an on-site, high-power computing server, installation, or associated expertise and expense. Users upload their own or publicly available scRNA-seq datasets after pre-processing for visualization using a web browser. The data can be viewed in two color modes-Cluster, representing cell identity, and Values, showing levels of expression-and data can be queried using keywords or gene identification number(s). Using the app to compare studies, we determined that some genes frequently used as cell-type markers are in fact study specific. The apparent cell-specific expression of PHO1;H3 differed between GFP-tagging and scRNA-seq studies. Some phosphate transporter genes were induced by protoplasting, but they retained cell specificity, suggesting that cell-specific responses to stress (i.e., protoplasting) can occur. Examination of the cell specificity of hormone response genes revealed that 132 hormone-responsive genes display restricted expression and that the jasmonate response gene TIFY8 is expressed in endodermal cells, in contrast to previous reports. It also appears that JAZ repressors have cell-type-specific functions. These features identified using scCloudMine highlight the need for resources to enable biological researchers to compare their datasets of interest under a variety of parameters. scCloudMine enables researchers to form new hypotheses and perform comparative studies and allows for the easy re-use of data from this emerging technology by a wide variety of users who may not have access or funding for high-performance on-site computing and support.

RevDate: 2022-05-23

Ye Q, Wang M, Meng H, et al (2022)

Efficient Linkable Ring Signature Scheme over NTRU Lattice with Unconditional Anonymity.

Computational intelligence and neuroscience, 2022:8431874.

In cloud and edge computing, senders of data often want to be anonymous, while recipients of data always expect that the data come from a reliable sender and they are not redundant. Linkable ring signature (LRS) can not only protect the anonymity of the signer, but also detect whether two different signatures are signed by the same signer. Today, most lattice-based LRS schemes only satisfy computational anonymity. To the best of our knowledge, only the lattice-based LRS scheme proposed by Torres et al. can achieve unconditional anonymity. But the efficiency of signature generation and verification of the scheme is very low, and the signature length is also relatively long. With the preimage sampling, trapdoor generation, and rejection sampling algorithms, this study proposed an efficient LRS scheme with unconditional anonymity based on the e-NTRU problem under the random oracle model. We implemented our scheme and Torres et al.'s scheme, as well as other four efficient lattice-based LRS schemes. It is shown that under the same security level, compared with Torres et al.'s scheme, the signature generation time, signature verification time, and signature size of our scheme are reduced by about 94.52%, 97.18%, and 58.03%, respectively.

RevDate: 2022-05-23

Mansour RF, Alhumyani H, Khalek SA, et al (2022)

Design of cultural emperor penguin optimizer for energy-efficient resource scheduling in green cloud computing environment.

Cluster computing pii:3608 [Epub ahead of print].

In recent times, energy-related issues have become challenging with the increasing size of data centers. Green cloud computing (GCC) is a recent computing platform that aims to manage energy utilization in cloud data centers. Load balancing is generally employed to optimize resource usage, throughput, and delay. Aiming at reducing energy utilization in GCC data centers, this paper designs an energy-efficient resource scheduling method using the cultural emperor penguin optimizer (CEPO) algorithm, called EERS-CEPO, in a GCC environment. The proposed model aims to distribute the workload among several data centers or other resources, thereby avoiding the overload of individual resources. The CEPO algorithm is designed on the fusion of the cultural algorithm (CA) and the emperor penguin optimizer (EPO), boosting the exploitation capabilities of the EPO algorithm via the CA; this fusion constitutes the novelty of the work. The EERS-CEPO algorithm derives a fitness function to optimally schedule the resources in data centers, minimize the operational and maintenance cost of the GCC, and thereby decrease energy utilization and heat generation. To verify the improved performance of the EERS-CEPO algorithm, a wide range of experiments was performed, and the experimental outcomes highlight better performance than recent state-of-the-art techniques.

RevDate: 2022-05-20

Doyen S, NB Dadario (2022)

12 Plagues of AI in Healthcare: A Practical Guide to Current Issues With Using Machine Learning in a Medical Context.

Frontiers in digital health, 4:765406.

The healthcare field has long been promised a number of exciting and powerful applications of Artificial Intelligence (AI) to improve the quality and delivery of health care services. AI techniques, such as machine learning (ML), have proven able to model enormous amounts of complex data and biological phenomena in ways only imaginable with human abilities alone. As such, medical professionals, data scientists, and Big Tech companies alike have all invested substantial time, effort, and funding into these technologies with hopes that AI systems will provide rigorous and systematic interpretations of large amounts of data that can be leveraged to augment clinical judgments in real time. However, despite not being newly introduced, AI-based medical devices have more often than not fallen short of the clinical impact originally promised, or of which they are likely capable, as seen during the current COVID-19 pandemic. There are several common pitfalls for these technologies that, if not prospectively managed or adjusted in real time, will continue to hinder their performance in high-stakes environments outside of the labs in which they were created. To address these concerns, we outline and discuss many of the problems that future developers will likely face and that contribute to these failures. Specifically, we examine the field under four lenses: approach, data, method, and operation. If we continue to prospectively address and manage these concerns with reliable solutions and appropriate system processes in place, then we as a field may further optimize the clinical applicability and adoption of medical AI technology moving forward.

RevDate: 2022-05-20

Jiang Y, Wu S, Mo Q, et al (2022)

A Cloud-Computing-Based Portable Networked Ground Station System for Microsatellites.

Sensors (Basel, Switzerland), 22(9): pii:s22093569.

Microsatellites have attracted a large number of scholars and engineers because of their portability and distribution characteristics. The ground station suitable for microsatellite service has become an important research topic. In this paper, we propose a networked ground station and verify it on our own microsatellite. The specific networked ground station system consists of multiple ground nodes. They can work together to complete data transmission tasks with higher efficiency. After describing our microsatellite project, a reasonable distribution of ground nodes is given. A cloud computing model is used to realize the coordination of multiple ground nodes. An adaptive communication system between satellites and ground stations is used to increase link efficiency. Extensive on-orbit experiments were used to validate our design. The experimental results show that our networked ground station has excellent performance in data transmission capability. Finally, the specific cloud-computing-based ground station network successfully completes our satellite mission.

RevDate: 2022-05-20

Zhang J, Li M, Zheng X, et al (2022)

A Time-Driven Cloudlet Placement Strategy for Workflow Applications in Wireless Metropolitan Area Networks.

Sensors (Basel, Switzerland), 22(9): pii:s22093422.

With the rapid development of mobile technology, mobile applications have increasing requirements for computational resources, and mobile devices can no longer meet these requirements. Mobile edge computing (MEC) has emerged in this context and has brought innovation into the working mode of traditional cloud computing. By provisioning edge server placement, the computing power of the cloud center is distributed to the edge of the network. The abundant computational resources of edge servers compensate for the lack of mobile devices and shorten the communication delay between servers and users. Constituting a specific form of edge servers, cloudlets have been widely studied within academia and industry in recent years. However, existing studies have mainly focused on computation offloading for general computing tasks under fixed cloudlet placement positions. They ignored the impact on computation offloading results from cloudlet placement positions and data dependencies among mobile application components. In this paper, we study the cloudlet placement problem based on workflow applications (WAs) in wireless metropolitan area networks (WMANs). We devise a cloudlet placement strategy based on a particle swarm optimization algorithm using genetic algorithm operators with the encoding library updating mode (PGEL), which enables the cloudlet to be placed in appropriate positions. The simulation results show that the proposed strategy can obtain a near-optimal cloudlet placement scheme. Compared with other classic algorithms, this algorithm can reduce the execution time of WAs by 15.04-44.99%.

RevDate: 2022-05-20

Barbeau M, Garcia-Alfaro J, E Kranakis (2022)

Research Trends in Collaborative Drones.

Sensors (Basel, Switzerland), 22(9): pii:s22093321.

The last decade has seen an explosion of interest in drones-introducing new networking technologies, such as 5G wireless connectivity and cloud computing. The resulting advancements in communication capabilities are already expanding the ubiquitous role of drones as primary solution enablers, from search and rescue missions to information gathering and parcel delivery. Their numerous applications encompass all aspects of everyday life. Our focus is on networked and collaborative drones. The available research literature on this topic is vast. No single survey article could do justice to all critical issues. Our goal in this article is not to cover everything and include everybody but rather to offer a personal perspective on a few selected research topics that might lead to fruitful future investigations that could play an essential role in developing drone technologies. The topics we address include distributed computing with drones for the management of anonymity, countering threats posed by drones, target recognition, navigation under uncertainty, risk avoidance, and cellular technologies. Our approach is selective. Every topic includes an explanation of the problem, a discussion of a potential research methodology, and ideas for future research.

RevDate: 2022-05-19

Li T, Zhao H, Tao Y, et al (2022)

Power Intelligent Terminal Intrusion Detection Based on Deep Learning and Cloud Computing.

Computational intelligence and neuroscience, 2022:1415713.

Numerous internal and external intrusion attacks have appeared one after another, becoming a major problem affecting the normal operation of the power system. The power system is the infrastructure of the national economy, so the information security of its network is not only an aspect of computer information security but must also satisfy high-standard security requirements. This paper analyzes the intrusion threats faced by the power information network and conducts in-depth research and investigation combined with intrusion detection technology for the power information network. It analyzes the structure of the power information network and cloud computing through deep-learning-based methods and provides a network intrusion detection model. The model combines misuse detection and anomaly detection, solving the problem that misuse-detection models fail to detect new attack variants. At the same time, for big-data network retrieval, it retrieves and analyzes data flows quickly and accurately with the help of deep learning of data components. It uses a fuzzy integral method to optimize the accuracy of power information network intrusion prediction; the accuracy reaches 98.11%, an increase of 0.6%.

RevDate: 2022-05-19

Aloraini T, Aljouie A, Alniwaider R, et al (2022)

The variant artificial intelligence easy scoring (VARIES) system.

Computers in biology and medicine, 145:105492.

PURPOSE: Medical artificial intelligence (MAI) is artificial intelligence (AI) applied to the healthcare field. AI can be applied to many different aspects of genetics, such as variant classification. With little or no prior experience in AI coding, we share our experience with variant classification using the Variant Artificial Intelligence Easy Scoring (VARIES), an open-access platform, and the Automatic Machine Learning (AutoML) of the Google Cloud Platform.

METHODS: We investigated exome sequencing data from a sample of 1410 individuals. The majority (80%) were used for training and 20% for testing. The user-friendly Google Cloud Platform was used to create the VARIES model, and the TRIPOD checklist to develop and validate the prediction model for the development of the VARIES system.

RESULTS: The learning rate of the training dataset reached optimal results at an early stage of iteration, with a loss value near zero in approximately 4 min. For the testing dataset, the results for F1 (micro average) was 0.64, F1 (macro average) 0.34, micro-average area under the curve AUC (one-over-rest) 0.81 and the macro-average AUC (one-over-rest) 0.73. The overall performance characteristics of the VARIES model suggest the classifier has a high predictive ability.

CONCLUSION: We present a systematic guideline to create a genomic AI prediction tool with high predictive power, using a graphical user interface provided by Google Cloud Platform, with no prior experience in creating the software programs required.
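The metrics reported above can be reproduced for any multiclass classifier with scikit-learn. This is a minimal sketch with placeholder labels and class probabilities (note that scikit-learn's multiclass roc_auc_score supports macro, not micro, averaging for one-vs-rest):

```python
# Hedged sketch: computing micro/macro F1 and one-vs-rest AUC as in the
# RESULTS section; the toy arrays below are placeholders, not VARIES data.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

y_true = np.array([0, 1, 2, 1, 0, 2])     # true variant classes
y_pred = np.array([0, 1, 1, 1, 0, 2])     # predicted classes
y_score = np.array([[.8, .1, .1], [.1, .7, .2], [.2, .5, .3],
                    [.1, .8, .1], [.6, .2, .2], [.1, .2, .7]])  # probabilities

print("F1 (micro):", f1_score(y_true, y_pred, average="micro"))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
print("AUC (macro, one-vs-rest):",
      roc_auc_score(y_true, y_score, multi_class="ovr", average="macro"))
```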

RevDate: 2022-05-18

Yassine A, MS Hossain (2022)

COVID-19 Networking Demand: An Auction-Based Mechanism for Automated Selection of Edge Computing Services.

IEEE transactions on network science and engineering, 9(1):308-318.

Network and cloud service providers are facing an unprecedented challenge to meet the demand of end-users during the COVID-19 pandemic. Currently, billions of people around the world are ordered to stay at home and use remote connection technologies to prevent the spread of the disease. The COVID-19 crisis brought a new reality to network service providers that will eventually accelerate the deployment of edge computing resources to attract the massive influx of users' traffic. The user can elect to procure its resource needs from any edge computing provider based on a variety of attributes such as price and quality. The main challenge for the user is how to choose between the price and multiple quality of service deals when such offerings are changing continually. This problem falls under multi-attribute decision-making. This paper investigates and proposes a novel auction mechanism by which network service brokers would be able to automate the selection of edge computing offers to support their end-users. We also propose a multi-attribute decision-making model that allows the broker to maximize its utility when several bids from edge-network providers are present. The evaluation and experimentation show the practicality and robustness of the proposed model.
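As a deliberately simplified illustration of multi-attribute selection, a broker could rank competing edge offers with a weighted score. The attributes, weights, and normalization below are hypothetical and far simpler than the paper's auction mechanism:

```python
# Hedged sketch: weighted multi-attribute scoring of edge-provider bids.
# Providers, attributes, and weights are illustrative placeholders.
bids = [
    {"provider": "edge-A", "price": 0.12, "latency_ms": 20, "availability": 0.999},
    {"provider": "edge-B", "price": 0.08, "latency_ms": 45, "availability": 0.995},
    {"provider": "edge-C", "price": 0.10, "latency_ms": 30, "availability": 0.990},
]
weights = {"price": 0.4, "latency": 0.4, "availability": 0.2}

def score(bid):
    # Normalize so lower price/latency and higher availability score better.
    return (weights["price"] * (1 - bid["price"] / 0.2)
            + weights["latency"] * (1 - bid["latency_ms"] / 100)
            + weights["availability"] * bid["availability"])

best = max(bids, key=score)
print("selected:", best["provider"], "score:", round(score(best), 3))
```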

RevDate: 2022-05-17

Wallace G, Polcyn S, Brooks PP, et al (2022)

RT-Cloud: A Cloud-based Software Framework to Simplify and Standardize Real-Time fMRI.

NeuroImage pii:S1053-8119(22)00414-1 [Epub ahead of print].

Real-time fMRI (RT-fMRI) neurofeedback has been shown to be effective in treating neuropsychiatric disorders and holds tremendous promise for future breakthroughs, both with regard to basic science and clinical applications. However, the prevalence of its use has been hampered by computing hardware requirements, the complexity of setting up and running an experiment, and a lack of standards that would foster collaboration. To address these issues, we have developed RT-Cloud (https://github.com/brainiak/rt-cloud), a flexible, cloud-based, open-source Python software package for the execution of RT-fMRI experiments. RT-Cloud uses standardized data formats and adaptable processing streams to support and expand open science in RT-fMRI research and applications. Cloud computing is a key enabling technology for advancing RT-fMRI because it eliminates the need for on-premise technical expertise and high-performance computing; this allows installation, configuration, and maintenance to be automated and done remotely. Furthermore, the scalability of cloud computing makes it easier to deploy computationally-demanding multivariate analyses in real time. In this paper, we describe how RT-Cloud has been integrated with open standards, including the Brain Imaging Data Structure (BIDS) standard and the OpenNeuro database, how it has been applied thus far, and our plans for further development and deployment of RT-Cloud in the coming years.

RevDate: 2022-05-17

Ahmad S, Mehfuz S, Mebarek-Oudina F, et al (2022)

RSM analysis based cloud access security broker: a systematic literature review.

Cluster computing pii:3598 [Epub ahead of print].

A Cloud Access Security Broker (CASB) is a security enforcement point or cloud-based software placed between cloud service users and the cloud applications of cloud computing (CC), used to manage the dimensionality, heterogeneity, and ambiguity associated with cloud services. CASBs permit organizations to extend the reach of their security policies beyond their own infrastructure to third-party software and storage. In contrast to other systematic literature reviews (SLRs), this one is directed at the client setting. An SLR was performed to compile CASB-related studies and analyze how CASBs are designed and built, resulting in a comprehensive grasp of the state of the art and a novel taxonomy to describe CASBs. These studies are then analyzed from different perspectives, such as motivation, functionality, engineering approach, and decision method. The review notes disparities between the studies and their implementations, with engineering efforts directed at combinations of market-based solutions, simulation tools, middlewares, toolkits, algorithms, semantic frameworks, and conceptual frameworks. Search strings built from keywords extracted from the Research Questions (RQs) were used to identify the essential considerations from journal papers, conference papers, workshops, and symposiums. This SLR identified 20 particular studies published from 2011 to 2021; the chosen studies were evaluated according to the defined RQs for their quality and scope with respect to CASB, thereby identifying several gaps in the literature. For further understanding, the independent parameters influencing CASB were studied using Principal Component Analysis (PCA), which identified five influential parameters. The experimental results were used as input for Response Surface Methodology (RSM) to obtain an empirical model; five-level coding was employed to develop the model, considering three dependent parameters and four center values. It was observed from the Central Composite Design (CCD) model that the actual values show significant influence, with R2 = 0.90. This wide-ranging investigation reveals that CASB is still in a formative state. Although vital progress has been made in this area, obvious challenges remain to be addressed, as highlighted in this paper.

RevDate: 2022-05-16

Wimberly MC, Nekorchuk DM, RR Kankanala (2022)

Cloud-based applications for accessing satellite Earth observations to support malaria early warning.

Scientific data, 9(1):208.

Malaria epidemics can be triggered by fluctuations in temperature and precipitation that influence vector mosquitoes and the malaria parasite. Identifying and monitoring environmental risk factors can thus provide early warning of future outbreaks. Satellite Earth observations provide relevant measurements, but obtaining these data requires substantial expertise, computational resources, and internet bandwidth. To support malaria forecasting in Ethiopia, we developed software for Retrieving Environmental Analytics for Climate and Health (REACH). REACH is a cloud-based application for accessing data on land surface temperature, spectral indices, and precipitation using the Google Earth Engine (GEE) platform. REACH can be implemented using the GEE code editor and JavaScript API, as a standalone web app, or as a package with the Python API. Users provide a date range, and data for 852 districts in Ethiopia are automatically summarized and downloaded as tables. REACH was successfully used in Ethiopia to support a pilot malaria early warning project in the Amhara region. The software can be extended to new locations and modified to access other environmental datasets through GEE.
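In the same spirit, a regional environmental summary can be pulled from GEE with a few Python calls. This is a minimal sketch assuming the public CHIRPS daily-precipitation catalog entry; the geometry and date range are placeholders rather than REACH's 852-district tables:

```python
# Hedged sketch: summarizing CHIRPS precipitation over one region with the
# Google Earth Engine Python API; region and dates are placeholders.
import ee

ee.Initialize()  # assumes prior `earthengine authenticate`

region = ee.Geometry.Rectangle([36.5, 10.5, 38.5, 12.5])   # placeholder box
rainfall = (ee.ImageCollection("UCSB-CHG/CHIRPS/DAILY")
            .filterDate("2021-06-01", "2021-07-01")
            .select("precipitation")
            .sum())                                        # monthly total

stats = rainfall.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=5000)
print("mean June rainfall (mm):", stats.getInfo())
```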

RevDate: 2022-05-14

Tang S, Chen R, Lin M, et al (2022)

Accelerating AutoDock Vina with GPUs.

Molecules (Basel, Switzerland), 27(9): pii:molecules27093041.

AutoDock Vina is one of the most popular molecular docking tools. In the latest benchmark, CASF-2016, for comparative assessment of scoring functions, AutoDock Vina won the best docking power among all the docking tools. Modern drug discovery commonly requires large virtual screens of drug hits from huge compound databases. Due to the serial character of the AutoDock Vina algorithm, there has been no successful report of its parallel acceleration with GPUs. Current acceleration of AutoDock Vina typically relies on stacking computing power and allocating resources and tasks, as in the VirtualFlow platform. The vast resource expenditure and high barrier to entry greatly limit the popularity of AutoDock Vina and the flexibility of its usage in modern drug discovery. In this work, we propose a new method, Vina-GPU, for accelerating AutoDock Vina with GPUs, which is greatly needed for reducing the investment for large virtual screens and for wider application of large-scale virtual screening on personal computers, station servers, or cloud computing platforms. Our proposed method is based on a modified Monte Carlo scheme using a simulated annealing algorithm. It greatly raises the number of initial random conformations and reduces the search depth of each thread. Moreover, a classic optimizer named BFGS is adopted to optimize the ligand conformations during the docking progress, and a heterogeneous OpenCL implementation was developed to realize parallel acceleration leveraging thousands of GPU cores. Large benchmark tests show that Vina-GPU reaches an average of 21-fold and a maximum of 50-fold docking acceleration over the original AutoDock Vina while ensuring comparable docking accuracy, indicating its potential for pushing the popularization of AutoDock Vina in large virtual screens.
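The Monte Carlo simulated-annealing core that such docking engines build on is compact. This is a minimal sketch on a toy one-dimensional "scoring function"; real docking scores pose conformations, not scalars:

```python
# Hedged sketch: Metropolis-style simulated annealing on a toy energy
# landscape, illustrating the search loop docking engines parallelize.
import math, random

random.seed(0)

def score(x):                          # toy landscape (illustrative only)
    return (x - 2) ** 2 + math.sin(5 * x)

x, temp = 10.0, 5.0
best_x, best_e = x, score(x)
for _ in range(5000):
    candidate = x + random.gauss(0, 0.5)        # random perturbation
    delta = score(candidate) - score(x)
    # Metropolis criterion: accept improvements; sometimes accept uphill
    # moves with probability exp(-delta / temp) to escape local minima.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
        if score(x) < best_e:
            best_x, best_e = x, score(x)
    temp = max(1e-3, temp * 0.999)              # geometric cooling schedule

print(f"best x = {best_x:.3f}, score = {best_e:.3f}")
```

Vina-GPU's insight, per the abstract, is to run many such chains shallowly and in parallel across GPU cores rather than a few chains deeply.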

RevDate: 2022-05-13

Porter SJ, DW Hook (2022)

Connecting Scientometrics: Dimensions as a Route to Broadening Context for Analyses.

Frontiers in research metrics and analytics, 7:835139.

Modern cloud-based data infrastructures open new vistas for the deployment of scientometric data into the hands of practitioners. These infrastructures lower barriers to entry by making data more available and compute capacity more affordable. In addition, if data are prepared appropriately, with unique identifiers, it is possible to connect many different types of data. Bringing broader world data into the hands of practitioners (policymakers, strategists, and others) who use scientometrics as a tool can extend their capabilities. These ideas are explored through connecting Dimensions and World Bank data on Google BigQuery to study international collaboration between countries of different economic classification.
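Connecting to such data from Python is short. This is a minimal sketch with the official BigQuery client; the SQL and public table below are placeholder assumptions, not the Dimensions tables the article discusses:

```python
# Hedged sketch: querying a public World Bank dataset on Google BigQuery.
# Table and column names are assumptions; adapt to the dataset at hand.
from google.cloud import bigquery

client = bigquery.Client()   # uses application-default credentials

sql = """
    SELECT country_code, COUNT(*) AS n_rows
    FROM `bigquery-public-data.world_bank_wdi.indicators_data`
    GROUP BY country_code
    ORDER BY n_rows DESC
    LIMIT 5
"""
for row in client.query(sql).result():
    print(row.country_code, row.n_rows)
```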

RevDate: 2022-05-13

Luo C, Wang S, Li T, et al (2022)

Large-Scale Meta-Heuristic Feature Selection Based on BPSO Assisted Rough Hypercuboid Approach.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

The selection of prominent features for building more compact and efficient models is an important data preprocessing task in the field of data mining. The rough hypercuboid approach is an emerging technique that can be applied to eliminate irrelevant and redundant features, especially for the inexactness problem in approximate numerical classification. By integrating the meta-heuristic-based evolutionary search technique, a novel global search method for numerical feature selection is proposed in this article based on the hybridization of the rough hypercuboid approach and binary particle swarm optimization (BPSO) algorithm, namely RH-BPSO. To further alleviate the issue of high computational cost when processing large-scale datasets, parallelization approaches for calculating the hybrid feature evaluation criteria are presented by decomposing and recombining hypercuboid equivalence partition matrix via horizontal data partitioning. A distributed meta-heuristic optimized rough hypercuboid feature selection (DiRH-BPSO) algorithm is thus developed and embedded in the Apache Spark cloud computing model. Extensive experimental results indicate that RH-BPSO is promising and can significantly outperform the other representative feature selection algorithms in terms of classification accuracy, the cardinality of the selected feature subset, and execution efficiency. Moreover, experiments on distributed-memory multicore clusters show that DiRH-BPSO is significantly faster than its sequential counterpart and is perfectly capable of completing large-scale feature selection tasks that fail on a single node due to memory constraints. Parallel scalability and extensibility analysis also demonstrate that DiRH-BPSO could scale out and extend well with the growth of computational nodes and the volume of data.
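The binary-PSO half of the hybrid is the easiest part to picture. This is a minimal sketch with a stand-in fitness function, since the rough-hypercuboid criterion itself is beyond a few lines:

```python
# Hedged sketch: binary PSO with a sigmoid transfer function for feature
# selection; the fitness function is a toy stand-in for the paper's
# rough-hypercuboid evaluation criterion.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_features = 20, 12
target = rng.integers(0, 2, n_features)        # pretend "ideal" feature subset

def fitness(mask):
    return -np.abs(mask - target).sum()        # toy objective: match target

X = rng.integers(0, 2, (n_particles, n_features))    # positions = bit masks
V = rng.normal(0, 1, (n_particles, n_features))      # velocities
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)  # sigmoid flip
    f = np.array([fitness(x) for x in X])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best feature mask:", gbest, "fitness:", fitness(gbest))
```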

RevDate: 2022-05-13

Jiang F, Deng M, Long Y, et al (2022)

Spatial Pattern and Dynamic Change of Vegetation Greenness From 2001 to 2020 in Tibet, China.

Frontiers in plant science, 13:892625.

Due to the cold climate and dramatically undulating altitude, the identification of dynamic vegetation trends and main drivers is essential to maintain the ecological balance in Tibet. The normalized difference vegetation index (NDVI), as the most commonly used greenness index, can effectively evaluate vegetation health and spatial patterns. MODIS-NDVI (Moderate-resolution Imaging Spectroradiometer-NDVI) data for Tibet from 2001 to 2020 were obtained and preprocessed on the Google Earth Engine (GEE) cloud platform. The Theil-Sen median method and Mann-Kendall test method were employed to investigate dynamic NDVI changes, and the Hurst exponent was used to predict future vegetation trends. In addition, the main drivers of NDVI changes were analyzed. The results indicated that (1) the vegetation NDVI in Tibet significantly increased from 2001 to 2020, and the annual average NDVI value fluctuated between 0.31 and 0.34 at an increase rate of 0.0007 per year; (2) the vegetation improvement area accounted for the largest share of the study area at 56.6%, followed by stable unchanged and degraded areas, with proportions of 27.5 and 15.9%, respectively. The overall variation coefficient of the NDVI in Tibet was low, with a mean value of 0.13; (3) The mean value of the Hurst exponent was 0.53, and the area of continuously improving regions accounted for 41.2% of the study area, indicating that the vegetation change trend was continuous in most areas; (4) The NDVI in Tibet indicated a high degree of spatial agglomeration. However, there existed obvious differences in the spatial distribution of NDVI aggregation areas, and the aggregation types mainly included the high-high and low-low types; and (5) Precipitation and population growth significantly contributed to vegetation cover improvement in western Tibet. In addition, the use of the GEE to obtain remote sensing data combined with time-series data analysis provides the potential to quickly obtain large-scale vegetation change trends.
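The two trend statistics used here are available directly in SciPy. This is a minimal sketch on a synthetic annual NDVI series shaped like the one reported (mean near 0.31-0.34, slope near 0.0007 per year); the values are illustrative:

```python
# Hedged sketch: Theil-Sen slope plus a Kendall-tau (Mann-Kendall-style)
# trend test on a synthetic annual NDVI series; values are illustrative.
import numpy as np
from scipy import stats

years = np.arange(2001, 2021)
rng = np.random.default_rng(0)
ndvi = 0.31 + 0.0007 * (years - 2001) + rng.normal(0, 0.005, years.size)

slope, intercept, lo, hi = stats.theilslopes(ndvi, years)
tau, p_value = stats.kendalltau(years, ndvi)

print(f"Theil-Sen slope: {slope:.5f} per year (95% CI {lo:.5f}..{hi:.5f})")
print(f"Kendall tau: {tau:.3f}, p = {p_value:.4f}")
```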

RevDate: 2022-05-10

Lee SH, Park J, Yang K, et al (2022)

Accuracy of Cloud-Based Speech Recognition Open Application Programming Interface for Medical Terms of Korean.

Journal of Korean medical science, 37(18):e144 pii:37.e144.

BACKGROUND: There are limited data on the accuracy of cloud-based speech recognition (SR) open application programming interfaces (APIs) for medical terminology. This study aimed to evaluate the medical term recognition accuracy of current available cloud-based SR open APIs in Korean.

METHODS: We analyzed the SR accuracy of currently available cloud-based SR open APIs using real doctor-patient conversation recordings collected from an outpatient clinic at a large tertiary medical center in Korea. For each original and SR transcription, we analyzed the accuracy rate of each cloud-based SR open API (i.e., the number of medical terms in the SR transcription per number of medical terms in the original transcription).

RESULTS: A total of 112 doctor-patient conversation recordings were converted with three cloud-based SR open APIs (Naver Clova SR from Naver Corporation; Google Speech-to-Text from Alphabet Inc.; and Amazon Transcribe from Amazon), and each transcription was compared. Naver Clova SR (75.1%) showed the highest accuracy with the recognition of medical terms compared to the other open APIs (Google Speech-to-Text, 50.9%, P < 0.001; Amazon Transcribe, 57.9%, P < 0.001), and Amazon Transcribe demonstrated higher recognition accuracy compared to Google Speech-to-Text (P < 0.001). In the sub-analysis, Naver Clova SR showed the highest accuracy in all areas according to word classes, but the accuracy of words longer than five characters showed no statistical differences (Naver Clova SR, 52.6%; Google Speech-to-Text, 56.3%; Amazon Transcribe, 36.6%).

CONCLUSION: Among the three current cloud-based SR open APIs, Naver Clova SR, which is made by a Korean company, showed the highest accuracy for medical terms in Korean, compared to Google Speech-to-Text and Amazon Transcribe. Although limitations exist in the recognition of medical terminology, there is much room for improvement in this promising technology by combining the strengths of the different SR engines.
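The accuracy rate defined in METHODS (medical terms recovered per terms in the reference transcription) is straightforward to compute. This is a minimal sketch; the term lexicon and example strings are placeholders:

```python
# Hedged sketch: per-transcription medical-term accuracy as described in
# METHODS; the lexicon and example strings are placeholders.
def term_accuracy(reference: str, sr_output: str, medical_terms: set) -> float:
    ref_terms = [w for w in reference.lower().split() if w in medical_terms]
    hyp_words = sr_output.lower().split()
    hits = sum(1 for t in ref_terms if t in hyp_words)
    return hits / len(ref_terms) if ref_terms else 0.0

lexicon = {"hypertension", "metformin", "hba1c"}       # placeholder lexicon
print(term_accuracy("start metformin for hypertension",
                    "start met forming for hypertension", lexicon))  # -> 0.5
```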

RevDate: 2022-05-10

Chai M (2022)

Design of Rural Human Resource Management Platform Integrating IoT and Cloud Computing.

Computational intelligence and neuroscience, 2022:4133048.

With the advent of the Internet of Things (IoT) era, technologies such as distributed and parallel computing, network storage, and load balancing provide a good application foundation for the IoT, making real-time dynamic management and intelligent analysis of hundreds of millions of items possible. The Internet of Things has changed from a concept to a reality, quickly reaching every corner of society. On the other hand, as the mobility of talent increases, file management at talent service centers is becoming more and more difficult. Traditional human resources file management suffers from poor resource sharing, asymmetric resources, and heterogeneous information sharing, and can no longer meet the needs of both the supply and demand sides of human resources across diverse and multiple organizational structures. Cloud computing technology has powerful data collection, self-service, and unified resource scheduling functions; introducing it into a human resources file management system can greatly improve management efficiency. To support information management of rural human resources, this paper develops a rural human resources management system based on the Internet of Things and introduces a design scheme for a rural human resource management platform based on IoT and cloud computing technology. The system design mainly includes function modules for organization setting, post planning, personnel management, salary management, insurance benefits, recruitment and selection, training management, performance appraisal management, labor contract management, comprehensive inquiry, rules and regulations, employee self-help, system setting, and system management. The research results show that a rural human resource management system based on cloud computing can provide a complete human resource management solution for rural areas: organizations need only purchase services, saving substantial development and maintenance costs, and can customize functions to better meet their needs.

RevDate: 2022-05-09

Xu H, Yu W, Griffith D, et al (2018)

A Survey on Industrial Internet of Things: A Cyber-Physical Systems Perspective.

IEEE access : practical innovations, open solutions, 6:.

The vision of Industry 4.0, otherwise known as the fourth industrial revolution, is the integration of massively deployed smart computing and network technologies in industrial production and manufacturing settings for the purposes of automation, reliability, and control, implicating the development of an Industrial Internet of Things (I-IoT). Specifically, I-IoT is devoted to adopting the Internet of Things (IoT) to enable the interconnection of anything, anywhere, and at anytime in the manufacturing system context to improve the productivity, efficiency, safety and intelligence. As an emerging technology, I-IoT has distinct properties and requirements that distinguish it from consumer IoT, including the unique types of smart devices incorporated, network technologies and quality of service requirements, and strict needs of command and control. To more clearly understand the complexities of I-IoT and its distinct needs, and to present a unified assessment of the technology from a systems perspective, in this paper we comprehensively survey the body of existing research on I-IoT. Particularly, we first present the I-IoT architecture, I-IoT applications (i.e., factory automation (FA) and process automation (PA)) and their characteristics. We then consider existing research efforts from the three key systems aspects of control, networking and computing. Regarding control, we first categorize industrial control systems and then present recent and relevant research efforts. Next, considering networking, we propose a three-dimensional framework to explore the existing research space, and investigate the adoption of some representative networking technologies, including 5G, machine-to-machine (M2M) communication, and software defined networking (SDN). Similarly, concerning computing, we again propose a second three-dimensional framework that explores the problem space of computing in I-IoT, and investigate the cloud, edge, and hybrid cloud and edge computing platforms. Finally, we outline particular challenges and future research needs in control, networking, and computing systems, as well as for the adoption of machine learning, in an I-IoT context.

RevDate: 2022-05-09

Munjal K, R Bhatia (2022)

A systematic review of homomorphic encryption and its contributions in healthcare industry.

Complex & intelligent systems pii:756 [Epub ahead of print].

Cloud computing and cloud storage have contributed to a big shift in data processing and its use. The availability and accessibility of resources, with a substantial reduction in work, are among the main reasons for the cloud revolution. With this revolution, outsourcing applications are in great demand. The client uses a service by uploading data to the cloud and finally receives the result after processing. This benefits users greatly, but it also exposes sensitive data to third-party service providers. In the healthcare industry, patient health records are digital records of a patient's medical history kept by hospitals or health care providers, and they are stored in data centers for storage and processing. Traditional encryption techniques decrypt the data into their original form before computations can be performed; as a result, sensitive medical information is exposed. Homomorphic encryption can protect sensitive information by allowing data to be processed in encrypted form, such that only encrypted data is accessible to service providers. In this paper, an attempt is made to present a systematic review of homomorphic cryptosystems, with their categorization and evolution over time. In addition, this paper reviews the contributions of homomorphic cryptosystems in healthcare.
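The core idea (computing on data while it stays encrypted) fits in a few lines with a partially homomorphic scheme. This is a minimal sketch using the third-party python-paillier library ("phe"), shown as a generic illustration rather than any specific scheme from the review:

```python
# Hedged sketch: additively homomorphic encryption with python-paillier.
# A server can aggregate patient readings without ever decrypting them.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

readings = [120, 118, 125]                        # e.g., blood-pressure values
encrypted = [public_key.encrypt(x) for x in readings]

# Ciphertext addition corresponds to plaintext addition (Paillier property).
encrypted_sum = encrypted[0] + encrypted[1] + encrypted[2]

print("decrypted mean:", private_key.decrypt(encrypted_sum) / len(readings))
```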

RevDate: 2022-05-09

Kumar V, Mahmoud MS, Alkhayyat A, et al (2022)

RAPCHI: Robust authentication protocol for IoMT-based cloud-healthcare infrastructure.

The Journal of supercomputing pii:4513 [Epub ahead of print].

With the fast growth of technologies like cloud computing, big data, the Internet of Things, artificial intelligence, and cyber-physical systems, the demand for data security and privacy in communication networks is growing by the day. Patients and doctors connect securely through the Internet using Internet of Medical Things devices in cloud-healthcare infrastructure (CHI), and doctors offer patients online treatment. Unfortunately, hackers are gaining access to data at an alarming pace; in 2019, healthcare systems were compromised by attackers 41.4 million times. In this context, we provide a secure and lightweight authentication scheme (RAPCHI) for CHI employing the Internet of Medical Things (IoMT) during a pandemic, based on cryptographic primitives. The suggested framework is more secure than existing frameworks and is resistant to a wide range of security threats. The paper also explains the random oracle model (ROM) and uses two alternative approaches to validate the formal security analysis of RAPCHI. Further, the paper shows that RAPCHI is safe against man-in-the-middle and replay attacks using the simulation program AVISPA. In addition, the paper compares RAPCHI to related frameworks and finds that it is relatively light in terms of computation and communication. These findings demonstrate that the proposed paradigm is suitable for use in real-world scenarios.
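For readers new to the area, the nonce-plus-MAC pattern that lightweight authentication schemes build on looks like the following. This is a textbook challenge-response sketch, not RAPCHI's actual protocol:

```python
# Hedged sketch: generic HMAC challenge-response authentication. A fresh
# nonce defeats replay; HMAC proves knowledge of the pre-shared key.
import hmac, hashlib, secrets

shared_key = secrets.token_bytes(32)   # established at device registration

# Server -> device: a fresh random challenge.
nonce = secrets.token_bytes(16)

# Device -> server: MAC over the challenge using the shared key.
device_tag = hmac.new(shared_key, nonce, hashlib.sha256).digest()

# Server: recompute and compare in constant time.
expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(device_tag, expected))
```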

RevDate: 2022-05-09

Gao J (2022)

Network Intrusion Detection Method Combining CNN and BiLSTM in Cloud Computing Environment.

Computational intelligence and neuroscience, 2022:7272479.

A network intrusion detection method combining a CNN and a BiLSTM network is proposed. First, the KDD CUP 99 data set is preprocessed with a data extraction algorithm and transformed into an image data set through data cleaning, data extraction, and data mapping. Second, the CNN is used to extract parallel local features of the attribute information, and the BiLSTM is used to extract features of long-distance-dependent information, so as to fully consider the influence between preceding and following attribute information; an attention mechanism is introduced to improve classification accuracy. Finally, a C5.0 decision tree and the CNN-BiLSTM deep learning model are combined to skip hand-designed feature selection and directly use the deep learning model to learn representational features of the high-dimensional data. Experimental results show that, compared with methods based on AE-AlexNet and SGM-CNN, the network intrusion detection effect of this method is better: the average accuracy can be improved to 95.50%, the false-positive rate can be reduced to 4.24%, and the false-negative rate can be reduced to 6.66%. The proposed method can significantly improve the performance of network intrusion detection systems.
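The network shape described above is easy to express in Keras. This is a minimal sketch without the paper's attention mechanism or C5.0 stage; layer sizes are illustrative, though KDD CUP 99 records do have 41 attributes and five coarse classes:

```python
# Hedged sketch: a simplified CNN + BiLSTM classifier for 41-attribute
# KDD CUP 99 records; omits the paper's attention and C5.0 components.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(41, 1)),                 # one record as a sequence
    layers.Conv1D(64, 3, activation="relu"),     # parallel local features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),       # long-distance dependencies
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),       # normal + 4 attack families
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```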

RevDate: 2022-05-09

Ahmed K, M Saini (2022)

FCML-gait: fog computing and machine learning inspired human identity and gender recognition using gait sequences.

Signal, image and video processing pii:2217 [Epub ahead of print].

Security threats arise whenever human intruders are not identified and recognized in time in highly security-sensitive environments such as the military, airports, parliament houses, and banks. Fog computing and machine learning algorithms applied to gait sequences can restrict intruders promptly. Gait recognition allows an individual to be observed unobtrusively, without any direct cooperation or interaction from the person, making it more attractive than other biometric recognition techniques. In this paper, Fog Computing and Machine Learning Inspired Human Identity and Gender Recognition using Gait Sequences (FCML-Gait) is proposed. Internet of Things (IoT) devices and video-capturing sensors are used to acquire the data. Frames are grouped with the affinity propagation (AP) clustering technique, and a cluster-based averaged gait image (C-AGI) feature is computed for each cluster. For training and testing, sparse reconstruction-based metric learning (SRML) and Speeded Up Robust Features (SURF) with a support vector machine (SVM) are applied in the fog layer, to improve processing, on the benchmark gait database ADSC-AWD, which contains 80 subjects of 20 different individuals. Performance metrics including accuracy, precision, recall, F-measure, C-time, and R-time have been measured, and a comparative evaluation against the existing SRML technique shows that the proposed FCML-Gait outperforms it, attaining the highest accuracy of 95.49%.
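The cluster-then-classify pipeline can be sketched with scikit-learn as below. The random vectors stand in for per-frame gait descriptors (e.g., SURF features), and the SRML stage is omitted, so everything here is an illustrative assumption rather than the authors' implementation.

```python
# Sketch of the cluster-then-classify pipeline with scikit-learn.
# Random vectors stand in for per-frame gait features; shapes and
# labels are illustrative assumptions only.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVC

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64))          # 200 frames, 64-d features each

# Step 1: group similar frames with affinity propagation
# (no cluster count needs to be chosen in advance).
ap = AffinityPropagation(random_state=0).fit(frames)

# Step 2: average the frames in each cluster, analogous to the
# cluster-based averaged gait image (C-AGI) feature.
cagi = np.stack([frames[ap.labels_ == c].mean(axis=0)
                 for c in np.unique(ap.labels_)])

# Step 3: train an SVM on the averaged features (toy labels here,
# e.g., gender classes).
labels = rng.integers(0, 2, size=len(cagi))
clf = SVC(kernel="rbf").fit(cagi, labels)
print(clf.predict(cagi[:5]))
```

Averaging within clusters before classification reduces per-frame noise and shrinks the number of samples the SVM must handle, which is what makes the approach attractive for resource-constrained fog nodes.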
