ACM Computing Surveys (CSUR)

Latest Articles

A Critical Review of Proactive Detection of Driver Stress Levels Based on Multimodal Measurements

Stress is a major concern in daily life, as it imposes significant and growing health and economic... (more)

A Survey on Gait Recognition

Recognizing people by their gait has become increasingly popular for the following reasons. First, gait recognition can work well remotely. Second, it can be done from low-resolution videos and with simple instrumentation. Third, it can be done without the cooperation of individuals. Fourth, gait recognition... (more)

A Survey on Game-Theoretic Approaches for Intrusion Detection and Response Optimization

Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or... (more)

Is Multimedia Multisensorial? - A Review of Mulsemedia Systems

Mulsemedia—multiple sensorial media—makes possible the inclusion of layered sensory stimulation and interaction through multiple... (more)

A Survey on Deep Learning: Algorithms, Techniques, and Applications

The field of machine learning is witnessing its golden era as deep learning slowly becomes the leader in this domain. Deep learning uses multiple layers to represent the abstractions of data to build computational models. Some key enabling deep learning algorithms such as generative adversarial... (more)

A Survey of Methods for Explaining Black Box Models

In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the... (more)

Security of Distance-Bounding: A Survey

Distance-bounding protocols allow a verifier to both authenticate a prover and evaluate whether the latter is located in his vicinity. These protocols are of particular interest in contactless systems, e.g., electronic payment or access control systems, which are vulnerable to distance-based frauds. This survey analyzes and compares in a unified manner many existing distance-bounding protocols... (more)
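The core mechanism behind distance-bounding is simple to illustrate: the verifier times a rapid challenge-response round trip, and since signals travel no faster than light, the measured time yields an upper bound on the prover's distance. The sketch below is a toy numeric illustration of this idea, not any specific surveyed protocol; all timings and thresholds are made up.

```python
# Toy distance-bounding check: convert a measured round-trip time into
# an upper bound on the prover's distance and compare it to a policy
# threshold. Values are illustrative, not from any real protocol.

C = 299_792_458.0  # speed of light, m/s

def distance_upper_bound(round_trip_s: float, processing_s: float = 0.0) -> float:
    """Distance bound implied by a round-trip time, net of processing delay."""
    return C * max(round_trip_s - processing_s, 0.0) / 2.0

def accept(round_trip_s: float, max_distance_m: float) -> bool:
    """Verifier accepts only if the prover is provably within range."""
    return distance_upper_bound(round_trip_s) <= max_distance_m

# A contactless card 10 cm away: round trip of about 0.67 ns.
near = 2 * 0.10 / C
# A relay attack adds roughly a microsecond of extra delay,
# inflating the distance bound to ~150 m.
far = near + 1e-6
```

Even a very small relay delay dominates the legitimate round-trip time, which is why these protocols rely on extremely fast challenge-response phases.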

Triclustering Algorithms for Three-Dimensional Data Analysis: A Comprehensive Survey

Three-dimensional data are increasingly prevalent across biomedical and social domains. Notable examples are gene-sample-time,... (more)

A Survey on Compiler Autotuning using Machine Learning

Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization... (more)

A Survey on Self-Adaptive Security for Large-scale Open Environments

Contemporary software systems operate in heterogeneous, dynamic, and distributed environments, where security needs change at runtime. The security... (more)


About CSUR

ACM Computing Surveys (CSUR) publishes comprehensive, readable tutorials and survey papers that give guided tours through the literature and explain topics to those who seek to learn the basics of areas outside their specialties. These carefully planned and presented introductions are also an excellent way for professionals to develop perspectives on, and identify trends in, complex technologies. Recent issues have covered image understanding, software reusability, and object and relational database topics.

Host-based Intrusion Detection System with System Calls: Review and Future Trends

Contemporary Linux applications in data centers often generate large quantities of real-time system call traces, which are not well suited to traditional host-based intrusion detection systems (HIDS) deployed on each single host. Training and testing data mining models with system call traces on a single host with static computing and storage capacity is time-consuming, and the intermediate datasets cannot be handled. Maintaining and updating a HIDS installed on each physical or virtual host is cumbersome, and distributed system call analysis can hardly be performed to detect complicated, distributed attacks spanning multiple hosts. This paper provides a review of the development of system-call-based HIDS as well as future research directions. Algorithms and techniques relevant to system-call-based HIDS are evaluated, including feature extraction methods and various data mining algorithms. Modern application areas for system-call-based HIDS are summarized and related works are investigated. HIDS dataset issues, including currently available datasets with system calls and approaches for researchers to generate their own datasets, are discussed. Potential HIDS solutions based on cloud computing and big data tools are also provided.
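One classic feature extraction method in this line of work is the fixed-length n-gram ("stide"-style) scheme: record the n-grams of system calls seen in normal traces, then flag traces containing unseen n-grams. The sketch below is illustrative only; the trace contents are made up and no surveyed system is reproduced.

```python
# Sketch of n-gram-based anomaly scoring over system call traces.
# Training records the 3-grams of normal traces; detection reports the
# fraction of a trace's 3-grams never seen during training.

def ngrams(trace, n=3):
    """Set of length-n sliding-window tuples over a call trace."""
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def train(normal_traces, n=3):
    profile = set()
    for t in normal_traces:
        profile |= ngrams(t, n)
    return profile

def anomaly_score(profile, trace, n=3):
    """Fraction of the trace's n-grams absent from the normal profile."""
    grams = ngrams(trace, n)
    if not grams:
        return 0.0
    return len(grams - profile) / len(grams)

normal = [["open", "read", "write", "close"],
          ["open", "read", "read", "write", "close"]]
profile = train(normal)
```

A trace matching the training behavior scores 0.0, while one full of novel call sequences scores near 1.0; real systems add thresholds, windowing, and richer features on top of this core.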

Countermeasures Against Worms Spreading: A New Challenge for Vehicular Networks

VANETs, as an essential component of intelligent transport systems, attract more and more attention. As multifunction nodes capable of transport, sensing, information processing, and wireless communication, vehicular nodes are more vulnerable to worms than conventional hosts. Worms spreading on vehicular networks not only seriously threaten the security of vehicular ad hoc networks but also imperil onboard passengers and public safety. It is therefore indispensable to study and analyze the characteristics of worm propagation on VANETs. In this paper, we first briefly introduce computer worms and then survey the recent literature on worm spreading on VANETs. The models developed for VANET worm spreading and several counter-strategies are compared and discussed.

A Survey on Brain Biometrics

Brainwaves, which reflect brain electrical activity and have long been studied in cognitive neuroscience, have recently been proposed as a promising biometric approach due to their unique advantages of confidentiality, resistance to impersonation, sensitivity to emotional and mental state, continuous nature, and cancellability. Recent research efforts have explored many possible ways of using brain biometrics and demonstrated that they are a promising candidate for more robust and secure person identification and authentication. Although existing research on brain biometrics has obtained some intriguing insights, much work is still necessary to achieve a reliable, ready-to-deploy brain biometric system. This article aims to provide a detailed survey of the current literature and outline the scientific work conducted on brain biometric systems. It provides an up-to-date review of state-of-the-art acquisition, collection, processing, and analysis of brainwave signals, publicly available databases, feature extraction and selection, and classifiers. Furthermore, it highlights some of the emerging open research problems for brain biometrics, including multimodality, security, permanence, and stability.

A Survey of On-Chip Optical Interconnects

Numerous challenges present themselves when scaling traditional on-chip electrical networks to large manycore processors. Some of these challenges include high latency, limitations on bandwidth, and power consumption. Researchers have, therefore, been looking for alternatives with the result that on-chip nanophotonics has emerged as a strong substitute for traditional electrical NoCs. As of 2016, on-chip optical networks have moved out of textbooks and found commercial applicability in short-haul networks such as links between servers on the same rack or between two components on the motherboard. It is widely acknowledged that in the near future, optical technologies will move beyond research prototypes and find their way into the chip. Optical networks already feature in the roadmaps of major processor manufacturers and most on-chip optical devices are beginning to show signs of maturity. This paper is designed to provide a survey of on-chip optical technologies covering the basic physics, optical devices, popular architectures, power reduction techniques, and applications. The aim of this paper is to start from the fundamental concepts, and move on to the latest in the field of on-chip optical interconnects.

Towards the Decentralised Cloud: Survey on Approaches and Challenges for Mobile, Ad-Hoc and Edge Computing

The Cloud emerged as a centralised approach that made "infinite" computing resources available on demand. Nevertheless, the ever-increasing computing capacity available in smart connected things and devices calls for the decentralisation of computing, in order to avoid unnecessary latencies and fully exploit the computing capacity available at the edges of the network. While these decentralised Cloud models are a significant breakthrough from the Cloud perspective, they build on existing research areas such as Mobile Cloud Computing, Mobile Ad-hoc Computing and Edge Computing. This work analyses these existing works so as to assess their role in decentralised cloud and future computing development.

Parallel Computing of Support Vector Machines: A Survey

Parallel computing is important for improving the performance of support vector machines on large-scale problems. In this paper, parallel implementations of support vector machines are reviewed and categorized into parallel decomposition, parallel incremental, cascade, parallel IPM, parallel kernel computation, parallel distributed, and parallel optimization approaches. All approaches address, to varying degrees, four concerns: memory, speedup, scalability, and accuracy. The review shows that parallel decomposition and parallel kernel computation, along with the map-reduce parallel model, are the dominant approaches. Map-reduce, parallel incremental, and parallel combination approaches are necessary for solving very large-scale problems.
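Of these categories, parallel kernel computation is the easiest to sketch: each row of the Gram matrix depends only on the full dataset and one training point, so rows can be computed independently. The toy sketch below (dataset, kernel width, and worker count are all illustrative) splits RBF kernel rows across a thread pool; it shows the parallel structure only, not any surveyed implementation.

```python
# Parallel Gram (kernel) matrix computation: rows are independent, so a
# thread pool can compute them concurrently. Pure-stdlib illustration.
import math
from concurrent.futures import ThreadPoolExecutor

def rbf(x, y, gamma=0.5):
    """RBF kernel exp(-gamma * ||x - y||^2) between two points."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def gram_row(i, X, gamma=0.5):
    """Row i of the Gram matrix: kernel of X[i] against every point."""
    return [rbf(X[i], x, gamma) for x in X]

def gram_matrix(X, workers=4, gamma=0.5):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = pool.map(lambda i: gram_row(i, X, gamma), range(len(X)))
    return list(rows)

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = gram_matrix(X)
```

Production systems distribute this across processes or machines rather than threads, but the decomposition of the kernel matrix into independent pieces is the same.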

A systematic review for smart city data analytics

Smart cities (SC) are becoming highly sophisticated ecosystems in which innovative solutions and smart services are deployed. These ecosystems treat the SC as a data production and sharing engine, setting new challenges for building effective SC architectures and novel services. The aim of this paper is to connect the pieces between the Data Science and SC domains with a systematic literature review that identifies the core topics, services, and methods applied in SC data monitoring. The survey focuses on data harvesting and data mining processes over repeated SC data cycles. A survey protocol is followed to capture both quantitative and semantically important entities. The review results yield useful taxonomies for data scientists in the SC context, offering clear guidelines for future work. Specifically, a taxonomy is proposed for each of the main SC data entities: the D Taxonomy for data production, the M Taxonomy for data analytics methods, and the S Taxonomy for smart services. Each taxonomy places entities in a classification that is beneficial for multiple stakeholders and for multiple domains of urban smartness. Indicative scenarios are outlined, and the conclusions are promising for systematization.

Gait-based Person Re-identification: a Survey

The way people walk is a strong correlate of their identity. Several studies have shown that both humans and machines can recognize individuals by their gait alone, given that proper measurements of the observed motion patterns are available. For surveillance applications, gait is also attractive because it does not require active collaboration from users and is hard to fake. However, acquiring good-quality measures of a person's motion patterns in unconstrained environments (e.g., in person re-identification applications) has proved very challenging in practice. Existing technology (video cameras) suffers from changes in viewpoint, daylight, clothing, worn accessories, and other variations in a person's appearance. Novel 3D sensors are bringing new promise to the field, but many research issues remain open. This paper presents a survey of the work done in gait analysis for re-identification over the last decade, looking at the main approaches, datasets, and evaluation methodologies. We identify several relevant dimensions of the problem and provide a taxonomic analysis of the current state of the art. Finally, we discuss the levels of performance achievable with current technology and give a perspective on the most challenging and promising directions of research for the future.

Relation Extraction Using Distant Supervision: a Survey

Relation extraction is a subtask of information extraction in which semantic relationships are extracted from natural language text and then classified. In essence, it makes it possible to acquire structured knowledge from unstructured text. In this work, we present a survey of relation extraction methods that leverage pre-existing structured or semi-structured data to guide the extraction process. We introduce a taxonomy of existing methods and describe distant supervision approaches in detail. We also describe the evaluation methodologies and the datasets commonly used for quality assessment. Finally, we give a high-level outlook on the field, highlighting open problems as well as the most promising research directions.

Deep Learning based Recommender System: A Survey and New Perspectives

With the ever-growing volume, complexity and dynamicity of online information, recommender systems have been an effective solution to information overload. In recent years, deep learning's revolutionary advances in speech recognition, image analysis and natural language processing have gained significant attention. Recent studies also demonstrate its effectiveness in information retrieval and recommendation tasks. Applying deep learning techniques to recommender systems has been gaining momentum due to its state-of-the-art performance and high-quality recommendations. In contrast to traditional recommendation models, deep learning provides a better understanding of users' demands, items' characteristics, and the historical interactions between them. This article aims to provide a comprehensive review of recent research efforts on deep learning based recommender systems, towards fostering innovation in recommender system research. A taxonomy of deep learning based recommendation models is presented and used to categorize the surveyed articles. Open problems are identified based on analysis of the reviewed works, and potential solutions are discussed.

Sustainable Offloading in Mobile Cloud Computing: Algorithmic Design and Implementation

The concept of Mobile Cloud Computing (MCC) allows mobile devices to extend their capabilities, enhancing computing power, expanding storage capacity, and prolonging battery life. MCC provides these enhancements by essentially offloading tasks and data to the Cloud resource pool. In particular, MCC-based energy-aware offloading draws increasing attention due to the recent steep increase in the number of mobile applications and the enduring limitations of lithium battery technologies. This work gathers and analyzes recent energy-aware offloading protocols and architectures, which target prolonging battery life through load relief. These recent solutions concentrate on energy-aware resource management issues of mobile devices and Cloud resources in the scope of task offloading. This survey provides a comparison among system architectures by identifying their notable advantages and disadvantages. The existing enabling frameworks are categorized and compared based on the stage of the task offloading process and resource management types. This study then ends by presenting a discussion on open research issues and potential solutions.

Issues and Challenges of Load Balancing Techniques in Cloud Computing: A Survey

With the growth of computing technologies, Cloud Computing has added a new paradigm to user services, allowing access to Information Technology (IT) services on a pay-per-use basis, anytime and from any location. Due to the flexibility of cloud services, a large number of organizations are shifting their business to the cloud, and service providers are establishing more data centers to serve users. However, there is constant pressure to provide cost-effective execution of tasks and proper utilization of resources. In the literature, plenty of work has been done to improve performance and resource usage based on load balancing, task scheduling, resource management, quality of service (QoS) and workload management. Load balancing in the cloud enables data centers to avoid overloading or under-loading virtual machines, which is itself a challenge in the field of cloud computing. It is therefore necessary for developers and researchers to design and implement suitable load balancers for parallel and distributed cloud environments. This paper provides an insight into the strengths and weaknesses of existing load balancing techniques, along with their open issues, to help researchers develop more effective algorithms.
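As a concrete illustration of the kind of policy such load balancers implement, here is a minimal sketch of the classic least-connections rule: each incoming task goes to the virtual machine currently handling the fewest active tasks. The VM names and in-memory bookkeeping are illustrative, not taken from any surveyed system.

```python
# Least-connections load balancing sketch: track active task counts per
# VM and always assign new work to the least-loaded VM.

class LeastConnections:
    def __init__(self, vms):
        self.active = {vm: 0 for vm in vms}  # active task count per VM

    def assign(self):
        """Pick the VM with the fewest active tasks and record the new task."""
        vm = min(self.active, key=self.active.get)
        self.active[vm] += 1
        return vm

    def finish(self, vm):
        """Mark one task on this VM as completed."""
        self.active[vm] -= 1

lb = LeastConnections(["vm-a", "vm-b", "vm-c"])
first = lb.assign()
second = lb.assign()
third = lb.assign()
```

Real cloud balancers weigh this against heterogeneous VM capacities, migration costs, and QoS constraints, which is where most of the surveyed complexity lies.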

Linked Vocabulary Recommendation Tools for Internet of Things: A Survey

The Semantic Web emerged with the vision of eased integration of heterogeneous, distributed data on the Web. The approach fundamentally relies on the linkage between and reuse of previously published vocabularies to facilitate semantic interoperability. In recent years, the Semantic Web has been perceived as a potential enabling technology to overcome interoperability issues in the Internet of Things (IoT), especially for service discovery and composition. Despite the importance of making vocabulary terms discoverable and of selecting the most suitable ones for forthcoming IoT applications, no state-of-the-art survey of tools achieving such recommendation tasks exists to date. This survey covers this gap by specifying an extensive evaluation framework and assessing linked vocabulary recommendation tools. Furthermore, we discuss challenges and opportunities of vocabulary recommendation and related tools in the context of emerging IoT ecosystems. Overall, 40 recommendation tools for linked vocabularies were evaluated, both empirically and experimentally. Key findings include: (i) many tools neglect to thoroughly address both the curation of a vocabulary collection and effective selection mechanisms; (ii) modern information retrieval techniques are underrepresented; and (iii) the reviewed tools that emerged from Semantic Web use cases are not yet sufficiently extended to fit today's IoT projects.

A Survey of Cloudlet-based Mobile Augmentation Approaches for Resource Optimization

Mobile Devices (MDs) face resource scarcity challenges due to limited energy and computational resources. Mobile Cloud Computing (MCC) offers MDs a resource-rich environment for offloading compute-intensive tasks, addressing these challenges. However, users are unable to exploit its full potential due to the distance, limited bandwidth, and lack of seamless connectivity between the Remote Cloud (RC) and mobile devices in the conventional MCC model. Cloudlet-based solutions are widely used to address these challenges. A cloudlet-based solution responds faster than the conventional mobile cloud computing model, rendering it suitable for the Internet of Things (IoT) and Smart Cities (SC). However, with the increase in devices and workloads, cloudlet-based solutions also face resource scarcity and must forward requests to the remote cloud. This study provides an insight into existing cloudlet-based Mobile Cloud Augmentation (CtMA) approaches and highlights their underlying limitations. Furthermore, numerous performance parameters are identified, and a detailed comparative analysis is used to quantify the efficiency of CtMA approaches.

Brownout Approach for Adaptive Management of Resources and Applications in Cloud Computing Systems: A Taxonomy and Future Directions

Cloud computing has been regarded as an emerging approach to provisioning resources and managing applications. It provides attractive features such as an on-demand model, enhanced scalability, and reduced management costs. However, cloud computing systems continue to face problems such as hardware failures, overloads caused by unexpected workloads, and energy waste due to inefficient resource utilization, all of which result in resource shortages and application issues such as delays or eventual saturation. A paradigm named brownout has been applied to handle these issues by adaptively activating or deactivating optional parts of applications or services to manage resource usage in cloud computing systems. Brownout has been shown to avoid overloads due to changes in workload and to achieve better load balancing and energy savings. This paper proposes a taxonomy of brownout approaches for adaptively managing resources and applications in cloud computing systems and carries out a comprehensive survey. It identifies open challenges and offers future research directions.
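The brownout mechanism is often described as a "dimmer": a value in [0, 1] giving the probability that optional application content is served. A simple feedback controller lowers the dimmer when measured response time exceeds a setpoint (shedding optional work) and raises it when there is headroom. The sketch below is a toy proportional controller with made-up gains and setpoints, not code from any surveyed system.

```python
# Brownout dimmer sketch: a proportional controller on response time.
# dimmer = probability of serving optional content, clamped to [0, 1].

def update_dimmer(dimmer, response_time, setpoint, gain=0.5):
    """One control step: positive error (headroom) raises the dimmer,
    negative error (overload) lowers it."""
    error = (setpoint - response_time) / setpoint
    return min(1.0, max(0.0, dimmer + gain * error))

# Overload: response time at twice the setpoint pushes the dimmer down.
d_overload = update_dimmer(1.0, response_time=2.0, setpoint=1.0)
# Recovery: fast responses let the dimmer climb back up.
d_recover = update_dimmer(0.2, response_time=0.5, setpoint=1.0)
```

Published brownout controllers typically use PI control with adaptive gains, but the activate/deactivate-optional-parts feedback loop is the same idea.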

Engagement in HCI: Conception, Theory and Measurement

The design of usable and useful products and services is supported by an understanding of users' needs and experiences. For designers of products and services of every kind, engaging users is a priority. However, to date, there has been limited analysis of how HCI and computer science research deals with engagement. Questions persist concerning the conception, abstraction, and measurement of this concept. This paper presents a systematic review of engagement, drawing upon a final corpus of 351 papers and 102 unique definitions of engagement. We describe the diversity of interpretation, theory, and measurement of engagement, along with strategies for the design of engaging experiences. We map the current state of engagement research, discuss the value of the concept and its relationship to other popular terms, and present a set of guidelines and opportunities for future research.

Edge Cloud Offloading Algorithms: Issues, Methods, and Perspectives

Mobile devices supporting the Internet of Things (IoT) often have limited capabilities in computation, battery energy, and storage, especially when supporting resource-intensive applications involving virtual reality (VR), augmented reality (AR), multimedia delivery and artificial intelligence (AI), which can require broad bandwidth, low response latency and large computational power. Edge cloud, or edge computing, is an emerging topic and technology that can tackle the deficiencies of the currently centralized-only cloud computing model by moving computation and storage resources closer to the devices that support the above-mentioned applications. To make this happen, efficient coordination mechanisms and "offloading" algorithms are needed to allow mobile devices and the edge cloud to work together smoothly. In this survey paper, we investigate the key issues, methods, and various state-of-the-art efforts related to the offloading problem. We adopt a new characterizing model to study the whole process of offloading from mobile devices to the edge cloud. Through comprehensive discussion, we aim to draw an overall "big picture" of the existing efforts and research directions. Our study also indicates that offloading algorithms in the edge cloud have demonstrated profound potential for future technology and application development.
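Many offloading algorithms build on one basic trade-off: offload a task only if transmitting its input to the edge plus computing there beats computing locally. The sketch below illustrates this latency comparison with made-up parameter values; it is a toy baseline, not the characterizing model proposed by the paper.

```python
# Toy offloading decision: compare local execution time against
# transmission time plus remote execution time at the edge.

def local_time(cycles, f_local):
    """Seconds to run the task on the device CPU (cycles / frequency)."""
    return cycles / f_local

def offload_time(cycles, data_bits, bandwidth_bps, f_edge):
    """Seconds to upload the input plus run the task on the edge CPU."""
    return data_bits / bandwidth_bps + cycles / f_edge

def should_offload(cycles, data_bits, bandwidth_bps, f_local, f_edge):
    return offload_time(cycles, data_bits, bandwidth_bps, f_edge) < \
           local_time(cycles, f_local)

# Heavy task, small input: the faster edge CPU wins despite the uplink.
heavy = should_offload(cycles=5e9, data_bits=1e6, bandwidth_bps=1e7,
                       f_local=1e9, f_edge=1e10)
# Light task, large input: the uplink dominates, so compute locally.
light = should_offload(cycles=1e8, data_bits=1e9, bandwidth_bps=1e7,
                       f_local=1e9, f_edge=1e10)
```

The surveyed algorithms extend this with energy models, multiple users competing for edge resources, and partial (per-component) offloading.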

A Survey and Taxonomy of Core Concepts and Research Challenges in Cross-Platform Mobile Development

Developing applications targeting mobile devices is a complex task involving numerous options, technologies and trade-offs, largely due to the proliferation and fragmentation of devices and platforms. As a result, cross-platform app development has enjoyed the attention of practitioners and academia for the past decade. Throughout this review, we assess the academic body of knowledge and report on the state of research in the field, with a particular emphasis on core concepts, including user experience, device features, performance, and security. Our findings illustrate that the state of research demands empirical verification of an array of unbacked claims, and that a particular focus on qualitative user-oriented research is essential. Through our outlined taxonomy and overview of the state of research, we identify research gaps and challenges and provide numerous suggestions for further research.

Survey on Computational Trust and Reputation models

Over recent years, computational trust and reputation models have become an invaluable method for improving computer-computer and human-computer interaction. As a result, a considerable amount of research has been published trying to solve open problems and improve existing models. This survey brings additional structure to the research conducted so far. After introducing the underlying major concepts, a new integrated review and analysis scheme for reputation and trust models is proposed. Using highly recognized review papers in this domain as a basis, this paper also introduces additional evaluation metrics to account for characteristics so far unstudied. A subsequent application of the new review scheme to 22 recent publications in this field revealed interesting insights. While computational trust and reputation models remain a very active research branch, the analysis shows that some parts have already started to converge, whereas other elements are still subject to vivid discussion.

Machine Learning for Survival Analysis: A Survey

Accurately predicting the occurrence time of the event of interest is a critical problem in longitudinal data analysis. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after some time point, or that did not experience any event during the monitoring period. Such a phenomenon is called censoring, and it is typically handled using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to effectively overcome this censoring issue. In addition, many machine learning algorithms have been adapted to handle survival data and tackle other challenging problems that arise from real-world data. In this survey, we provide a comprehensive and structured review of the representative statistical methods along with the machine learning techniques used in survival analysis, and give a well-organized, comprehensive taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and illustrate several successful applications in various real-world fields. We hope that this paper will provide a better understanding of recent advances in survival analysis and offer some guidelines on how to apply these approaches to new problems that arise in applications with censored data.
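The baseline statistical tool for the censoring problem described above is the Kaplan-Meier product-limit estimator: survival probability is the running product of (1 - deaths/at-risk) over observed event times, with censored instances simply leaving the risk set. The sketch below assumes distinct event times for simplicity (real implementations group ties); the tiny dataset is made up, with event=1 meaning observed and 0 meaning censored.

```python
# Kaplan-Meier product-limit estimator (simplified: distinct times).
# Censored subjects reduce the risk set but contribute no factor.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each observed event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s, curve = 1.0, []
    for i in order:
        if events[i] == 1:                 # observed event
            s *= 1.0 - 1.0 / at_risk       # product-limit factor
            curve.append((times[i], s))
        at_risk -= 1                       # leaves risk set either way
    return curve

# Three subjects: events at t=1 and t=3, one censored at t=2.
curve = kaplan_meier([1.0, 2.0, 3.0], [1, 0, 1])
```

Notice that the censored subject at t=2 never produces a drop in the curve, yet its removal from the risk set makes the final event at t=3 drop the estimate all the way to zero, which is exactly the asymmetry that censoring-aware methods exploit.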

A Survey of Petri nets Slicing

Petri net slicing is a technique that aims to improve the verification of systems modeled as Petri nets. It was first developed to facilitate debugging, but was then used to alleviate the state space explosion problem in the model checking of Petri nets. In this article, different slicing techniques are studied along with their algorithms, introducing: i) a classification of Petri net slicing algorithms based on their construction methodology and objective (such as improving state space analysis or testing); ii) a qualitative and quantitative discussion and comparison of major differences such as accuracy and efficiency; iii) a syntactic unification of slicing algorithms that improve state space analysis, for easy and clear understanding; and iv) applications of slicing from multiple perspectives. Furthermore, some recent improvements to slicing algorithms are presented, which can reduce the slice size even for strongly connected nets. A noteworthy use of this survey is for the selection and improvement of slicing techniques for optimizing the verification of state event models.

A Taxonomy and Future Directions for Sustainable Cloud Computing: 360 Degree View

The cloud computing paradigm offers on-demand services over the Internet and supports a wide variety of applications. With the recent growth of Internet of Things (IoT) based applications, the usage of cloud services is increasing exponentially. The next generation of cloud computing must be energy-efficient and sustainable to fulfil dynamically changing end-user requirements. Presently, cloud providers face challenges in ensuring the energy efficiency and sustainability of their services. The use of a large number of cloud datacenters increases cost as well as carbon footprint, which further affects the sustainability of cloud services. In this paper, we propose a comprehensive taxonomy of sustainable cloud computing. The taxonomy is used to investigate the existing techniques for sustainability that need careful attention and investigation, as proposed by several academic and industry groups. Further, current research on sustainable cloud computing is organized into several categories: application design, energy management, renewable energy, thermal-aware scheduling, virtualization, capacity planning and waste heat utilization. Existing techniques are compared and categorized based on common characteristics and properties. A conceptual model for sustainable cloud computing is proposed, along with a discussion of future research directions.

A Manifesto for Future Generation Cloud Computing: Research Directions for the Next Decade

The Cloud computing paradigm has revolutionized the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

As technology becomes more advanced, those who design, use, and are otherwise affected by it want to know that it will perform correctly, understand why it does what it does, and know how to use it appropriately. In essence, they want to be able to trust the systems being designed. In this survey we present assurances, the means by which users can understand how to trust autonomous systems. Trust between humans and autonomy is reviewed, and the implications for the design of assurances are highlighted. A survey of existing research related to assurances is presented. Much of the surveyed research originates from fields such as interpretable, comprehensible, transparent, and explainable machine learning, as well as human-computer interaction and e-commerce. Several key ideas are extracted from this work in order to refine the definition of assurances. The design of assurances is found to be highly dependent not only on the capabilities of the autonomous system, but also on the characteristics of the human user and the appropriate trust-related behaviors. Several directions for future research are identified and discussed.

A Perspective Analysis of Handwritten Signature Technology

Handwritten signatures are biometric traits increasingly at the centre of debate in the scientific community. Over the last forty years, interest in signature studies has grown steadily, with automatic signature verification as its main application, as the reviews published in 1989, 2000, and 2008 bear witness. Over the last ten years in particular, handwritten signature technology has evolved strongly, and much research has focused on applying systems based on handwritten signature analysis and processing to a multitude of new fields. After several years of haphazard growth in this research area, it is time to assess its current developments and their applicability in order to draw a structured way forward. This perspective reports a systematic review of the last ten years of the literature on handwritten signatures with respect to this new scenario, focusing on the most promising domains of research and trying to elicit possible future research directions in this subject.

Formal Approaches to Secure Compilation

Secure compilation is a discipline aimed at developing compilers that preserve the security properties of the source programs they take as input in the target programs they produce as output. This discipline is broad in scope, targeting languages with a variety of features (including objects, higher-order functions, dynamic memory allocation, call/cc, concurrency) and employing a range of different techniques to ensure that source-level security is preserved at the target level. This paper provides a survey of the existing literature on formal approaches to secure compilation with a focus on those that prove fully abstract compilation, which has been the criterion adopted by much of the literature thus far. This paper then describes the formal techniques employed to prove secure compilation in existing work, introducing relevant terminology, and discussing the merits and limitations of each work. Finally, this paper discusses open challenges and possible directions for future work in secure compilation.
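The fully abstract compilation criterion named above has a compact standard statement; the notation here is a conventional rendering, not quoted from the paper. A compiler preserves and reflects contextual equivalence between source and target:

```latex
% A compiler [[.]] from a source language S to a target language T is
% fully abstract iff, for all source programs P1, P2, contextual
% equivalence is both preserved and reflected:
\forall P_1, P_2.\;
  P_1 \simeq^{\mathrm{ctx}}_{S} P_2
  \iff
  \llbracket P_1 \rrbracket \simeq^{\mathrm{ctx}}_{T} \llbracket P_2 \rrbracket
```

The right-to-left direction is the security-relevant one: no target-level context (attacker) can distinguish compiled programs that source-level contexts cannot distinguish.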

Recent Developments in Cartesian Genetic Programming and Its Variants

Cartesian Genetic Programming (CGP) is a variant of Genetic Programming with several advantages. Over the last decade and a half, CGP has been extended into several other forms with promising advantages and applications. This paper formally discusses the classical form of CGP and the six variants proposed so far, which include Embedded CGP, Self-Modifying CGP, Recurrent CGP, Mixed-Type CGP, Balanced CGP, and Differential CGP. The paper also compares these variants in terms of population representation, constraints on representation, the operators and functions applied, and the algorithms used. Finally, future work directions and open problems in the area are discussed.
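The classical CGP genotype mentioned above can be illustrated with a minimal sketch. This is not taken from the survey; the function set, genome layout, and mutation scheme below are illustrative assumptions, but they follow the classical scheme: a genome of integer gene triples indexing a fixed function set and earlier columns, plus point mutation.

```python
import random

# Illustrative function set (an assumption, not the survey's):
FUNCS = [lambda a, b: a + b,
         lambda a, b: a - b,
         lambda a, b: a * b]

def evaluate(genome, output_gene, inputs):
    """Decode a classical CGP genome: each gene is (func_idx, src_a, src_b),
    where sources index the program inputs followed by earlier nodes."""
    values = list(inputs)
    for f_idx, a, b in genome:
        values.append(FUNCS[f_idx](values[a], values[b]))
    return values[output_gene]

def point_mutate(genome, n_inputs, rng):
    """CGP's standard point mutation: change one allele of one gene,
    keeping connections restricted to earlier columns (acyclic)."""
    genome = [list(g) for g in genome]
    i = rng.randrange(len(genome))
    j = rng.randrange(3)
    if j == 0:
        genome[i][j] = rng.randrange(len(FUNCS))       # new function
    else:
        genome[i][j] = rng.randrange(n_inputs + i)     # new source
    return [tuple(g) for g in genome]

# Genome encoding (x0 + x1) * x0:
#   node 2 = add(input 0, input 1), node 3 = mul(node 2, input 0)
genome = [(0, 0, 1), (2, 2, 0)]
print(evaluate(genome, 3, [3, 4]))  # (3 + 4) * 3 = 21
```

Variants such as Recurrent CGP relax the acyclicity constraint in `point_mutate`, while Self-Modifying CGP adds functions that rewrite the genome itself during evaluation.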

A Survey of Communication Performance Models for High Performance Computing

This survey presents the state of the art in analytic communication performance models, providing sufficiently detailed descriptions of particularly noteworthy efforts. Modeling the cost of communication in computer clusters is an important and challenging problem. It provides insight into the design of the communication patterns of parallel scientific applications and mathematical kernels, and sets clear ground for optimizing their deployment on increasingly complex HPC infrastructure. The survey provides background on how different performance models represent the underlying platform and shows how these models have evolved over time, from early clusters of single-core processors to present-day multi-core and heterogeneous platforms. Promising directions for future research in analytic communication performance modeling conclude the survey.
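Two classic point-to-point models give a flavour of the analytic models such surveys cover; both are textbook material rather than quotations from this survey. Hockney's model charges a fixed latency plus a per-byte cost, while LogGP refines it with separate parameters for latency, CPU overhead, and a per-byte gap:

```latex
% Hockney model: latency \alpha plus per-byte cost \beta for an m-byte message
T(m) = \alpha + \beta\, m

% LogGP model of the same transfer: network latency L, per-message CPU
% overhead o at sender and receiver, and per-byte gap G for long messages
T(m) = L + 2o + (m - 1)\, G
```

Fitting the parameters ($\alpha$, $\beta$ or $L$, $o$, $G$) to measurements on a given platform is what lets such models predict the cost of a communication pattern before deploying it.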

STRAM: Measuring the Trustworthiness of Computer-based Systems

Various system metrics have been proposed for measuring the quality of computer-based systems, such as dependability and security metrics for estimating their performance and security characteristics. As computer-based systems grow in complexity, with many sub-systems or components, measuring their quality in multiple dimensions is a challenging task. This work tackles the problem of measuring the quality of computer-based systems based on four key attributes of trustworthiness: security, trust, resilience, and agility. In particular, we propose a system-level trustworthiness metric framework that accommodates these four submetrics, called STRAM (Security, Trust, Resilience, and Agility Metrics). The proposed STRAM framework offers a hierarchical ontology structure in which each submetric is defined as a sub-ontology. Moreover, this work proposes developing and incorporating metrics describing key assessment tools, including vulnerability assessment, risk assessment, and red teaming, to provide additional evidence for the measurement and quality of trustworthy systems. We further discuss how assessment tools relate to measuring the quality of computer-based systems, and the limitations of state-of-the-art metrics and measurements. Finally, we suggest future research directions for system-level metrics research towards measuring fundamental attributes of the quality of computer-based systems and improving current metric and measurement methodologies.

Machine Learning in Network Centrality Measures: Tutorial and Outlook

Complex networks are ubiquitous across several Computer Science domains. Centrality measures are an important analysis mechanism for uncovering vital elements of complex networks. However, these metrics have high computational costs and requirements that hinder their application in large real-world networks. In this tutorial, we explain how neural network learning algorithms can make the application of these metrics feasible in complex networks of arbitrary size. The tutorial also describes how to identify the best configuration for neural network training and learning for such tasks, and presents an easy way to generate and acquire training data. We do so by means of a general methodology, using complex network models adaptable to any application. We show that a regression model generated by the neural network successfully approximates the metric values and is therefore a robust, effective alternative in real-world applications. The methodology and the proposed machine learning model use only a fraction of the time required by other approximation algorithms, which is crucial in complex network applications.
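The core idea of regressing an expensive centrality from cheaply generated training data can be sketched as follows. This is a minimal illustrative example, not the authors' method: it uses degree as the only feature, exact closeness centrality (via BFS) as the label, synthetic Erdős–Rényi graphs as the training model family, and ordinary least squares in place of a neural network.

```python
import random
from collections import deque

def random_graph(n, p, rng):
    """Erdős–Rényi graph G(n, p) as an adjacency dict."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def closeness(adj, s):
    """Exact closeness of s: BFS distances, per-component convention."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

rng = random.Random(42)
xs, ys = [], []
for _ in range(20):                        # training graphs from one model family
    g = random_graph(30, 0.15, rng)
    for v in g:
        xs.append(len(g[v]))               # cheap feature: degree
        ys.append(closeness(g, v))         # expensive label: exact closeness
a, b = fit_line(xs, ys)

test_g = random_graph(30, 0.15, rng)       # unseen graph, same family
err = sum(abs(a * len(test_g[v]) + b - closeness(test_g, v))
          for v in test_g) / 30
print(f"mean absolute error of degree->closeness regression: {err:.3f}")
```

The point of the sketch is the workflow the tutorial advocates: generate graphs from a model adaptable to the application, compute exact centralities on those small instances as labels, fit a regressor, and then predict centrality on large graphs at a fraction of the exact algorithm's cost.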

Cloud Brokerage: A Systematic Survey

Background: The proliferation of cloud providers and provisioning levels has opened a space for cloud brokerage services. Brokers intermediate between cloud customers and providers to assist the customer in selecting the most suitable cloud service, helping to manage the dimensionality, heterogeneity, and uncertainty associated with cloud services. Objective: This paper identifies and classifies approaches to realise cloud brokerage. In doing so, it presents an understanding of the state of the art and a novel taxonomy to characterise cloud brokers. Method: We conducted a systematic literature survey to compile studies related to cloud brokerage and explore how cloud brokers are engineered. We analysed the studies from multiple perspectives, such as motivation, functionality, engineering approach, and evaluation methodology. Results: The survey produced a knowledge base of current proposals for realising cloud brokers, and identified surprising differences between the studies' implementations, with engineering efforts directed at combinations of market-based solutions, middlewares, toolkits, algorithms, semantic frameworks, and conceptual frameworks. Conclusion: Our comprehensive meta-analysis shows that cloud brokerage is still a formative field. Progress has undoubtedly been achieved, but considerable challenges remain to be addressed. This survey identifies such challenges and directions for future research.

Knee articular cartilage segmentation from MR images: A review

Articular cartilage (AC) is flexible, soft yet stiff tissue that can be visualized and interpreted using magnetic resonance imaging (MRI) for the assessment of knee osteoarthritis (OA). Segmentation of AC from MR images is a challenging task that has been investigated widely. The development of computational methods to segment AC depends heavily on various image parameters, image quality, tissue structure, and the acquisition protocol involved. This review focuses on the challenges faced during AC segmentation from MR images, followed by a discussion of computational methods for semi- and fully automated approaches, and also explores performance parameters and their significance. Furthermore, hybrid approaches used to segment AC are reviewed. The review indicates that despite the challenges in AC segmentation, semi-automated methods utilizing advanced computational techniques such as active contours and clustering have shown significant accuracy. Fully automated AC segmentation methods have obtained moderate accuracy and show suitability for extensive clinical studies, while advanced methods under investigation have achieved significantly better sensitivity. In conclusion, this review indicates that research in AC segmentation from MR images is moving towards the development of fully automated methods using advanced multi-level, multi-data, and multi-approach techniques to provide assistance in clinical studies.
