We survey foundational features underlying modern graph query languages. We first discuss two popular graph data models: edge-labelled graphs, where nodes are connected to other nodes by directed, labelled edges; and property graphs, where nodes and edges can have attributes. Next we discuss the two most basic graph querying functionalities: graph patterns and navigational expressions. We start with graph patterns, in which a graph-structured query is matched against the data. Thereafter we discuss navigational expressions, in which patterns can be matched recursively against the graph to navigate paths of arbitrary length; we give an overview of what kinds of expressions have been proposed, and how such expressions can be combined with graph patterns. We also discuss a variety of semantics under which queries using these features can be evaluated, examine the effects that additional features and the choice of semantics have on complexity, and offer examples of these features in three modern languages that can be used to query graphs: SPARQL, Cypher and Gremlin. We conclude with a discussion of the importance of formalisation for graph query languages, as well as possible future directions in which such languages can be extended.
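To make the two functionalities concrete, the following minimal Python sketch, not tied to any particular engine (the graph, node and label names are invented), evaluates a one-edge graph pattern and a navigational expression, the transitive closure of an edge label, over a small edge-labelled graph:

```python
from collections import deque

# A toy edge-labelled graph: a set of (source, label, target) triples.
graph = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("carol", "worksAt", "acme"),
}

def match_pattern(g, label):
    """Graph pattern: all (x, y) pairs connected by one `label` edge,
    i.e. the matches of the pattern (?x)-[label]->(?y)."""
    return {(s, t) for (s, l, t) in g if l == label}

def navigate(g, start, label):
    """Navigational expression: all nodes reachable from `start` via one
    or more `label` edges -- the semantics of a transitive property path
    (SPARQL: `?x :knows+ ?y`; Cypher: `(x)-[:knows*]->(y)`)."""
    seen, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for (s, l, t) in g:
            if s == node and l == label and t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

print(match_pattern(graph, "knows"))      # {('alice', 'bob'), ('bob', 'carol')}
print(navigate(graph, "alice", "knows"))  # {'bob', 'carol'}
```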
Automated Vehicle Classification (AVC) based on vision sensors has received active attention from researchers, due to heightened security concerns in Intelligent Transportation Systems. In this work, we propose a categorization of AVC studies based on the granularity of classification, namely Vehicle Type Recognition (VTR), Vehicle Make Recognition (VMR) and Vehicle Make and Model Recognition (VMMR). For each category of AVC systems, we present a comprehensive review and comparison of feature extraction, global representation, and classification techniques. The various datasets proposed over the years for AVC are also compared in light of the real-world challenges they represent, and those they do not. The major challenges involved in each category of AVC systems are presented, highlighting open problems in this area of research. Finally, we conclude by providing future directions of research in this area, paving the way towards efficient large-scale AVC systems. This survey should help researchers interested in the area to analyze the work completed so far in each category of AVC, focusing on the techniques proposed for each module, and to chalk out strategies for advancing the state of the art.
During a processor development cycle, validation is performed on the first fabricated chip to detect and fix design errors. Design errors due to functional issues occur when a unit in a design does not meet its specification, and their chances of occurrence are high when new features are added to a processor. The task of verifying functionality, both independently per unit and in coordination with other units, therefore grows for multicore architectures. Several new techniques are being proposed in the field of functional validation. In this paper, we undertake a survey of these techniques to identify areas that need to be addressed for multicore designs. We start with an analysis of design errors in two multicore architectures. We then survey different functional validation techniques based on hardware, software and formal methods, and propose a comprehensive taxonomy for each of these approaches. We also perform a critical analysis to identify gaps in existing research and propose new research directions for the validation of multicore architectures.
Making cities smarter helps improve city services and increase citizens' quality of life. Information and communication technologies (ICT) are fundamental for progressing towards smarter city environments. Smart City software platforms potentially support the development and integration of Smart City applications. However, the ICT community must overcome current significant technological and scientific challenges before these platforms can be widely used. This paper surveys the state of the art in software platforms for Smart Cities. We analyzed 23 projects with respect to the most used enabling technologies, as well as functional and non-functional requirements, classifying them into four categories: Cyber-Physical Systems, Internet of Things, Big Data, and Cloud Computing. Based on these results, we derived a reference architecture to guide the development of next-generation software platforms for Smart Cities. Finally, we enumerated the most frequently cited open research challenges, and discussed future opportunities. This survey provides important references to help application developers, city managers, system operators, end-users, and Smart City researchers make project, investment, and research decisions.
An enormous amount of research has been conducted in the area of positioning systems, calling for a detailed literature review of recent localization systems. This paper focuses on recent developments in non-Global Positioning System (GPS) localization/positioning systems. We present a new hierarchical method to classify various positioning systems, along with a comprehensive performance comparison of the techniques and technologies against multiple performance metrics, including their limitations. A few indoor positioning systems that have emerged as more successful than others in particular application environments are presented at the end.
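As a flavour of the range-based techniques many such systems build on, here is a minimal Python sketch of 2D trilateration from three anchors with exact ranges (anchor positions and distances are invented); real systems must cope with noisy range estimates, typically via least squares over many anchors:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve the linear system obtained by subtracting the first range
    equation (x - xi)^2 + (y - yi)^2 = di^2 from the other two."""
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    bx, by = 2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])
    c1 = d1**2 - d2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    c2 = d1**2 - d3**2 + p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2
    det = ax * by - ay * bx           # anchors must not be collinear
    return ((c1 * by - c2 * ay) / det, (ax * c2 - bx * c1) / det)

# Anchors at (0,0), (4,0), (0,4); ranges measured from the point (1, 2).
print(trilaterate((0, 0), 5**0.5, (4, 0), 13**0.5, (0, 4), 5**0.5))  # (1.0, 2.0)
```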
Cyber risk management largely reduces to a race for information between defenders and attackers. Defenders can gain advantage in this race by sharing cyber risk information with each other. Yet, defenders often share less than is socially desirable, because sharing decisions are guided by selfish rather than altruistic reasons. A growing line of research studies these strategic aspects that drive defenders' sharing decisions. The present survey systematizes these works in a novel framework. It provides a consolidated understanding of defenders' strategies to privately or publicly share information, and enables us to distill trends in the literature and identify future research directions. The review also reveals that many theoretical works assume cyber risk information sharing to be beneficial, while corresponding empirical validations are missing.
A hacker is unlikely to be able to compromise sensitive data that is stored in encrypted form. However, when data is to be processed, it has to be decrypted, becoming vulnerable to attacks. Homomorphic encryption fixes this vulnerability by allowing one to compute directly on encrypted data. In this survey, both previous and current Somewhat Homomorphic Encryption (SHE) schemes are reviewed, and the more powerful and recent Fully Homomorphic Encryption (FHE) schemes are comprehensively studied. The concepts that support these schemes are presented, and their performance and security are analyzed from an engineering standpoint.
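As a toy illustration of the homomorphic property, and nothing more (textbook RSA with tiny, insecure, hardcoded parameters; it is not one of the SHE/FHE schemes reviewed here), the sketch below shows that multiplying ciphertexts corresponds to multiplying plaintexts:

```python
# Toy textbook RSA: Enc(m1) * Enc(m2) = Enc(m1 * m2)  (mod n).
p, q = 61, 53
n = p * q                  # 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17
d = pow(e, -1, phi)        # modular inverse (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 12
c_product = (enc(m1) * enc(m2)) % n    # computed on ciphertexts only
assert dec(c_product) == (m1 * m2) % n
print(dec(c_product))                  # 84
```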
The Experience Sampling Method (ESM) is used by scientists from various disciplines to gather insights into the intrapsychic elements of human life. Researchers have used the ESM in a wide variety of studies, with the method seeing increased popularity. Mobile technologies have enabled new possibilities for the use of the ESM, while simultaneously leading to new conceptual, methodological, and technological challenges. In this survey, we provide an overview of the history of the ESM, usage of this methodology in the computer science discipline, as well as its evolution over time. Next, we identify and discuss important considerations for ESM studies on mobile devices, and analyse the particular methodological parameters scientists should consider in their study design. We reflect on the existing tools that support the ESM methodology and discuss the future development of such tools. Finally, we discuss the effect of future technological developments on the use of the ESM and identify areas requiring further investigation.
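To illustrate one such methodological parameter, the sketch below generates a signal-contingent sampling schedule: a fixed number of random prompts per day within waking hours, separated by a minimum gap (all values are invented for illustration):

```python
import random

def daily_schedule(k=5, start=9 * 60, end=21 * 60, min_gap=60, seed=None):
    """Draw k random prompt times (minutes since midnight) in [start, end),
    rejection-sampling until all prompts are at least min_gap apart."""
    rng = random.Random(seed)
    while True:
        times = sorted(rng.randrange(start, end) for _ in range(k))
        if all(b - a >= min_gap for a, b in zip(times, times[1:])):
            return [f"{t // 60:02d}:{t % 60:02d}" for t in times]

print(daily_schedule(seed=7))   # e.g. five prompt times between 09:00 and 21:00
```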
Ray tracing has long been considered the next-generation technology for graphics rendering. Recent years have witnessed strong momentum to adopt ray tracing based rendering techniques on consumer-level platforms, due to the inability to further enhance user experience by increasing display resolution. On the other hand, the computing workload of ray tracing is still overwhelming: a 10-fold performance gap has to be narrowed for real-time applications, even on the latest graphics processing units (GPUs). As a result, hardware acceleration techniques are critical to deliver a satisfying level of performance while meeting an acceptable power budget. A large body of research on ray tracing hardware has appeared over the past decade. This paper aims to provide a timely survey of hardware techniques to accelerate the ray tracing algorithm. A quantitative profiling of the ray tracing workload is first presented. We then review hardware techniques for the main functional blocks in a ray tracing pipeline. On that basis, ray tracing microarchitectures for both ASICs and programmable processors are surveyed, following a systematic taxonomy.
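As a flavour of what those functional blocks compute, here is a minimal Python sketch of the ray/axis-aligned-bounding-box slab test at the core of bounding volume hierarchy traversal, the operation dedicated traversal units accelerate (scene values invented; a hardware pipeline would evaluate the three axes in parallel, often in reduced precision):

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray origin + t*dir hit the box for some t >= 0?
    `inv_dir` is the componentwise reciprocal of the ray direction,
    precomputed once per ray."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

origin = (0.0, 0.0, 0.0)
direction = (1.0, 1.0, 0.5)
inv_dir = tuple(1.0 / d for d in direction)
print(ray_aabb_hit(origin, inv_dir, (1.0, 1.0, 0.0), (2.0, 2.0, 2.0)))  # True
```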
The aim of this article is to provide an understanding of social networks as a useful addition to the standard tool-box of techniques used by system designers. To this end, we give examples of how data about social links have been collected and used in different application contexts. We develop a broad taxonomy-based overview of common properties of social networks, review how they might be used in different applications, and point out potential pitfalls where appropriate. We propose a framework distinguishing between two main types of social network-based user selection: personalised user selection, which identifies target users who may be relevant for a given source node, using the social network around the source as a context; and generic user selection, or group delimitation, which filters for a set of users who satisfy a set of application requirements based on their social properties. Using this framework, we survey applications of social networks in three typical kinds of application scenarios: recommender systems, content-sharing systems (e.g., P2P or video streaming), and systems which defend against users who abuse the system (e.g., spam or sybil attacks). In each case, we discuss potential directions for future research that involve using social network properties.
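The sketch below illustrates the two selection types of this framework under deliberately simple, assumed criteria (a toy graph, k-hop reachability for personalised selection, and a degree threshold for generic selection):

```python
from collections import deque

# Toy undirected social graph as an adjacency dict (names invented).
friends = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "dan"},
    "cat": {"ann"},
    "dan": {"bob", "eve"},
    "eve": {"dan"},
}

def personalised_selection(g, source, k):
    """Personalised selection: users within k hops of `source`,
    using the network around the source as context."""
    dist, frontier = {source: 0}, deque([source])
    while frontier:
        u = frontier.popleft()
        if dist[u] == k:
            continue
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return {u for u, d in dist.items() if 0 < d <= k}

def generic_selection(g, min_degree):
    """Generic selection / group delimitation: users whose social
    properties (here simply degree) meet application requirements."""
    return {u for u, nbrs in g.items() if len(nbrs) >= min_degree}

print(personalised_selection(friends, "ann", 2))  # {'bob', 'cat', 'dan'}
print(generic_selection(friends, 2))              # {'ann', 'bob', 'dan'}
```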
The expressiveness of programming languages is limited by their paradigms, which focus on solving abstraction problems without considering the expressiveness of abstractions described in natural language. Authors have therefore developed tools for natural language software development. In this paper, we review tools that employ natural language at some level, as well as domain-specific languages whose expressiveness approaches that of natural language. The goal of the paper is to present this review and highlight the problems that have been solved and those left aside. We also note that a naturalistic language based on a model has not yet been reported.
Nano-crossbar arrays have emerged as a promising and viable technology to improve computing performance of electronic circuits beyond the limits of current CMOS. Arrays offer both structural efficiency through reconfiguration and the prospective capability of integration with different technologies. However, certain problems need to be addressed, the most important being the prevailing occurrence of faults. Considering fault rate projections as high as 20%, much higher than those of CMOS, it is fair to expect sophisticated fault tolerance methods. The focus of this survey is the assessment and evaluation of these methods and the related algorithms applied in logic mapping and configuration processes. To start, we concisely explain reconfigurable nano-crossbar arrays with their fault characteristics and models. Following that, we demonstrate configuration techniques for the arrays in the presence of permanent faults and elaborate on two main fault tolerance methodologies, namely defect-unaware and defect-aware approaches, with a short review of their advantages and disadvantages. Next, we overview fault tolerance approaches for transient faults. In the experimental results section, we give detailed results for the algorithms regarding their strengths and weaknesses, with a comprehensive yield, success rate, and runtime analysis. To conclude, we overview the proposed algorithms along with future directions and upcoming challenges.
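To make the defect-aware idea concrete, here is a minimal Python sketch (crossbar size, defect map, and logic functions are all invented) that assigns logic functions to physical crossbar rows so that no required crosspoint lands on a defect; it brute-forces row permutations, whereas the surveyed algorithms use matching and heuristic search to scale:

```python
import itertools

defects = {(0, 1), (2, 2)}           # stuck-open crosspoints as (row, col)
functions = [{0, 1}, {2}, {0, 2}]    # required columns per logic function

def defect_aware_map(functions, n_rows, defects):
    """Return {function index: physical row} such that every required
    crosspoint of each function is defect-free, or None if impossible."""
    for perm in itertools.permutations(range(n_rows), len(functions)):
        if all(not any((row, col) in defects for col in need)
               for row, need in zip(perm, functions)):
            return dict(zip(range(len(functions)), perm))
    return None

print(defect_aware_map(functions, 3, defects))  # {0: 2, 1: 0, 2: 1}
```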
Locality of information is a major concern in the design of distributed algorithms. Theoretical research has established a common model of locality, the LOCAL model, but it has gained little practical relevance. As a result, practical research de facto lacks any common locality model; the only common denominator among practitioners is that a local algorithm is distributed with a limited scope of interaction. This paper closes this gap by introducing four practically motivated classes of locality that successively weaken the strict requirements of the LOCAL model. These classes are applied to categorize and survey 32 local algorithms from nine different application domains. A detailed comparison shows the practicality of the classification and provides interesting insights: for example, the majority of algorithms limit the scope of interaction to at most two hops, independent of their locality class. Moreover, the application domain of an algorithm tends to influence its degree of locality.
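For reference, here is a minimal Python sketch of the synchronous LOCAL model that these classes weaken: in each round every node sends its state to all neighbours, so after k rounds a node's knowledge is exactly its k-hop neighbourhood (the graph is invented):

```python
# Path graph 1 - 2 - 3 - 4 as an adjacency dict.
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}

def local_rounds(g, k):
    """Simulate k synchronous LOCAL rounds where each node forwards
    everything it knows; returns each node's accumulated knowledge."""
    known = {v: {v} for v in g}           # round 0: each node knows itself
    for _ in range(k):
        inbox = {v: set() for v in g}
        for v in g:                        # "send" phase
            for u in g[v]:
                inbox[u] |= known[v]
        for v in g:                        # "receive" phase
            known[v] |= inbox[v]
    return known

print(local_rounds(graph, 2)[1])  # {1, 2, 3}: node 1's 2-hop neighbourhood
```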
Complex Event Recognition applications exhibit various types of uncertainty, ranging from incomplete and erroneous data streams to imperfect complex event patterns. We review Complex Event Recognition techniques that handle, to some extent, uncertainty. We examine techniques based on automata, probabilistic graphical models and first-order logic, which are the most common ones, and approaches based on Petri Nets and Grammars, which are less frequently used. A number of limitations are identified with respect to the employed languages, their probabilistic models and their performance, as compared to the purely deterministic cases. Based on those limitations, we highlight promising directions for future work.
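As a minimal illustration of the automaton-based family under data uncertainty, the sketch below (event types and probabilities are invented) propagates probability mass through the states of an automaton for the sequence pattern SEQ(A, B), assuming independent event occurrences:

```python
# Stream of uncertain events: (event type, probability of occurrence).
stream = [("A", 0.9), ("C", 1.0), ("B", 0.7)]

def seq_ab_probability(events):
    """Probability that an A occurred and was later followed by a B,
    tracked as mass over the automaton states start -> seen_a -> matched."""
    p_start, p_seen_a, p_matched = 1.0, 0.0, 0.0
    for etype, p in events:
        if etype == "A":
            # With probability p the event occurs: mass moves start -> seen_a.
            p_seen_a, p_start = p_seen_a + p_start * p, p_start * (1 - p)
        elif etype == "B":
            p_matched, p_seen_a = p_matched + p_seen_a * p, p_seen_a * (1 - p)
        # Other event types leave the automaton state unchanged.
    return p_matched

print(seq_ab_probability(stream))  # 0.63 = 0.9 * 0.7
```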
Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step for sentiment analysis, given the prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, automatic sarcasm detection has witnessed great interest from the sentiment analysis community. This paper is the first known compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pattern extraction to identify implicit sentiment, use of hashtag-based supervision, and incorporation of context beyond the target text. In this paper, we describe datasets, approaches, trends and issues in sarcasm detection. We also discuss representative performance values, shared tasks and pointers to future work, as given in prior works. In terms of resources to understand the state of the art, the survey presents several useful illustrations, most prominently a table that summarizes past papers along different dimensions such as features, annotation techniques, data forms, etc.
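To illustrate the second milestone, here is a minimal Python sketch of hashtag-based distant supervision (the tag list is indicative, not exhaustive): tweets are labelled by author-provided hashtags such as #sarcasm, which are then removed so a classifier cannot see the label.

```python
SARCASM_TAGS = {"#sarcasm", "#sarcastic", "#not"}

def distant_label(tweet):
    """Return (text with label hashtags stripped, 1 if sarcastic else 0)."""
    tokens = tweet.split()
    label = int(any(t.lower() in SARCASM_TAGS for t in tokens))
    text = " ".join(t for t in tokens if t.lower() not in SARCASM_TAGS)
    return text, label

print(distant_label("I just love Mondays #sarcasm"))  # ('I just love Mondays', 1)
```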
Detecting and analyzing dense groups or communities in social and information networks has attracted immense attention over the last decade due to its enormous applicability in different domains. Community detection is an ill-defined problem, as the nature of the communities is not known in advance. The problem is further complicated by the fact that communities emerge in the network in various forms: disjoint, overlapping, hierarchical, etc. Various heuristics have been proposed depending upon the application at hand. All these heuristics have been materialized in the form of new metrics, which in most cases are used as optimization functions for detecting the community structure, or provide an indication of the goodness of detected communities during evaluation. There is thus a need for an organized and detailed survey of the metrics proposed for community detection and evaluation. This paper presents a detailed discussion of the state-of-the-art metrics used for the detection and the evaluation of community structure. Finally, experiments are conducted on synthetic and real networks to present a comparative analysis of these metrics in measuring the goodness of the detected community structure.
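As a concrete example of such a metric, the sketch below computes Newman-Girvan modularity, probably the most widely used of these measures both as an optimization function and as a goodness indicator, Q = (1/2m) sum_ij [A_ij - k_i k_j / (2m)] delta(c_i, c_j), over an invented toy graph and partition:

```python
# Toy undirected graph (a triangle plus a separate edge) and a partition.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b"}

def modularity(edges, community):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            if community[i] != community[j]:
                continue
            a_ij = sum(1 for e in edges if e in ((i, j), (j, i)))
            q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

print(modularity(edges, community))  # 0.375 for this partition
```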
Algorithmic debugging is a technique proposed in 1982 by E.Y. Shapiro in the context of logic programming. This survey shows how the initial ideas have developed into a widespread debugging schema fitting many different programming paradigms, with applications beyond the field of program debugging. We describe the general framework and the main issues related to implementations in different programming paradigms, and discuss several proposed improvements and optimizations. We also review the main algorithmic debugger tools that have been implemented so far and compare their features. From this comparison, we compile a summary of desirable characteristics that should be considered when implementing future algorithmic debuggers.
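To fix the schema, here is a minimal Python sketch of algorithmic debugging (the execution tree, oracle answers, and buggy routine are all invented): an oracle is asked whether each node of an execution tree produced a valid result, and a node that is wrong while all of its children are right is reported as buggy. The sketch uses the simple top-down search strategy; Shapiro's divide-and-query instead reduces the number of questions by querying the node that splits the tree in half.

```python
class Node:
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

def debug(node, oracle):
    """Top-down search: return the buggy node, or None if `node` is valid."""
    if oracle(node):                 # correct result => subtree is cleared
        return None
    for child in node.children:
        buggy = debug(child, oracle)
        if buggy is not None:
            return buggy
    return node                      # wrong result, correct children: bug here

# Execution tree of a hypothetical buggy merge sort run.
tree = Node("sort([3,1,2])", "[1,3,2]", [
    Node("sort([1,2])", "[1,2]"),
    Node("merge([3],[1,2])", "[1,3,2]"),
])
expected = {"sort([3,1,2])": "[1,2,3]",
            "sort([1,2])": "[1,2]",
            "merge([3],[1,2])": "[1,2,3]"}
oracle = lambda n: expected[n.call] == n.result

print(debug(tree, oracle).call)  # merge([3],[1,2]) -- the buggy routine
```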
This article presents an annotated bibliography on automatic software repair. Automatic software repair consists of automatically finding a solution to software bugs, without human intervention. The uniqueness of this article is that it spans the research communities that contribute to this body of knowledge: software engineering, dependability, operating systems, programming languages and security. Furthermore, it provides a novel and structured overview of the diversity of bug oracles and repair operators used in the literature.
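As a minimal illustration of the generate-and-validate approach that recurs across these communities, the sketch below uses a failing test suite as the bug oracle and a deliberately tiny space of repair operators, swapping comparison operators, to fix an invented one-line bug:

```python
import operator

def make_max(cmp):
    """A two-argument max whose comparison operator is a parameter."""
    return lambda a, b: a if cmp(a, b) else b

tests = [((3, 5), 5), ((2, 1), 2), ((4, 4), 4)]

def passes(candidate):
    return all(candidate(*args) == want for args, want in tests)

buggy = make_max(operator.lt)   # bug: returns the *smaller* value
assert not passes(buggy)        # the test suite (bug oracle) catches it

# Repair operators: try each comparison operator until all tests pass.
repair_operators = [operator.le, operator.lt, operator.ge, operator.gt]
fixed = next(make_max(op) for op in repair_operators if passes(make_max(op)))
print(fixed(3, 5))              # 5: the '>=' candidate passes every test
```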
The Smart Home concept integrates smart applications into daily human life. In recent years, Smart Homes have faced growing security and management challenges, due to the low capacity of small sensors, multiple connections to the Internet for efficient applications (use of big data and cloud computing), and the heterogeneity of home systems, which forces inexpert users to configure devices and micro-systems. This paper presents current security and management approaches in Smart Homes and describes the good practices the market has adopted for developing secure in-home systems. Finally, we propose future solutions for efficiently and securely managing Smart Homes.
Vehicular networks and their associated technologies enable an extremely varied plethora of applications and therefore attract increasing attention from a wide audience. However, vehicular networks also face many challenges that arise mainly from their dynamic and complex environment. Fuzzy Logic, known for its ability to deal with complexity and imprecision and to model non-deterministic problems, is a very promising technology for use in such a dynamic and complex context. This paper presents the first comprehensive survey of research on Fuzzy Logic approaches in the context of vehicular networks, and provides fundamental information that enables readers to design their own Fuzzy Logic systems in this context. As such, the paper describes Fuzzy Logic concepts with emphasis on their implementation in vehicular networks, includes a classification and thorough analysis of Fuzzy Logic-based solutions in vehicular networks, and discusses how Fuzzy Logic could empower the key research directions in 5G-enabled vehicular networks, the next generation of vehicular communications.
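As a minimal sketch of the basic fuzzy inference machinery (the variables and rule base below, vehicle speed and headway gap driving a braking decision, are invented to echo the vehicular setting), crisp inputs are fuzzified with triangular membership functions, rules fire with min, and a weighted average of rule outputs yields a crisp decision:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def braking_level(speed_kmh, gap_m):
    # Fuzzification (shoulders approximated by wide triangles).
    fast, slow = tri(speed_kmh, 40, 120, 200), tri(speed_kmh, -120, 0, 80)
    near, far = tri(gap_m, -50, 0, 50), tri(gap_m, 10, 60, 110)
    # Rule base: IF fast AND near THEN brake=1.0; IF slow AND far THEN brake=0.0.
    rules = [(min(fast, near), 1.0), (min(slow, far), 0.0)]
    # Weighted-average (zero-order Sugeno) defuzzification.
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(braking_level(60, 30))  # 0.5: both rules fire with equal strength
```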
To program parallel systems efficiently and easily, a wide range of programming models has appeared, each making different choices concerning synchronization and communication between parallel entities. Among them, the actor model is based on loosely coupled parallel entities that communicate through asynchronous messages delivered to mailboxes. Some actor languages provide a strong integration with object-oriented concepts; they are often called active object languages. This paper reviews four major actor and active object languages and compares them along well-chosen dimensions that cover the programming paradigms and their implementation.
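To fix the core idea, here is a minimal Python sketch (not modelled on any of the surveyed languages, which provide this natively) of an actor owning a mailbox and a thread that processes one asynchronous message at a time:

```python
import threading
import queue

class Actor:
    """A loosely coupled entity: a mailbox plus a thread draining it."""
    def __init__(self, behaviour):
        self.mailbox = queue.Queue()
        self.behaviour = behaviour
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):        # asynchronous: returns immediately
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:     # poison pill to stop the actor
                break
            self.behaviour(message)

printer = Actor(lambda msg: print("got:", msg))
printer.send("hello")
printer.send("world")
printer.send(None)
```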
The task of quantification consists of providing an aggregate estimation (e.g., the class distribution in a classification problem) for unseen test sets, applying a model that is trained using a training set with a different data distribution. Several real-world applications demand methods of this kind, which do not require predictions for individual examples and focus instead on obtaining accurate estimates at an aggregate level. During the past few years, several quantification methods have been proposed from different perspectives and with different goals. This paper presents a unified review of the main approaches, with the aim of serving as an introductory tutorial for newcomers to the field.
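As a minimal illustration, the sketch below implements two baseline binary quantification methods commonly used as starting points in this literature: Classify & Count, and the Adjusted Count correction based on true- and false-positive rates estimated on training data (the predictions and rates are invented):

```python
def classify_and_count(predictions):
    """CC: estimated prevalence = fraction of positive predictions."""
    return sum(predictions) / len(predictions)

def adjusted_count(predictions, tpr, fpr):
    """ACC: correct CC for classifier bias, p = (p_cc - fpr) / (tpr - fpr)."""
    p_cc = classify_and_count(predictions)
    p = (p_cc - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))          # clip to a valid prevalence

preds = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]    # hard predictions on the test set
print(classify_and_count(preds))                # 0.4
print(adjusted_count(preds, tpr=0.8, fpr=0.1))  # (0.4 - 0.1) / 0.7 = 0.43
```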
Recently, multimedia researchers have added several so-called new media (e.g., olfaction, haptics and gustation) to the traditional multimedia components. The inclusion of such stimuli alongside traditional media components is typically labelled multiple sensorial media, or mulsemedia. Capturing multimedia users' perceived Quality of Experience (QoE) is already non-trivial, and the addition of multiple sensorial media components increases this challenge. No standardized methodology exists for conducting subjective quality assessments of multiple sensorial media applications. To date, researchers have employed different aspects of audiovisual standards to assess user QoE of multiple sensorial media applications, and thus a fragmented approach exists. In this paper, the authors highlight issues researchers face from numerous perspectives, including the applicability (or lack thereof) of existing audiovisual standards for evaluating user QoE, the lack of result comparability due to varying approaches, the specific requirements of olfactory-based multiple sensorial media applications, and the novelty associated with these applications. Finally, based on the diverse approaches in the literature and the collective experience of the authors, this paper provides a tutorial and recommendations on the key steps to conduct olfactory-based multiple sensorial media QoE evaluation.
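As one small, standard building block of such subjective evaluations, the sketch below computes a Mean Opinion Score over a 5-point rating scale with a normal-approximation 95% confidence interval (the ratings are invented):

```python
import statistics

ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]   # 5-point scores from one condition
mos = statistics.mean(ratings)
ci95 = 1.96 * statistics.stdev(ratings) / len(ratings) ** 0.5
print(f"MOS = {mos:.2f} +/- {ci95:.2f}")    # MOS = 3.80 +/- 0.57
```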
Modeling pedestrian dynamics and implementing such models in a computer are challenging and important issues in the knowledge areas of transportation and computer simulation. The aim of this paper is to provide a bibliographic outlook so that the reader can gain quick access to the most relevant works related to this problem. We have used three main axes to organise the paper contents: pedestrian models, validation techniques and multiscale approaches. The backbone of the paper is the classification of existing pedestrian models; we have organised the works in the literature into five categories, according to the techniques used for the operational level in each pedestrian model. Then, the main existing validation methods, oriented towards evaluating the behavioural quality of the simulation systems, are reviewed. Furthermore, we review the key issues that arise when facing multiscale pedestrian modeling, focusing first on the behavioural scale (combinations of micro and macro pedestrian models) and second on the scale size (from individuals to crowds). Finally, the paper concludes with a discussion of the contributions that different knowledge fields can make to this exciting area in the near future.
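As a minimal illustration of the force-based microscopic family that such classifications cover, the Python sketch below performs one Euler step of a social-force-style update: a driving term toward a goal plus exponential repulsion from nearby pedestrians (all parameter values are invented for illustration):

```python
import math

TAU, A, B, RADII = 0.5, 2.0, 0.3, 0.6  # relaxation time, repulsion, radii sum

def step(pos, vel, goal, others, v0=1.3, dt=0.1):
    """One Euler step for a pedestrian at `pos` with velocity `vel`."""
    to_goal = (goal[0] - pos[0], goal[1] - pos[1])
    dist = math.hypot(*to_goal) or 1e-9
    desired = (v0 * to_goal[0] / dist, v0 * to_goal[1] / dist)
    # Driving force: relax toward the desired velocity.
    fx = (desired[0] - vel[0]) / TAU
    fy = (desired[1] - vel[1]) / TAU
    # Repulsive social forces from the other pedestrians.
    for o in others:
        dx, dy = pos[0] - o[0], pos[1] - o[1]
        d = math.hypot(dx, dy) or 1e-9
        mag = A * math.exp((RADII - d) / B)
        fx += mag * dx / d
        fy += mag * dy / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel

pos, vel = step((0.0, 0.0), (0.0, 0.0), goal=(10.0, 0.0), others=[(1.0, 0.5)])
print(pos, vel)   # drifts toward the goal while veering away from the other
```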
Digital advances have transformed the face of automatic music generation since its beginnings at the dawn of computing. Despite the many breakthroughs, issues such as the musical tasks targeted by different machines and the degree to which they succeed remain open questions. We present a functional taxonomy of music generation systems, referencing existing systems according to the purposes for which they were designed. The taxonomy also reveals the inter-relatedness among the systems. This design-centred approach contrasts with predominant methods-based surveys, and facilitates the identification of grand challenges, setting the stage for new breakthroughs.