Visual object tracking has become an active computer vision research problem, and an increasing number of tracking algorithms have been proposed in recent years. Tracking is used in various real-world applications such as human-computer interaction, autonomous vehicles, robotics, and surveillance and security. In this study, we review the latest trends and advances in tracking algorithms and evaluate the robustness of different trackers based on their feature extraction methods. The first part of this work comprises a comprehensive survey of recently proposed trackers. We broadly categorize trackers into Correlation Filter based Trackers (CFTs) and Non-CFTs, and further classify each category into various types based on the architecture of the tracking mechanism. In the second part of this work, we experimentally evaluate 24 different trackers for robustness and compare handcrafted and deep feature based trackers, analyzing their performance over eleven different challenges. The relative ranking of the algorithms varies across challenges. Our study concludes that discriminative correlation filter (DCF) based trackers performed better than the others on each challenge. Our extensive experimental study over three benchmarks also reveals that including different types of regularization in the DCF formulation boosts tracker performance.
Thermal modeling and simulation have become imperative in recent years owing to the increased power density of high-performance microprocessors. Temperature is a first-order design criterion, and hence special consideration has to be given to it at every stage of the design process. If not properly accounted for, temperature can have disastrous effects on the performance of the chip, often leading to failure. To streamline research efforts, there is a strong need for a comprehensive survey of the techniques and tools available for thermal simulation. This will help new researchers entering the field quickly familiarize themselves with the state of the art, and enable existing researchers to further improve upon their proposed techniques. In this paper, we present a survey of the package-level thermal simulation techniques developed over the last two decades.
Negative sequential patterns (NSPs) can provide more informative and actionable knowledge than traditional positive sequential patterns (PSPs) by considering both occurring and non-occurring items, which arise in many applications. However, since research on negative sequence analysis (NSA) is still at an early stage, and NSP mining involves very high computational complexity and a very large search space, there is no widely accepted problem statement for NSP mining, and different constraint settings and negative containment definitions have been proposed. Moreover, although several NSP mining algorithms have been proposed, there are no general and systematic evaluation criteria available to assess them comprehensively. This paper conducts a comprehensive technical review of existing research on NSA. We explore and formalize a generic problem statement for NSA, investigate and compare the main definitions of constraints and negative containment, and compare existing NSP mining algorithms. Using a set of evaluation criteria drawn from multiple perspectives, we conduct theoretical and experimental analyses of existing NSP algorithms on typical datasets. Several new research opportunities are also outlined.
Smaller transistor sizes and reduced voltage levels in modern microprocessors induce higher soft error rates. This trend makes reliability a primary design constraint for computer systems. Redundant multithreading (RMT) exploits the parallelism of modern systems by employing thread-level time redundancy for fault detection and recovery. RMT detects faults by running identical copies of a program as separate threads in parallel execution units with identical inputs and comparing their outputs. In this article, we present a survey of RMT implementations at different architectural levels along with their design considerations. We explain the implementations in seminal papers and their extensions, and discuss the design choices each technique employs. We review both hardware and software approaches, present their main characteristics, and analyze the strengths and weaknesses of the different design choices. We also present a classification to help potential users find a method suitable for their requirements and to guide researchers planning to work in this area by providing insight into future trends.
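As an illustration of the comparison-based detection idea described above, the following Python sketch runs identical copies of a computation as parallel threads and compares their outputs. This is a software analogy of RMT, not any surveyed hardware design; `redundant_run` and its fault flag are hypothetical names.

```python
import threading

def redundant_run(fn, args, copies=2):
    """Run `copies` identical executions of `fn` in parallel threads
    with identical inputs; a mismatch among outputs signals a fault."""
    results = [None] * copies

    def worker(i):
        results[i] = fn(*args)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(copies)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Output comparison: all redundant copies must agree.
    fault_detected = any(r != results[0] for r in results[1:])
    return results[0], fault_detected

value, fault = redundant_run(sum, ([1, 2, 3],))
```

In real RMT designs the comparison happens on committed stores or register outputs rather than on a function's return value, and a mismatch triggers recovery (e.g., re-execution) instead of merely raising a flag.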
Although most human-technology interactions are still based on traditional desktop/mobile interfaces that involve primarily the visual and audio senses, in recent years we have witnessed progress towards multisensory experiences. Companies are proposing new additions to the multisensory world and are unveiling new products that promise to offer amazing experiences exploiting mulsemedia - multiple sensorial media - where users can perceive odors, tastes, and the sensation of wind blowing against their face. Whilst researchers, practitioners, and users alike are faced with a wide range of such new devices, relatively little work has been undertaken to summarize efforts and initiatives in this area. The current paper addresses this shortcoming in two ways: firstly, by presenting a survey of devices targeting senses beyond sight and hearing; secondly, by describing an approach to guide newcomers and experienced practitioners alike in building their own mulsemedia environment, both in a desktop setting and in an immersive 360° environment.
This paper reviews the use of machine learning (ML) approaches in smart building applications. We split existing solutions into two main categories: occupant-centric and energy/device-centric. The first groups solutions that use ML for aspects related to the occupants, including 1) occupancy estimation and identification, 2) activity recognition, and 3) estimation of preferences and behavior. The second groups solutions where ML is used to estimate aspects related to either energy or devices; these are divided into three sub-categories: 1) energy profiling and demand estimation, 2) appliance profiling and fault detection, and 3) inference on sensors. Solutions in each category are presented, compared, and discussed, together with open perspectives and research trends, and different classifications within each category are given to structure the presentation. Compared to related state-of-the-art surveys, the contribution of the current paper is a comprehensive and holistic review from the ML perspective, rather than of the architectural and technical aspects of existing building management systems, considering all types of ML tools, buildings, and several categories of applications. The paper ends with a summary discussion of the presented works, with a focus on lessons learned, challenges, and open and future directions of research in this area.
VANETs, as an essential component of intelligent transport systems, are attracting increasing attention. As multifunction nodes capable of transport, sensing, information processing, and wireless communication, vehicular nodes are more vulnerable to worms than conventional hosts. A worm spreading over vehicular networks not only seriously threatens the security of vehicular ad hoc networks but also imperils onboard passengers and public safety. It is therefore indispensable to study and analyze the characteristics of worm propagation on VANETs. In this paper, we first briefly introduce computer worms and then survey the recent literature on worm spreading on VANETs. The models developed for VANET worm spreading and several countermeasure strategies are compared and discussed.
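Many worm-propagation models build on classic epidemiological compartment models. As a hedged illustration (a generic textbook SIR model, not one taken from any specific surveyed paper), the spread of a worm among a population of nodes can be sketched in discrete time, with susceptible, infected, and recovered (patched) fractions:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete time step of a classic SIR epidemic model:
    susceptible nodes become infected at rate beta*s*i, and infected
    nodes are patched (recover) at rate gamma*i."""
    new_infections = beta * s * i
    recoveries = gamma * i
    return s - new_infections, i + new_infections - recoveries, r + recoveries

def simulate(s0, i0, beta, gamma, steps):
    """Simulate the epidemic and record the (s, i, r) trajectory."""
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        history.append((s, i, r))
    return history

# Illustrative parameters: 1% of nodes initially infected.
hist = simulate(0.99, 0.01, beta=0.5, gamma=0.1, steps=100)
```

VANET-specific models refine this picture with vehicle mobility, radio range, and network topology, which a well-mixed SIR model deliberately ignores.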
Organizations use diverse types of security solutions to prevent cyber-attacks. These solutions are provided by multiple vendors based on heterogeneous technological paradigms. Hence, it is challenging, if not impossible, to make these security solutions work in unison. Security orchestration aims at connecting multivendor security tools so they work as a unified whole that can effectively and efficiently interoperate to support the repetitive jobs of a security expert. Although security orchestration has gained significant importance in the security industry in recent years, no attempt has been made to systematically review and analyze the existing practices and solutions in this domain. This study aims to provide a comprehensive review of security orchestration to gather a general understanding of its drivers, benefits, and associated challenges. For this purpose, we carried out a Multivocal Literature Review (i.e., a type of Systematic Literature Review) covering both academic and grey (blogs, web pages, white papers) literature from January 2007 until July 2017. The results of data analysis and synthesis enable us to provide a working definition of security orchestration and to classify its main functionalities into three main areas: unification, orchestration, and automation. We have also identified the core components of security orchestration.
Cryptographic hash functions are widely used primitives whose purpose is to ensure the integrity of data. Hash functions are also used in conjunction with digital signatures to provide authentication and non-repudiation services. The Secure Hash Algorithm (SHA) family has been developed over time by the National Institute of Standards and Technology for security, optimal performance, and robustness; the best-known hash standards are SHA-1, SHA-2, and SHA-3. Security is the most notable criterion for evaluating hash functions. However, the hardware performance of an algorithm serves as a tiebreaker among the contestants when all other parameters (security, software performance, and flexibility) are of equal strength. The Field Programmable Gate Array (FPGA) is re-configurable hardware that supports a variety of design options, making it the best choice for implementing hash standards. In this survey, particular attention is devoted to FPGA optimization techniques for the three hash standards. The study covers several types of optimization techniques and their contributions to FPGA performance. Moreover, the article highlights the strengths and weaknesses of each optimization method and its influence on performance. We are optimistic that the study will be a useful resource, consolidating the efforts carried out on the SHAs and FPGA optimization techniques.
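As a minimal illustration of the integrity-checking role described above (using Python's standard `hashlib` rather than an FPGA implementation), any modification to the data changes its digest:

```python
import hashlib

def digest(data: bytes, algo: str = "sha256") -> str:
    """Compute a hex digest using a SHA-2 family standard."""
    return hashlib.new(algo, data).hexdigest()

original = b"important message"
fingerprint = digest(original)

# Integrity check: an unmodified message reproduces the digest;
# even a one-character change yields a completely different one.
assert digest(original) == fingerprint
assert digest(b"important message!") != fingerprint
```

The same `hashlib` interface exposes SHA-1 (`"sha1"`) and SHA-3 (`"sha3_256"`), so the three standard families discussed in the survey can be compared from software as well.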
With the high demand for wireless data traffic, WiFi networks have grown very rapidly because they provide high throughput and are easy to deploy. Recently, many papers have used WiFi for different sensing applications. This survey presents a comprehensive review of WiFi sensing applications drawn from more than 140 papers, grouping them into three categories: detection, recognition, and estimation. Detection applications solve binary classification problems, recognition applications address multi-class classification problems, and estimation applications obtain quantitative values for different tasks. Different WiFi sensing applications have different requirements for signal processing techniques and classification/estimation algorithms, and this survey summarizes those that are widely used. The survey also presents future WiFi sensing trends: integrating cross-layer network stack information, multi-device cooperation, and fusion with different sensors. These technologies enhance existing WiFi sensing capabilities and enable new WiFi sensing opportunities. The targets of future WiFi sensing may extend beyond humans to environments, animals, and objects.
Stylometry is text classification based on writing style, and it has a two-fold relation to information security. Among its suggested applications is the detection of deception, which could provide an important asset for, e.g., forum moderators or law enforcement. On the other hand, author deanonymization constitutes a privacy threat, the mitigation of which requires obfuscating the original text's style. The literature on both topics is surveyed, concluding that deception is not detectable from stylistic features in the same way that authorship is. Further, the suggested methods for automatic style obfuscation are deemed inadequate for mitigating large-scale author deanonymization.
Volunteer Computing is a kind of distributed computing that harnesses the aggregated spare computing resources of volunteer devices. It provides a cheaper and greener alternative computing infrastructure that can complement dedicated, centralized, and expensive data centers. The aggregated idle computing resources of volunteered computers are being utilized to provide much-needed computing infrastructure for compute-intensive tasks such as scientific simulations and big data analysis. However, the use of Volunteer Computing is still dominated by scientific applications, and only a very small fraction of the potential volunteer nodes participate. This paper provides a comprehensive survey of Volunteer Computing, covering key technical and operational issues such as security, task distribution, resource management, and incentive models. The paper also presents a taxonomy of Volunteer Computing systems, together with discussions of the characteristics of specific systems in each category. To harness the full potential of Volunteer Computing and make it a reliable alternative computing infrastructure for general applications, we need to improve existing techniques and devise new mechanisms. Thus, this paper also sheds light on important issues for the future research and development of Volunteer Computing systems, with the aim of making them a viable alternative computing infrastructure.
Event Stream Processing (ESP) has evolved as the leading paradigm for processing low-level event streams to gain high-level information that is valuable to applications, e.g., in the Internet of Things. An ESP system is a distributed middleware that deploys a network of operators between event sources, such as sensors, and the applications. ESP systems typically face intense and highly dynamic data streams, and to handle these streams, parallelization and elasticity are important properties of modern ESP systems. The current research landscape provides a broad spectrum of methods for parallelization and elasticity in ESP, each with specific assumptions and a specific focus on particular aspects of the problem. However, the literature lacks a comprehensive overview and categorization of the state of the art in ESP parallelization and elasticity, which is necessary to consolidate the state of the research and to plan future research directions on this basis. Therefore, in this survey, we study the literature and develop a classification of current methods for both parallelization and elasticity in ESP systems. Further, we summarize our classification in decision trees that help users more easily find the methods that best fit their specific needs.
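As a toy illustration of one widely used parallelization method covered by such classifications, key-based splitting routes each event to one of several parallel operator instances so that all events with the same key reach the same instance, keeping per-key state local. The event format and function names below are illustrative assumptions:

```python
def route(events, parallelism, key_fn):
    """Key-based (hash) splitting of a stream across `parallelism`
    operator instances: events sharing a key land on one instance."""
    instances = [[] for _ in range(parallelism)]
    for event in events:
        instances[hash(key_fn(event)) % parallelism].append(event)
    return instances

# Events are (key, payload) pairs; integer keys keep hashing stable.
out = route([(0, "a"), (1, "b"), (2, "c"), (0, "d")], 2, lambda e: e[0])
```

Real ESP engines perform this routing continuously and adjust `parallelism` at runtime, which is exactly where the surveyed elasticity mechanisms come in.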
The gap between the speed of memory systems and processors has motivated a large body of work on hiding or lessening the delay of memory accesses. Data prefetching is a well-known and widely used approach to hiding data access latency, and has been shown to significantly improve processor performance by overlapping computation with data delivery. There is a wide variety of prefetching techniques, each suitable for a particular class of workloads. This survey analyzes state-of-the-art hardware data prefetching techniques and sheds light on their design trade-offs. Moreover, we quantitatively compare state-of-the-art prefetching techniques for accelerating server workloads. For a fair comparison, we choose a target architecture based on a contemporary server processor and stack competing prefetchers on top of it. For each prefetching technique, we thoroughly evaluate the performance improvement along with the imposed overheads. The goal of this survey is to summarize the status of state-of-the-art data prefetchers and motivate further work on improving data prefetching techniques.
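As a concrete, deliberately simplified illustration of one classic hardware technique in this space, the following Python sketch models a per-PC stride prefetcher: it remembers the last address seen by each load instruction and, once the same stride repeats, predicts the next address. The table layout and confidence handling are illustrative assumptions, not a description of any surveyed design.

```python
class StridePrefetcher:
    """Minimal per-PC stride prefetcher model."""

    def __init__(self):
        self.table = {}  # pc -> (last_addr, last_stride, confident)

    def access(self, pc, addr):
        """Observe a load; return a predicted prefetch address or None."""
        prefetch = None
        if pc in self.table:
            last_addr, last_stride, confident = self.table[pc]
            stride = addr - last_addr
            if stride == last_stride and stride != 0:
                prefetch = addr + stride  # stride confirmed: prefetch next
                confident = True
            self.table[pc] = (addr, stride, confident)
        else:
            self.table[pc] = (addr, 0, False)
        return prefetch

# A load walking an array with a 64-byte stride triggers prefetches
# once the stride has been observed twice.
pf = StridePrefetcher()
issued = [pf.access(pc=0x40, addr=a) for a in (100, 164, 228, 292)]
```

Real designs add finite table capacity, replacement, multi-entry confidence counters, and prefetch-degree/distance controls, which are among the trade-offs the survey examines.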
Understanding visual interestingness is a challenging task that has been addressed by researchers in various disciplines, from humanities and psychology to, more recently, computer vision and multimedia. Automatic systems are increasingly needed to help users navigate the growing amount of visual information available, whether on the web or on personal devices, for example by selecting relevant and interesting content. Previous studies indicate that visual interest is highly related to concepts like arousal, unusualness, and complexity, with these connections established through psychological theories, user studies, or computational approaches. However, the link between visual interestingness and other related concepts has been only partially explored so far, for example by considering only a limited subset of covariates at a time. In this paper, we propose a comprehensive survey of visual interestingness and related concepts, aiming to bring together works based on different approaches, highlighting controversies, and identifying links that have not yet been fully investigated. Finally, we present some open questions that may be addressed in future work.
Binary rewriting is the process of changing the semantics of a program without having its source code at hand. It is used for diverse purposes such as emulation (e.g., QEMU), optimization (e.g., DynInst), observation (e.g., Valgrind), and hardening (e.g., SecondWrite). This survey gives detailed insight into the development and state of the art of binary rewriting by reviewing 56 publications from 1992 to 2018. First, we provide an in-depth investigation of the challenges, and the respective solutions, of each step towards successful binary rewriting. Based on our findings, we establish a thorough categorization of binary rewriting approaches with respect to their use case, applied analysis technique, code-transformation method, and code-generation technique. Furthermore, we contribute a comprehensive mapping between binary rewriting tools, applied techniques, and their domains of application. Our findings emphasize that although much work has been done over the last decades, most of the effort has targeted the x86 architecture, ignoring other instruction set architectures such as ARM or MIPS. This is of special interest, as these kinds of architectures are often used in the emerging field of the Internet of Things. To the best of our knowledge, our survey is the first comprehensive overview of the complete binary rewriting process.
Model comparison has been widely used to support many tasks in model-driven software development, and many comparison techniques have therefore been proposed over the last decades. However, academia and industry have overlooked the production of a panoramic view of the current literature; hence, a thorough understanding of the state-of-the-art techniques remains limited and inconclusive. This article, therefore, focuses on providing a classification and a thematic analysis of studies on the comparison of software design models. We carried out a Systematic Mapping Study, following well-established guidelines, to answer nine research questions. In total, 55 articles (out of 4132) were selected from ten widely recognized electronic databases after a careful filtering process. The main results are that the majority of the primary studies (1) provide coarse-grained comparison techniques for general-purpose diagrams, (2) adopt graphs as the principal data structure and compare software design models considering structural properties only, (3) pinpoint commonalities and differences between software design models rather than scoring their similarity, (4) propose new techniques while neglecting the production of empirical knowledge from experimental studies, and (5) propose automatic techniques without demonstrating their effectiveness. Finally, this article highlights some challenges and further directions that might be explored in upcoming studies.
Many scientists use scripts to design experiments, since scripting languages deliver sophisticated data structures, simple syntax, and the ability to obtain results without spending much time on designing systems. While scripts provide adequate features for scientific programming, they fail to guarantee the reproducibility of experiments, and they present challenges for data management and understanding. These challenges include, but are not limited to: understanding each trial (experiment execution); connecting several trials to the same experiment; tracking the differences between trials; and relating results to the experiment's inputs and parameters. Such challenges can be addressed with the help of provenance, and multiple approaches, with different techniques, have been proposed to support collecting, managing, and analyzing provenance in scripts. In this work, we propose a classification taxonomy for the existing state-of-the-art techniques and classify them accordingly. The identification of state-of-the-art approaches followed an exhaustive protocol of forward and backward literature snowballing.
Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussions of their effectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. This article represents the first survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the field.
Biometric research is directed increasingly towards Wearable Biometric Systems (WBS) for user authentication and identification. However, prior to engaging in WBS research, how their operational dynamics and design considerations differ from those of Traditional Biometric Systems (TBS) must be understood. While the current literature is cognizant of those differences, there is no effective work that summarizes the factors where TBS and WBS differ, namely, their modality characteristics, performance, security and privacy. To bridge the gap, this paper accordingly reviews and compares the key characteristics of modalities, contrasts the metrics used to evaluate system performance, and highlights the divergence in critical vulnerabilities, attacks and defenses for TBS and WBS. It further discusses how these factors affect the design considerations for WBS, the open challenges and future directions of research in these areas. In doing so, the paper provides a big-picture overview of the important avenues of challenges and potential solutions that researchers entering the field should be aware of. Hence, this survey aims to be a starting point for researchers in comprehending the fundamental differences between TBS and WBS before understanding the core challenges associated with WBS and its design.
A group key agreement (GKA) protocol enables a group of users to negotiate a one-time session key and to protect subsequent group-oriented communication with this session key across an unreliable network. The number of communication rounds is one of the main concerns for practical applications in which the number of participants is considerable, and it is critical for GKA protocols to run in a fixed, constant number of rounds to secure such applications. In light of the overwhelming variety and multitude of constant-round GKA protocols, this paper surveys them from a series of perspectives to provide better comprehension for researchers and scholars. Concretely, this article captures the state of the art of constant-round GKA protocols by analyzing their design rationale, examining their frameworks and security models, and evaluating all discussed protocols in terms of efficiency and security properties. In addition, this article discusses extensions of constant-round GKA protocols, including dynamic membership updating, password-based, affiliation-hiding, and fault-tolerant variants. In conclusion, this article also points out a number of interesting future directions.
A Distributed Denial of Service (DDoS) attack is recognized as one of the most catastrophic attacks against various digital communication entities. Software-defined networking (SDN) is an emerging technology for computer networks that uses open protocols to control the switches and routers placed at the network edges through specialized open programmable interfaces. In this paper, a detailed study of DDoS threats prevalent in SDN is presented. First, SDN features are examined from the perspective of security, and then SDN security features are assessed. Further, two viewpoints on protecting networks against DDoS attacks are elaborated. In the first view, SDN utilizes its abilities to secure conventional networks. In the second view, SDN may itself become a victim of such threats because of its centralized control mechanism. The main focus of this work is on discovering critical security implications in SDN while reviewing current ongoing research. By emphasizing the available state-of-the-art techniques, an extensive review of advances in SDN security is provided to researchers and IT communities.
Virtualization is the underlying technology behind the success of cloud computing. It runs multiple operating systems simultaneously by means of virtual machines. Through virtual machine live migration, virtualization efficiently manages resources within a cloud datacenter with minimal service interruption. Pre-copy and post-copy are the traditional techniques of virtual machine memory live migration; of the two, pre-copy is widely adopted due to its reliability against destination-side crashes. A large number of migrations take place within datacenters for resource management purposes. Virtual machine live migration affects the performance of the migrated virtual machine as well as overall system performance, and hence it needs to be efficient. In this paper, several pre-copy based methods for efficient virtual machine memory live migration are classified and discussed. The paper compares these methods on several parameters, such as their approaches, goals, limitations, performance metrics evaluated, virtualization platform used, and workloads used. Further, the paper presents an analytical comparison between different virtualized benchmark platforms to clarify implementation aspects, and identifies some open areas related to VM live migration along with their issues.
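The iterative structure of pre-copy migration can be sketched abstractly as follows: after a full initial memory copy, pages dirtied during each copy round are re-sent until the dirty set is small enough to stop the VM briefly and copy the remainder (stop-and-copy). The page counts, threshold, and function name are illustrative assumptions, not any surveyed method:

```python
def precopy_rounds(total_pages, dirty_per_round, threshold):
    """Return (pages_sent_while_running, stop_and_copy_pages).
    `dirty_per_round` lists how many pages are dirtied while each
    copy round is in progress."""
    sent = total_pages                 # round 0: transfer all memory
    for dirty in dirty_per_round:
        if dirty <= threshold:         # dirty set small: stop-and-copy
            return sent, dirty
        sent += dirty                  # re-send pages dirtied last round
    # Rounds exhausted without converging: force stop-and-copy anyway.
    return sent, dirty_per_round[-1] if dirty_per_round else 0

# Example: 1000-page VM whose write rate tapers off round by round.
sent, downtime_pages = precopy_rounds(1000, [200, 80, 30, 10, 4], threshold=5)
```

The sketch makes the central trade-off visible: a lower threshold shortens downtime at the cost of more rounds and more total data sent, and a VM that dirties pages faster than they can be copied never converges without the forced cutoff.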
Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. Specifically, we present trends in DNN architectures and the resulting implications on parallelization strategies. We discuss the different types of concurrency in DNNs; synchronous and asynchronous stochastic optimization; distributed system architectures; communication schemes; and performance modeling. Based on these approaches, we extrapolate potential directions for parallelism in deep learning.
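To make one of the discussed concurrency types concrete, here is a minimal Python sketch of synchronous data-parallel SGD with gradient averaging: each worker computes a gradient on its own data shard, the gradients are averaged (standing in for an all-reduce), and every replica applies the same update so the weights stay consistent. The toy least-squares problem and all names are illustrative assumptions, not any specific surveyed system:

```python
def sgd_step_data_parallel(w, shards, grad_fn, lr):
    """One synchronous data-parallel SGD step over all workers."""
    grads = [grad_fn(w, shard) for shard in shards]   # one per worker
    avg = [sum(g) / len(grads) for g in zip(*grads)]  # all-reduce (mean)
    return [wi - lr * gi for wi, gi in zip(w, avg)]

def grad_fn(w, shard):
    # Gradient of mean squared error for the toy model y ≈ w[0] * x.
    return [sum(2 * (w[0] * x - y) * x for x, y in shard) / len(shard)]

# Two workers, each holding one (x, y) sample consistent with w* = 2.
w = [0.0]
shards = [[(1.0, 2.0)], [(2.0, 4.0)]]
for _ in range(50):
    w = sgd_step_data_parallel(w, shards, grad_fn, lr=0.1)
```

Asynchronous variants drop the barrier implicit in the averaging step, trading gradient staleness for throughput, which is one of the trade-offs the survey analyzes.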
Signals obtained from a patient, i.e., bio-signals, can be used to analyze the patient's health. One such bio-signal is the electrocardiogram (ECG), which is vital as it represents the functioning of the heart. Any abnormal behavior in the ECG signal is an indicative measure of a malfunctioning heart, termed an arrhythmia condition. Due to the complexities involved, such as a lack of human expertise and a high probability of misdiagnosis, long-term monitoring based on computer-aided diagnosis (CADiag) is preferred. Various CADiag techniques exist for arrhythmia diagnosis, each with its own benefits and limitations. In this article, we classify arrhythmia detection approaches that make use of CADiag based on the technique utilized. A vast number of techniques useful for arrhythmia detection, their performance, the complexities involved, and comparisons among different variants of the same technique and across different techniques are discussed.
Blockchain offers a totally different approach to storing information, making transactions, performing functions, and establishing trust in an open environment. Many consider blockchain a technology breakthrough for cryptography and cybersecurity, with use cases ranging from globally deployed cryptocurrency systems like Bitcoin to smart contracts, smart grids over the Internet of Things, and so forth. Although blockchain has received growing interest in both academia and industry over the past five years, the security and privacy of blockchains continue to be at the center of the debate when deploying blockchain in different applications. This paper presents a comprehensive overview of the security and privacy of blockchain. To facilitate the discussion, we first describe the concept of blockchains for online transactions. Then we describe the basic security properties inherent in Bitcoin-like cryptocurrency systems and the additional security properties desired in many blockchain applications. Finally, we review the security and privacy techniques for achieving these security properties in blockchain-based systems. We hope this survey helps readers gain an in-depth understanding of the security and privacy of blockchain with respect to concepts, attributes, techniques, and systems.
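As a minimal illustration of the hash-chaining at the heart of blockchain integrity (a toy sketch, not Bitcoin's actual block format, and with no consensus or proof-of-work), each block commits to the hash of its predecessor, so tampering with any earlier block breaks every later link:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's full contents, including its predecessor's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transactions})

def verify(chain):
    """Valid iff every block's `prev` matches its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["alice->bob: 5"])
add_block(chain, ["bob->carol: 2"])
assert verify(chain)
chain[0]["tx"][0] = "alice->bob: 500"   # tamper with history
assert not verify(chain)
```

Real systems layer consensus, signatures, and Merkle trees on top of this chaining; the sketch shows only why retroactive modification is detectable.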
The emergence of the Internet of Things and the widespread deployment of diverse computing systems have led to the formation of heterogeneous multi-agent systems (MAS) to complete a variety of tasks. Motivated to highlight the state of the art of existing MAS while identifying their limitations, remaining challenges, and possible future directions, we survey recent contributions to the field. We focus on robot agents and emphasize the challenges of MAS sub-fields including task decomposition, coalition formation, task allocation, perception, and multi-agent planning and control. While some components have seen more advancement than others, more research is required before effective autonomous MAS can be deployed in real smart city settings, which are less restrictive than the assumed validation environments of MAS. Specifically, more autonomous end-to-end solutions need to be experimentally tested and developed, incorporating natural language ontologies and dictionaries to automate complex task decomposition and leveraging big data advancements to improve perception algorithms for robotics.
Internet of Things (IoT) devices are gaining momentum as mechanisms to authenticate the user carrying them. It is therefore critical to ensure that such a user is not impersonated at any time, a need known as Continuous Authentication (CA). Since 2007, a plethora of IoT-based CA academic research and industrial contributions have been proposed. We offer a comprehensive overview of 62 research papers covering the main components of a CA system. The status of the industry is studied as well, covering 37 market contributions, research projects, and related standards. Finally, we present lessons learned to foster further research in this area.
Modeling the cost of communications in computer clusters is an important and challenging problem: it provides insight into the design of the communication patterns of parallel scientific applications and mathematical kernels, and sets a clear ground for optimizing their deployment on increasingly complex HPC infrastructure. This survey presents the state of the art in analytic communication performance models, providing a sufficiently detailed description of particularly noteworthy efforts. It gives background on how different performance models represent the underlying platform and shows the evolution of these models over time, from early clusters of single-core processors to present-day multi-core and heterogeneous platforms. Promising directions for future research in analytic communication performance modeling conclude the survey.
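As a concrete instance of the kind of analytic model surveyed, the classic Hockney point-to-point model estimates transfer time as a fixed per-message latency plus a per-byte bandwidth cost. The parameter values below are invented for illustration only, not measurements of any real machine.

```python
def hockney_time(m_bytes, alpha, beta):
    """Hockney model: alpha is per-message latency (s),
    beta is inverse bandwidth (s/byte), m_bytes is message size."""
    return alpha + beta * m_bytes

# Illustrative (made-up) parameters:
# 2 microseconds latency, 10 GB/s bandwidth -> beta = 1e-10 s/byte.
alpha, beta = 2e-6, 1e-10

# Small messages are latency-bound; large messages are bandwidth-bound.
print(hockney_time(1_000, alpha, beta))        # ~2.1e-6 s
print(hockney_time(100_000_000, alpha, beta))  # ~1e-2 s
```

Later models in this lineage (e.g., LogP/LogGP-style families) refine the same idea by separating overhead, gap, and network latency into distinct parameters.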
Insider threats are among today's most challenging cybersecurity issues and are not well addressed by commonly employed security solutions. Despite the several scientific works published in this domain, we argue that the field can benefit from a structural taxonomy and a novel categorization of research. The objective of our categorization is to systematize knowledge in insider threat research, leveraging the existing grounded theory method for a rigorous literature review. The proposed categorization depicts the workflow among particular categories, namely: 1) incidents and datasets, 2) analysis of attackers, 3) simulations, and 4) defense solutions. Our survey will enhance researchers' efforts in the insider threat domain because it provides: a) a novel structural taxonomy that contributes to an orthogonal classification of incidents and defines the scope of the defense solutions employed against them; b) an updated overview of publicly available datasets that can be used to benchmark new detection solutions against prior work; c) references to existing case studies and frameworks modeling insiders' behavior, useful for reviewing defense solutions or extending their coverage; and d) a discussion of existing trends and further research directions in the insider threat domain.
Emerging context-aware applications in ubiquitous computing demand accurate, real-time location information for humans and objects. Indoor location-based services can be delivered through different types of technology; one recent approach utilizes LED lighting as a medium for Visible Light Communication (VLC). The ongoing development of solid-state lighting (SSL) is driving the widespread adoption of LED lights and thereby laying the ground for a ubiquitous wireless communication network built on lighting systems. Considering the recent advances in implementing Visible Light Positioning (VLP) systems, this paper reviews VLP systems and focuses on the performance evaluation of experimental achievements in location sensing through LED lights. We outline the performance evaluation of different prototypes by introducing new performance metrics, their underlying principles, and their notable findings. Furthermore, the study synthesizes the fundamental characteristics of VLC-based positioning systems that need to be considered, identifies several technology gaps in the current state of the art for future research endeavors, and summarizes our lessons learned towards the standardization of performance evaluation.
The Information-Centric Networking paradigm is a Future Internet approach that aims to tackle the Internet's architectural problems and inefficiencies by shifting the central entity of the network architecture from hosts to content items. This paradigm change potentially enables a future Internet with better performance, reliability, scalability, and suitability for wireless and mobile communication. It also provides new intrinsic means to deal with some common attacks on the Internet architecture, such as denial of service. However, the new paradigm also introduces new security challenges that need to be addressed to ensure its ability to support current and future Internet requirements. This paper surveys and summarizes ongoing research on the security aspects of information-centric networks, discussing vulnerabilities, attacks, and proposed mitigations. We also discuss open challenges and propose future directions for research in information-centric network security.
Recent global smart city efforts resemble the establishment of electricity networks when electricity was first invented, which marked the start of a new era of selling electricity as a utility. A century later, in the smart era, the network that delivers services goes far beyond a single commodity like electricity. Supplemented by a well-established Internet infrastructure that can run an endless number of applications, the abundant processing and storage capabilities of the cloud, resilient edge computing, and sophisticated data analysis such as machine learning and deep learning, the already-booming Internet of Things (IoT) movement makes this new era far more exciting. In this article, we present a multi-faceted survey of machine intelligence in modern smart city implementations. We partition smart city infrastructure into application, sensing, communication, security, and data planes, and put an emphasis on the data plane as the mainstay of computing and data storage. We investigate i) centralized and distributed implementations of the data plane's physical infrastructure and ii) the complementary application of data analytics, machine learning, deep learning, and data visualization to implement robust machine intelligence in a smart city software core. We conclude with pointers to open issues and challenges.
Anomaly detection has found applications in diverse research areas. In network security, it has been widely used for discovering network intrusions and malicious events. Detection of anomalies in quantitative data has received considerable attention in the literature and has a venerable history. By contrast, and despite the widespread use of categorical data in practice, anomaly detection in categorical data has received relatively little attention, because it is a challenging problem. One such challenge is that anomaly detection techniques usually depend on identifying representative patterns and then measuring distances between objects and these patterns; however, neither identifying patterns nor measuring distances is straightforward in categorical data. Fortunately, several papers focusing on the detection of anomalies in categorical data have been published in the recent literature. In this article, we provide a comprehensive review of research on the anomaly detection problem in categorical data. We categorize existing algorithms into different approaches based on the conceptual definition of anomalies they use. For each approach, we survey anomaly detection algorithms and then show the similarities and differences among them.
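A minimal sketch of one pattern-based idea, assuming per-attribute value frequencies stand in for "representative patterns": records whose attribute values are rare across the dataset receive higher anomaly scores. This is an illustrative toy with made-up data, not a specific algorithm from the surveyed literature.

```python
from collections import Counter
import math

def frequency_scores(records):
    """Score each record by the rarity of its attribute values:
    rare category values in each column yield a higher anomaly score."""
    n_cols = len(records[0])
    n = len(records)
    # One value-frequency table ("pattern") per attribute column.
    counts = [Counter(row[c] for row in records) for c in range(n_cols)]
    # Sum of negative log relative frequencies acts as a "distance"
    # from the common patterns; no numeric metric space is needed.
    return [sum(-math.log(counts[c][row[c]] / n) for c in range(n_cols))
            for row in records]

data = [
    ("http", "GET",  "200"),
    ("http", "GET",  "200"),
    ("http", "POST", "200"),
    ("ftp",  "GET",  "500"),  # rare protocol and status -> highest score
]
scores = frequency_scores(data)
print(max(range(len(data)), key=scores.__getitem__))  # index 3
```

Real categorical anomaly detectors replace this simple marginal-frequency scoring with richer notions of patterns, such as frequent itemsets or conditional dependencies between attributes.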