A Survey on Malicious Domains Detection through DNS Data Analysis
Three-dimensional data are increasingly prevalent across biomedical and social domains. Notable examples are gene-sample-time, individual-feature-time, or node-node-time data, generally referred to as observation-attribute-context data. The unsupervised analysis of three-dimensional data can be pursued to discover putative biological modules, disease progression patterns, or communities of individuals with coherent behavior, and is thus key to enhancing the understanding of complex biological, individual, and societal systems. Although clustering can be applied to group observations, it is of limited potential since observations in three-dimensional data domains are typically only meaningfully correlated on subspaces of the overall space. Biclustering tackles this challenge but disregards the third dimension of the data. In this context, triclustering -- the discovery of coherent subspaces within three-dimensional data -- has been largely researched to tackle these problems. Despite the diversity of contributions in this field, a structured view of the major requirements of this task, the allowed homogeneity criteria (including coherency, structure, quality, locality and orthonormality criteria), and the algorithmic approaches is still lacking. To address this, this work formalizes the triclustering task and its scope; introduces a taxonomy to categorize contributions in the field; provides a comprehensive comparison of state-of-the-art triclustering algorithms according to their behavior and output; and lists relevant real-world applications.
Context, such as a user's search history, demographics, devices, and surroundings, has become prevalent in various domains of information seeking and retrieval, such as mobile search, task-based search, and social search. While evaluation is central to and has a long history in information retrieval, it faces the big challenge of designing an appropriate methodology that embeds context into the evaluation settings. In this survey, we summarize, in a unified view, a wide range of recent progress in contextual information retrieval evaluation that leverages diverse context dimensions and uses different principles, methodologies, and levels of measurement. More specifically, this survey aims to fill two main gaps in the literature: first, it provides a critical summary and comparison of existing contextual information retrieval evaluation methodologies and metrics according to a simple stratification model; second, it points out the impact of context dynamicity and data privacy on evaluation design. Finally, we recommend promising research directions for future investigation.
The increasing number of contemporary Linux applications in data centers often generates large quantities of real-time system call traces, which are not well suited to traditional host-based intrusion detection systems (HIDS) deployed on each single host. Training and testing data mining models with system call traces on a single host with static computing and storage capacity is time-consuming, and the intermediate datasets cannot be handled. Maintaining and updating HIDS installed on each physical or virtual host is cumbersome, and distributed system call analysis can hardly be performed to detect complicated, distributed attacks spanning multiple hosts. This paper provides a review of the development of system-call-based HIDS as well as future research directions. Algorithms and techniques relevant to system-call-based HIDS are evaluated, including feature extraction methods and various data mining algorithms. Modern application areas for system-call-based HIDS are summarized and related works are investigated. HIDS dataset issues, including currently available datasets with system calls and approaches for researchers to generate their own datasets, are discussed. Potential HIDS solutions in regard to cloud computing and big data tools are also provided.
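As an illustration of the feature extraction methods such surveys evaluate, one widely used representation of system call traces is sliding-window n-grams, with anomaly scoring based on n-grams unseen in a normal profile. The sketch below is a minimal, hypothetical example; the trace contents and the scoring rule are invented for illustration, not drawn from any specific HIDS:

```python
from collections import Counter

def syscall_ngrams(trace, n=3):
    """Sliding-window n-gram counts over a system call trace."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

def anomaly_score(trace, profile, n=3):
    """Fraction of the trace's n-grams never seen in the normal profile."""
    grams = syscall_ngrams(trace, n)
    unseen = sum(c for g, c in grams.items() if g not in profile)
    return unseen / max(sum(grams.values()), 1)

# Build a "normal" profile from a benign trace, then score new traces.
normal = ["open", "read", "write", "close", "open", "read", "write", "close"]
profile = syscall_ngrams(normal)
```

A trace that replays only known behavior scores 0, while one containing an unexpected `execve` in the middle of the pattern yields a high score, which is the basic intuition behind sequence-based HIDS detectors.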
Brainwaves, which reflect brain electrical activity and have long been studied in cognitive neuroscience, have recently been proposed as a promising biometric approach due to their unique advantages of confidentiality, resistance to impersonation, sensitivity to emotional and mental state, continuous nature, and cancellability. Recent research efforts have explored many possible ways of using brain biometrics and demonstrated that they are a promising candidate for more robust and secure person identification and authentication. Although existing research on brain biometrics has obtained some intriguing insights, a lot of work is still necessary to achieve a reliable, ready-to-deploy brain biometric system. This article aims to provide a detailed survey of the current literature and outline the scientific work conducted on brain biometric systems. It provides an up-to-date review of state-of-the-art acquisition, collection, processing, and analysis of brainwave signals, publicly available databases, feature extraction and selection, and classifiers. Furthermore, it highlights some of the emerging open research problems for brain biometrics, including multimodality, security, permanence, and stability.
Large Scale Ontology Matching: State of the Art Analysis
Stress is a major concern in daily life that imposes significant and growing health and economic costs on society every year. Stress and driving are a dangerous combination that can lead to life-threatening situations, as a large number of road traffic crashes occur every year due to driver stress. In addition, the rate of many general health issues caused by work-related chronic stress in drivers who work in public and private transport is greater than in many other occupational groups. Therefore, an in-car early warning system for driver stress levels is needed to continuously predict dangerous driving situations and proactively alert the driver, from the perspective of safe and comfortable driving. With recent developments in ambient intelligence, such as sensing technologies, pervasive devices, context recognition, and communications, it is becoming feasible to comfortably measure combinations of different sensed modalities to recognise driver stress automatically. This survey reviews the most recent research on automatic driver stress level detection based on different sensors and data. Different computational techniques that have been used in this domain for data analysis are investigated. The important methodological issues that hinder the implementation of such a system are discussed, and future research directions are offered.
A recent trend both in academia and industry is to explore the use of deception techniques to achieve proactive attack detection and defense, to the point of marketing intrusion deception solutions as zero-false-positive intrusion detection. However, there is still a general lack of understanding of deception techniques from a research perspective, and it is not clear how the effectiveness of these solutions can be measured and compared with other security approaches. To shed light on this topic, we introduce a comprehensive classification of existing solutions and survey the current application of deception techniques in computer security. Furthermore, we analyze several open research directions, including the design of strategies to help defenders design and integrate deception within a target architecture, the study of automated ways to deploy deception in complex systems, and, most importantly, the design of new techniques and experiments to evaluate the effectiveness of existing deception techniques. Finally, we discuss the limitations of existing solutions and provide insights for further research on this topic.
Reproducibility is widely considered to be an essential requirement of the scientific process. However, a number of serious concerns have been raised recently, questioning whether today's computational work is adequately reproducible. In principle, it should be possible to specify a computation in sufficient detail that anyone can reproduce it exactly. But in practice, there are fundamental, technical, and social barriers to doing so. The many objectives and meanings of reproducibility are discussed within the context of scientific computing. Many technical barriers to reproducibility are described, extant approaches are surveyed, and open areas of research are identified.
Testing is one of the most important phases in the development of any product or software. Various types of software testing exist, and the ones performed must meet the needs of the software. Regression testing is one of the crucial phases of testing, in which the program is tested against the original test build along with the modifications. Very few approaches and methodologies provide a real tool for test case generation. A tool is required that can accept requirements and generate test cases for the first version and then for the versions with changes. Various studies have been analyzed, focusing on test case generation and its approach towards web applications. We provide a detailed study of regression test case generation and its approach towards web applications. It has been found that there is a need for an automated regression tool that can generate regression test cases based on user requirements taken directly by the tool, along with the inputs required for testing. These test cases have to be generated and implemented by the tool so that a reduction in overall effort and cost can be achieved.
Cloud computing has been regarded as an emerging approach to provisioning resources and managing applications. It provides attractive features, such as the on-demand model, scalability enhancement, and management cost reduction. However, cloud computing systems continue to face problems such as hardware failures, overloads caused by unexpected workloads, or energy waste due to inefficient resource utilization, all of which result in resource shortages and application issues such as delays or eventual saturation. A paradigm named brownout has been applied to handle these issues by adaptively activating or deactivating optional parts of applications or services to manage resource usage in cloud computing systems. Brownout has successfully been shown to avoid overloads due to changes in the workload and to achieve better load balancing and energy saving. This paper proposes a taxonomy of the brownout approach for managing resources and applications adaptively in cloud computing systems and carries out a comprehensive survey. It identifies open challenges and offers future research directions.
Recommender systems are one of the most successful applications of data mining and machine learning technology in practice, and significant technological advances have been made over the last two decades. Academic research in the field in the recent past was strongly fueled by the increasing availability of large datasets containing user-item rating matrices. Many of these works were therefore based on a problem abstraction where only a single user-item interaction is considered in the recommendation process. In many application domains, however, multiple user-item interactions of different types can be recorded over time, and a number of recent works have shown that this information can be used to build richer individual user models and to discover additional behavioral patterns that can be leveraged in the recommendation process. In this work we review existing works that consider information from such sequentially ordered user-item interaction logs when recommending. In addition, we discuss problem settings where the sequence in which items can be recommended is subject to strict or weak ordering constraints. We propose a categorization of the corresponding recommendation tasks and goals, summarize existing algorithmic solutions, discuss methodological approaches when benchmarking what we call sequence-aware recommender systems, and outline open challenges in the area.
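A minimal illustration of exploiting sequentially ordered interaction logs is a first-order Markov model that recommends the most frequent next item given a user's last interaction. The sessions and item names below are invented for the sketch; real sequence-aware recommenders are considerably richer:

```python
from collections import defaultdict, Counter

def fit_transitions(sessions):
    """Count item-to-item transitions across ordered interaction logs."""
    trans = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            trans[current][nxt] += 1
    return trans

def recommend(trans, last_item, k=2):
    """Return the k most frequent successors of the user's last item."""
    return [item for item, _ in trans[last_item].most_common(k)]

# Four toy sessions of item identifiers, in interaction order.
sessions = [["a", "b", "c"], ["a", "b", "d"], ["b", "c"], ["a", "b", "c"]]
model = fit_transitions(sessions)
```

Given a user whose last interaction was item "b", the model recommends "c" first, since "c" followed "b" in three of the four sessions.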
Malicious software still threatens users on a daily basis, and its evolution ranges from social-engineering-based bankers to advanced persistent threats (APTs). Recent research and discoveries have presented a wide range of anti-analysis and evasion techniques; in-memory attacks, such as Return-Oriented Programming (ROP); and system subversion, including BIOS and hypervisor attacks. This work presents a survey on techniques able to detect, mitigate, and analyze these kinds of attacks, which require transparent and fine-grained environments as analysis resources. We cover current tools' limitations, such as not being fully transparent, and introduce systems and techniques to overcome and/or mitigate these constraints. The work presents approaches based on hypervisor introspection and System Management Mode (SMM) instrumentation, as well as some hardware-based ones. We also present some threats based on the same techniques. Our main goal is to give the reader a broader and more comprehensive understanding of recently surfaced tools and techniques.
The design of usable and useful products and services is supported by an understanding of users' needs and experiences. For designers of products and services of every kind, engaging users is a priority. However, to date, there has been limited analysis of how HCI and computer science research deals with engagement. Questions persist concerning the conception, abstraction, and measurement of this concept. This paper presents a systematic review of engagement, drawing upon a final corpus of 351 papers and 102 unique definitions of engagement. We describe the diversity of interpretation, theory, and measurement of engagement, along with strategies for the design of engaging experiences. We map the current state of engagement research, discuss the value of the concept and its relationship to other popular terms, and present a set of guidelines and opportunities for future research.
In an ultra-large-scale storage system, data is distributed across multiple nodes. Access to this distributed data is through its metadata, maintained by multiple metadata servers. This metadata holds information about the physical position of distributed data and access privileges. The efficiency of a storage system depends upon the effective management of such metadata; inefficient metadata management may lead to higher lookup times and large memory overheads. This paper presents an extensive systematic literature review of scalable metadata management techniques in general, and in very large distributed storage systems in particular. The different metadata distribution techniques lead to different taxonomies. The evaluation of these techniques has been carried out for metadata management, and the benefits and challenges of metadata management in the different techniques are reported. Further, the paper raises awareness of the potential benefits of metadata management in large distributed storage systems and identifies the need to develop efficient metadata management techniques. It investigates the techniques based on parameters such as metadata distribution, in-memory caching, load balancing, migration cost, lookup time, memory overheads, metadata operation costs, scalability, availability, and locality. Finally, it discusses the existing challenges for researchers to stay abreast of and carry out research in metadata management techniques.
Pointwise anomaly detection and change detection focus on the study of individual data instances; however, an emerging area of research involves groups or collections of observations. From applications in high-energy particle physics to healthcare collusion, group deviation detection techniques result in novel research discoveries, mitigation of risks, prevention of malicious collaborative activities, and other interesting explanatory insights. In particular, static group anomaly detection is the process of identifying groups that are not consistent with regular group patterns, while dynamic group change detection assesses significant differences in the state of a group over a period of time. Since group anomaly detection and group change detection share fundamental ideas, this survey paper provides a clearer and deeper understanding of group deviation detection research in both static and dynamic situations.
Register allocation (assigning variables to processor registers or memory) and instruction scheduling (reordering instructions to increase throughput) in a compiler are essential tasks for generating efficient assembly code. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can generate optimal code, can accurately capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This survey reviews combinatorial optimization for register allocation and instruction scheduling. It focuses on integer programming, constraint programming, and partitioned Boolean quadratic programming as combinatorial techniques that are used in the area, are based on models, and can generate provably optimal code. A detailed, multidimensional classification of the surveyed approaches based on optimization technique, scope, model accuracy, and practical scalability enables a critical comparison of them and highlights developments, trends, and challenges.
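To make the combinatorial framing concrete, a toy sketch: register allocation can be cast as a search over assignments of variables to registers or memory ("spill"), subject to the constraint that interfering variables (live at the same time) may not share a register, minimizing total spill cost. The brute-force enumeration below stands in for the integer or constraint programming solvers such surveys cover; the variables, interference edges, and costs are invented:

```python
from itertools import product

def optimal_allocation(variables, interference, spill_cost, registers):
    """Exhaustively search assignments of variables to registers or memory
    ('spill'), minimizing total spill cost subject to interference."""
    best = None
    for assign in product(list(registers) + ["spill"], repeat=len(variables)):
        alloc = dict(zip(variables, assign))
        # Interfering variables may not share the same register.
        if any(alloc[u] == alloc[v] != "spill" for u, v in interference):
            continue
        cost = sum(spill_cost[v] for v in variables if alloc[v] == "spill")
        if best is None or cost < best[0]:
            best = (cost, alloc)
    return best

# Three variables, one register; a-b and b-c are live at the same time.
cost, alloc = optimal_allocation(
    ["a", "b", "c"], [("a", "b"), ("b", "c")],
    {"a": 2, "b": 5, "c": 1}, ["r0"])
```

With a single register, the optimal solution keeps the expensive-to-spill `b` in `r0` and spills `a` and `c` (total cost 3), rather than spilling `b` (cost 5); an optimal solver provably finds this trade-off, where a greedy heuristic might not.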
The gap is widening between the processor clock speeds of end-system architectures and network throughput capabilities. It is now physically possible to provide single-flow throughput of up to 100 Gbps, and 400 Gbps will soon be possible. Most current research into high-speed data networking focuses on managing expanding network capabilities within datacenter Local-Area Networks (LANs) or on efficiently multiplexing millions of relatively small flows through a Wide-Area Network (WAN). However, datacenter hyper-convergence places high-throughput networking workloads on general-purpose hardware, and distributed High-Performance Computing (HPC) applications require time-sensitive, high-throughput end-to-end flows (also referred to as elephant flows) over WANs. For these applications, the bottleneck is often the end-system, not the intervening network. Since the end-system bottleneck was uncovered, many techniques have been developed to address this mismatch, with varying degrees of effectiveness. In this survey, we describe the most promising techniques, beginning with network architectures and NIC design, continuing with operating and end-system architectures, and concluding with clean-slate protocol design.
Contemporary software systems operate in heterogeneous, dynamic, and distributed environments, where security needs change at runtime. The security solutions for such systems need to be adaptive to continuously satisfy the security goals of the software. While existing research on self-adaptive security has made notable advances towards designing and engineering self-adaptive security solutions, there is little work on taxonomic analysis and understanding of the reported research on architectural-level solutions and research gaps. We propose an architecture-centric taxonomy for mapping and comparing current research and identifying future research directions in this field. The proposed taxonomy has been used to review representative work on the architectural characteristics that self-adaptive security systems must maintain for their effective application in large-scale open environments. We reflect on the findings from the taxonomic analysis and discuss the design principles, research challenges, and limitations reported in the state of the art and practice. We outline directions for future research on architectural-level support for self-adaptive security systems in large-scale open environments.
Since the mid-1980s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase-ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (i) selecting the best optimizations and (ii) the phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, a fine-grained classification of the different approaches and, finally, the influential papers of the field.
Legacy encryption systems depend on sharing a key (public or private) between users, which is the norm for many systems today. However, this approach poses privacy concerns: the users or service providers holding the key have exclusive rights over the data, and, especially with popular cloud services, control over the privacy of sensitive data is lost. Moreover, untrusted servers, providers, and cloud operators can keep physically identifying elements of users long after a user ends the relationship with the service. Homomorphic Encryption (HE), in contrast, is a special kind of encryption scheme that allows any third party to operate on encrypted data without decrypting it in advance. The first plausible and achievable Fully Homomorphic Encryption (FHE) scheme was introduced by Craig Gentry in 2009. In this article, we survey the literature from the first HE scheme, RSA, in 1978 to recent improvements in FHE schemes proposed primarily to improve the practicality of Gentry's original scheme. To conclude, implementations and potential applications are shown, and future directions in the area of HE are discussed.
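The survey's starting point, unpadded ("textbook") RSA, already exhibits a multiplicative homomorphic property: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts, so a third party can compute on encrypted values without the private key. A toy demonstration with deliberately small, insecure parameters:

```python
# Toy unpadded RSA with insecure parameters (illustration only).
p, q = 61, 53
n = p * q                    # modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

m1, m2 = 7, 12
# Homomorphic property: Enc(m1) * Enc(m2) mod n is an encryption of m1*m2,
# because (m1^e * m2^e)^d = (m1*m2)^(e*d) = m1*m2 (mod n).
product_cipher = encrypt(m1) * encrypt(m2) % n
```

Fully homomorphic schemes, starting with Gentry's, extend this idea to support both addition and multiplication on ciphertexts, and hence arbitrary computation.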
Hate speech detection is defined as the automatic classification of a text message as containing hate speech or not. This survey organizes and describes the current state of the field by providing a structured overview of previous approaches, including the core algorithms, methods, and main features used. This work also discusses the complexity of the concept of hate speech, which is defined differently across platforms and contexts, and provides a unifying definition. This area has unquestionable potential for societal impact, and the development of shared resources, such as annotated datasets in multiple languages, is a crucial step for its advancement.
Accurately predicting the occurrence time of an event of interest is a critical problem in longitudinal data analysis. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after some time point, or instances that did not experience any event during the monitoring period. This phenomenon is called censoring, and it is typically handled using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to effectively overcome the censoring issue. In addition, many machine learning algorithms have been adapted to effectively handle survival data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of representative statistical methods along with the machine learning techniques used in survival analysis, and give a well-organized, comprehensive taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and illustrate several successful applications in various real-world fields. We hope that this paper will provide a better understanding of recent advances in survival analysis and offer some guidelines on how to apply these approaches to new problems that arise in applications with censored data.
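A representative statistical method for censored data is the Kaplan-Meier estimator of the survival function, which drops each observed event time's at-risk fraction into a running product while censored instances simply leave the risk set. The event times below are invented for the sketch:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).
    events[i] is 1 if the event was observed at times[i], 0 if censored."""
    surv, curve = 1.0, []
    at_risk = len(times)
    for t in sorted(set(times)):
        observed = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        leaving = sum(1 for ti in times if ti == t)  # events + censorings at t
        if observed:
            surv *= (at_risk - observed) / at_risk   # survive past time t
            curve.append((t, surv))
        at_risk -= leaving
    return curve

# Five instances: events == 0 marks censoring (outcome became unobservable).
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 0, 1, 1, 0])
```

The censored instances contribute to the risk set up to their censoring time but never trigger a drop in the curve, which is exactly how survival analysis avoids discarding or mislabeling them.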
Recognizing people by their gait has become increasingly popular, for several reasons. First, gait recognition works well remotely. Second, it can be performed from low-resolution videos and with simple instrumentation. Third, it can be done without the cooperation of individuals. Fourth, it works well even when other features, such as face and fingerprint, are hidden. Finally, gait features are typically difficult to impersonate. The recent ubiquity of smartphones, which capture gait patterns through accelerometers and gyroscopes, and advances in machine learning have opened new research directions and applications in gait recognition, yet a timely survey that addresses current advances is missing. In this article, we survey research works in gait recognition. In addition to recognition based on video, we address new modalities, such as recognition based on floor sensors, radar, and accelerometers; new approaches that include machine learning methods; and the challenges and vulnerabilities in this field. In addition, we propose a set of future research directions. Our review reveals the current state of the art and can be helpful to both experts and newcomers to gait recognition. Moreover, it lists future works and publicly available databases in gait recognition for researchers.
Deep learning is a machine learning technique that uses multiple layers to represent abstractions of data to build computational models. The field of machine learning is witnessing its golden era as deep learning slowly becomes the leader in this domain. Deep learning algorithms and techniques have already enabled several discoveries in video, image, audio, and text processing, to such an extent that our perception of information processing and communication has completely changed. However, there exists a gap in understanding behind this tremendously fast-paced domain, because it has never been represented from a multi-scope perspective. Thus, this article presents a comprehensive review of historical and recent state-of-the-art approaches in visual, speech, and audio processing, social network analysis, and natural language processing, followed by an in-depth analysis of pivotal and groundbreaking recent advances in deep learning applications. Moreover, deep learning has repeatedly been perceived as a silver bullet for all stumbling blocks in machine learning. Therefore, this article also reviews the issues faced in deep learning, such as unsupervised learning, black-box models, and online learning, and makes a case for how these challenges can be transformed into prolific future research avenues.
Computational creativity seeks to understand computational mechanisms that can be characterized as creative. Creation of new concepts is a central challenge for any creative system. In this paper, we outline different approaches to concept creation and then review conceptual representations relevant to concept creation. The conceptual representations are organized in accordance with two important perspectives on the distinctions between them. One distinction is between symbolic, spatial and connectionist representations. The other is between descriptive and procedural representations. These two distinctions are orthogonal. Additionally, conceptual representations used in particular creative domains, i.e. language, music, image and emotion, are reviewed separately. For each representation reviewed, we cover the inference it affords, the computational means of building it, and its application in concept creation.
Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit code's abundance of patterns. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.
Cyber attacks are increasingly menacing businesses. Based on a literature review and publicly available reports, this paper develops a comprehensive and systematic framework of the cybercrime business. A value chain model is constructed and used to describe 25 key value-added activities, which can be offered "as a service" for use in a cyber attack. Understanding the specialization, commercialization, and cooperation of services for cyber attacks helps to anticipate emerging cyber attack services. Finally, this framework can help to build a more cyber-immune system by targeting cybercrime control points and assigning defense responsibilities to encourage collaboration.
Monitoring the ``physics'' of control systems to detect attacks is a growing area of research. In its basic form, a security monitor creates time-series models of sensor readings for an industrial control system and identifies anomalies in these measurements in order to identify potentially false control commands or false sensor readings. In this paper, we review previous work on physics-based anomaly detection based on a unified taxonomy that allows us to identify limitations and unexplored challenges, and we propose new solutions.
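In its simplest form, the physics-based monitor described above can be sketched as a residual test: predict the next sensor reading from a physical model and raise an alarm when the observed reading deviates beyond a threshold. The tank model, readings, and threshold below are invented for illustration; deployed monitors use richer time-series models and stateful change-detection statistics:

```python
def detect_anomalies(readings, predict, threshold):
    """Flag time steps where the residual between the physical model's
    prediction and the sensor reading exceeds the threshold."""
    alarms, state = [], readings[0]
    for t in range(1, len(readings)):
        residual = abs(readings[t] - predict(state))
        if residual > threshold:
            alarms.append(t)          # possible false reading or command
        else:
            state = readings[t]       # only trusted readings update the state
    return alarms

# Toy tank model: the level should rise by 1.0 per step under constant inflow.
predict = lambda level: level + 1.0
readings = [0.0, 1.0, 2.1, 3.0, 9.0, 4.0]
```

Here the spoofed reading of 9.0 at step 4 violates the physics of the process (a tank level cannot jump by 6 units in one step) and is flagged, while small sensor noise passes.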
Mulsemedia - multiple sensorial media - makes possible the inclusion of layered sensory stimulation and interaction through multiple sensory channels. The recent upsurge in technology and wearables provides mulsemedia researchers with a vehicle for boundless choice. However, in order to build systems that integrate various senses, there are still some issues that need to be addressed. This review deals with mulsemedia topics that remained insufficiently explored by previous work, with a focus on the multi-multi (multiple media, multiple senses) perspective, where multiple types of media engage multiple senses. Moreover, it addresses the evolution of previously identified challenges in this area and formulates new directions for exploration.
The Internet of Things (IoT) envisions a world-wide, interconnected network of smart physical entities. These physical entities generate a large amount of data in operation, and as the IoT gains momentum in terms of deployment, the combined scale of those data seems destined to continue to grow. Increasingly, applications for the IoT involve analytics. Data analytics is the process of deriving knowledge from data, generating value, such as actionable insights, from them. This article reviews work in the IoT and big data analytics from the perspective of their utility in creating efficient, effective, and innovative applications and services for a wide spectrum of domains. We review the broad vision for the IoT as it is shaped in various communities, examine the application of data analytics across IoT domains, provide a categorisation of analytic approaches, and propose a layered taxonomy from IoT data to analytics. This taxonomy provides us with insights into the appropriateness of analytical techniques, which in turn shapes a survey of enabling technology and infrastructure for IoT analytics. Finally, we look at some trade-offs for analytics in the IoT that can shape future research.
Iris recognition is increasingly used in large-scale applications. As a result, presentation attack detection for iris recognition takes on fundamental importance. This survey covers the diverse research literature on this topic. Different categories of presentation attack are described and placed in an application-relevant framework, and the state of the art in detecting each category of attack is summarized. One conclusion from this is that presentation attack detection for iris recognition is not yet a solved problem. Datasets available for research are described, research directions for the near- and medium-term future are proposed, and a short list of recommended readings is provided.
Intrusion Detection Systems (IDS) are key components for securing critical infrastructures, capable of detecting malicious activities on networks or hosts. However, the efficiency of an IDS depends primarily on both its configuration and its precision. The large amount of network traffic that needs to be analyzed, in addition to the increase in attack sophistication, renders the optimization of intrusion detection an important requirement for infrastructure security, and a very active research subject. In the state of the art, a number of approaches have been proposed to improve the efficiency of intrusion detection and response systems. In this paper, we review the works relying on decision-making techniques focused on game theory and Markov Decision Processes to analyze the interactions between the attacker and the defender, and classify them according to the type of the optimization problem they address. While these works provide valuable insights for decision making, we discuss the limitations of these solutions as a whole, in particular regarding the hypotheses in the models and the validation methods. We also propose future research directions to improve the integration of game theoretic approaches into IDS optimization techniques.
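The attacker-defender interaction mentioned above can be made concrete with a toy one-shot zero-sum game. The sketch below is purely illustrative: the two actions per player and the payoff values are invented for the example and are not drawn from any particular model in the surveyed literature.

```python
# Toy zero-sum game between an IDS defender (rows) and an attacker
# (columns). Entries are the defender's payoff; the attacker's payoff
# is the negation. All values here are illustrative assumptions.
PAYOFF = {
    ("monitor", "attack"): 3,   # attack detected
    ("monitor", "idle"):  -1,   # monitoring cost wasted
    ("sleep",   "attack"): -5,  # attack missed
    ("sleep",   "idle"):    0,
}

DEFENDER_ACTIONS = ("monitor", "sleep")
ATTACKER_ACTIONS = ("attack", "idle")

def maximin_defense():
    """Pure-strategy security level: pick the defender action whose
    worst-case payoff over all attacker responses is highest."""
    best_action, best_value = None, float("-inf")
    for d in DEFENDER_ACTIONS:
        worst = min(PAYOFF[(d, a)] for a in ATTACKER_ACTIONS)
        if worst > best_value:
            best_action, best_value = d, worst
    return best_action, best_value

print(maximin_defense())  # ('monitor', -1)
```

With these payoffs, monitoring guarantees at worst -1, while sleeping risks -5, so the maximin defender always monitors; richer models in the surveyed works use mixed strategies and sequential (Markov) decision processes instead of this single-shot enumeration.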
Dynamic and partial reconfiguration are key differentiating capabilities of field programmable gate arrays (FPGAs). While they have been studied extensively in academic literature, they find limited use in deployed systems. We review FPGA reconfiguration, looking at architectures built for the purpose, and the properties of modern commercial architectures. We then investigate design flows and identify the key challenges in making reconfigurable FPGA systems easier to design. Finally, we look at applications where reconfiguration has found use, as well as propose new areas where this capability places FPGAs in a unique position for adoption.
The size of Linked Data is growing fast, thus a Linked Data management system must be able to deal with increasing amounts of data. Even though physically handling Linked Data using a relational table is possible, querying a giant triple table becomes very costly due to the multiple nested joins required for typical queries. In addition, the heterogeneity of Linked Data poses entirely new challenges to database systems. This article provides a comprehensive study of the state of the art in storing and querying RDF data. In particular, we focus on data storage techniques, indexing strategies, and query execution mechanisms. In addition, we provide a classification of existing systems and approaches. We also provide an overview of the various benchmarking efforts in this context and discuss some of the open problems in this domain.
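The cost of the "giant triple table" layout can be illustrated with a small sketch: even a three-pattern SPARQL-style query turns into multiple self-joins of the same table. The schema and data below are made up for the example.

```python
import sqlite3

# A single (subject, predicate, object) triple table holding toy data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("alice", "knows",   "bob"),
    ("bob",   "knows",   "carol"),
    ("alice", "livesIn", "paris"),
    ("bob",   "livesIn", "paris"),
])

# The pattern  ?x knows ?y . ?x livesIn ?c . ?y livesIn ?c
# ("pairs of acquaintances living in the same city") needs the
# triple table joined with itself once per pattern:
rows = conn.execute("""
    SELECT t1.s, t1.o, t2.o
    FROM triples t1
    JOIN triples t2 ON t2.s = t1.s AND t2.p = 'livesIn'
    JOIN triples t3 ON t3.s = t1.o AND t3.p = 'livesIn'
                   AND t3.o = t2.o
    WHERE t1.p = 'knows'
""").fetchall()
print(rows)  # [('alice', 'bob', 'paris')]
```

Each additional triple pattern adds one more self-join, which is exactly what the indexing strategies and alternative storage layouts surveyed in the article aim to mitigate.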
We humans are able to identify other people even in voice disguise conditions. However, we are not immune to all voice changes when trying to identify people from voice. Likewise, automatic speaker recognition systems can also be deceived by voice imitation and other types of disguise. Taking into account the voice disguise classification into the combination of two different categories (deliberate/non-deliberate and electronic/non-electronic), this survey provides a literature review on the influence of voice disguise in the automatic speaker recognition task and the robustness of these systems to such voice changes. Additionally, the survey addresses existing applications dealing with voice disguise and analyses some issues for future research.
Articular cartilage (AC) is flexible, soft yet stiff tissue that can be visualized and interpreted using magnetic resonance imaging (MRI) for the assessment of knee osteoarthritis (OA). Segmentation of AC from MR images is a challenging task that has been investigated widely. The development of computational methods to segment AC is highly dependent on the various image parameters, quality, tissue structure and acquisition protocol involved. This review focuses on the challenges faced during AC segmentation from MR images, followed by a discussion of computational methods for semi/fully automated approaches, whilst performance parameters and their significance have also been explored. Furthermore, hybrid approaches used to segment AC are reviewed. This review indicates that, despite the challenges in AC segmentation, semi-automated methods utilizing advanced computational techniques such as active contours and clustering have shown significant accuracy. Fully automated AC segmentation methods have obtained moderate accuracy and show suitability for extensive clinical studies, whilst advanced methods are being investigated that have led to significantly better sensitivity. In conclusion, this review indicates that research in AC segmentation from MR images is moving towards the development of fully automated methods using advanced multi-level, multi-data and multi-approach techniques to provide assistance in clinical studies.
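The clustering idea behind intensity-based segmentation can be sketched in miniature: a 1-D k-means that splits voxel intensities into tissue classes. Real AC segmentation operates on full MR volumes with far richer features and careful initialisation; the intensity values and class count below are invented for illustration.

```python
# Minimal 1-D k-means sketch of intensity clustering, the simplest
# relative of the clustering-based segmentation methods discussed above.
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar intensities into k classes; returns (centers, labels)."""
    centers = sorted(values[:k])  # naive initialisation from the first k values
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # Recompute centers as cluster means (keep old center if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
    return centers, labels

# Three dark and three bright "voxels" separate into two tissue classes.
centers, labels = kmeans_1d([10, 12, 11, 50, 52, 49])
print(labels)  # [0, 0, 0, 1, 1, 1]
```

Semi-automated pipelines in the surveyed literature combine such clustering with spatial models (e.g. active contours) precisely because raw intensity alone cannot separate cartilage from neighbouring tissues of similar brightness.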