Background: Recurrence is a key aspect of breast cancer behaviour, intrinsically related to mortality. Despite its relevance, it is rarely recorded in breast cancer datasets, which makes research on its prediction more difficult. Objectives: To evaluate the performance of machine learning techniques applied to the prediction of breast cancer recurrence. Material and Methods: Review of published works that used machine learning techniques on local and open-source databases between 1997 and 2014. Results: The review showed that it is difficult to obtain a representative dataset for breast cancer recurrence and that there is no consensus on the best set of predictors for this disease. High accuracy results are often achieved, yet at the cost of sensitivity. The missing-data and class-imbalance problems are rarely addressed, and the chosen performance metrics are often inappropriate for the context. Discussion and Conclusions: Although different techniques have been used, prediction of breast cancer recurrence is still an open problem. The combination of different machine learning techniques, along with the definition of standard predictors for breast cancer recurrence, seems to be the main future direction for obtaining better results.
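The accuracy-versus-sensitivity trade-off mentioned above can be made concrete with a small sketch (the numbers are illustrative, not from any study in the review): on an imbalanced recurrence dataset, a model that never predicts recurrence still scores high accuracy.

```python
# Illustrative sketch (hypothetical numbers): why high accuracy can hide
# poor sensitivity on an imbalanced breast cancer recurrence dataset.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sensitivity(y_true, y_pred):
    # Recall on the positive (recurrence) class only.
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    return sum(p == 1 for _, p in positives) / len(positives)

# 90 non-recurrence cases (0) and 10 recurrence cases (1).
y_true = [0] * 90 + [1] * 10
# A trivial model that always predicts "no recurrence".
y_pred = [0] * 100

print(accuracy(y_true, y_pred))     # 0.9 -- looks good
print(sensitivity(y_true, y_pred))  # 0.0 -- misses every recurrence
```

This is why the review argues that accuracy alone is an inappropriate metric in this context: the clinically important class is exactly the one such a model ignores.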
The popularity and development of wireless devices have led to a demand for widespread high-speed Internet access, including access for vehicles and other modes of high-speed transportation. The most widely deployed method for providing IP services to mobile devices is the mobile Internet Protocol, which includes a handover process that lets a mobile device maintain its IP session while it switches between points of access. However, the mobile IP handover causes performance degradation due to its disruptive latency and high packet drop rate. This is particularly problematic for vehicles, which are forced to transition between access points more frequently because of their higher speeds and the frequent topological changes in vehicular networks. In this paper, we discuss the mobile IP handover solutions found in the related literature and their potential for resolving issues pertinent to vehicular networks. First, we provide an overview of the mobile IP handover and its problematic components. This is followed by a categorization of and comparison between the different mobile IP handover solutions, with an analysis of their benefits and drawbacks.
The security research community has invested significant effort in improving the security of Android applications over the past half decade. This effort has addressed a wide range of problems and resulted in the creation of many tools for application analysis. In this paper, we perform the first systematization of Android security research that analyzes applications, characterizing the work published in more than 17 top venues since 2010. We categorize each paper by the types of problems it solves, highlight areas that have received the most attention, and note whether tools were publicly released for each effort. Of the released tools, we then evaluate a representative sample to determine how well application developers can apply the results of our community's efforts to improve their products. We find not only that significant work remains to be done in terms of research coverage, but also that the tools suffer from significant issues, ranging from lack of maintenance to the inability to produce functional output for applications with known vulnerabilities. We close by offering suggestions on how the community can more successfully move forward.
A survey was conducted to provide a state of the art of authentication and communications security implementations in online banking. Comparisons are made with previous works. Results indicate that the security of SSL/TLS implementations differs greatly between regions, as does the preference for specific (single- or multi-factor) authentication schemes in both home and mobile banking. Based on adoption and technological trends, three phases of online banking development are identified. It is predicted that mobile banking will enter a third phase, characterized by more uniform development of mobile banking applications across different platforms to reduce cost and increase adoption.
Cybercriminal activity has exploded in the past decade, with diverse threats ranging from phishing attacks to botnets and drive-by downloads afflicting millions of computers worldwide. In response, a volunteer defense has emerged, led by security companies, infrastructure operators and vigilantes. This reactive force does not concern itself with making proactive upgrades to the cyber infrastructure. Instead, it operates on the front lines by remediating infections as they appear. We construct a model of the abuse reporting infrastructure in order to explain how voluntary action against cybercrime functions today, in hopes of improving our understanding of what works and how to make remediation more effective in the future. We examine the incentives to participate among data contributors, affected resource owners and intermediaries. Finally, we present a series of key attributes that differ among voluntary actions to investigate further through experimentation, pointing toward a research agenda that could establish causality between interventions and outcomes.
Security isolation is a foundation of computing systems that enables resilience to different forms of attacks. This article seeks to understand security isolation by systematizing its many characteristics. We provide a hierarchical classification structure for grouping isolation techniques. At the top level, we consider two principal aspects: mechanism and policy. Each aspect is broken down into salient dimensions that describe key properties. We apply our classification to more than 80 papers that cover a breadth of security isolation techniques and evaluate trade-offs. Finally, we motivate the creation of smart security isolation and highlight the open issues that will enable it.
Cloud computing enables users to provision resources on demand and execute applications in a way that meets their requirements by choosing virtual resources that fit their application resource needs. It then becomes the task of cloud resource providers to accommodate these virtual resources on physical resources. This is a fundamental challenge in cloud computing, as resource providers need to map virtual resources onto physical resources in a way that takes the providers' optimization objectives into account. This paper surveys the relevant body of literature that deals with this mapping problem and how it can be addressed in different scenarios and through different objectives and optimization techniques. The evaluation aspects of different solutions are also considered. The paper aims both to identify and to classify research done in the area, adopting a categorization that can enhance understanding of the problem.
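The mapping problem described above is often treated as a variant of bin packing. As a minimal sketch of one common heuristic (first-fit decreasing, with hypothetical VM names and a single CPU dimension, not any specific provider's scheme):

```python
# Minimal sketch of a first-fit-decreasing placement heuristic.
# VM names, demands, and the single-resource model are illustrative.

def first_fit_decreasing(vm_demands, host_capacity):
    """Place each VM (sorted by decreasing demand) on the first host
    with enough free capacity, opening a new host when none fits."""
    hosts = []       # remaining capacity of each physical host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:
            hosts.append(host_capacity - demand)  # open a new host
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

vms = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 2, "vm5": 1}
placement, n_hosts = first_fit_decreasing(vms, host_capacity=6)
print(n_hosts)  # 2 hosts suffice for 12 units of demand
```

Real providers optimize over many dimensions (CPU, memory, network, energy) and objectives, which is precisely the design space the surveyed literature explores.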
Today, soft errors are one of the major design technology challenges at the 22 nm technology node and beyond. This paper introduces the soft error problem from the perspective of processor design. It also provides a survey of existing soft-error mitigation methods across the levels of design abstraction involved in processor design: the device level, the circuit level, the architectural level, and the program level.
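As a concrete illustration of one program-level mitigation technique covered by such surveys, triple modular redundancy (TMR) executes a computation three times and votes, so a single soft-error-corrupted result is outvoted (the bit-flip simulation below is a hypothetical example):

```python
# Illustrative sketch of program-level triple modular redundancy (TMR):
# run a computation three times and return the majority result, masking
# a single transient fault (soft error) in one execution.

def tmr(compute):
    """Run `compute` three times and majority-vote the results."""
    a, b, c = compute(), compute(), compute()
    # With at most one faulty result, two of the three values agree.
    return a if a == b or a == c else b

# Simulate a transient bit flip corrupting the second of three runs.
results = iter([42, 42 ^ (1 << 3), 42])  # bit 3 flipped once
voted = tmr(lambda: next(results))
print(voted)  # 42 -- the single upset is masked
```

The same voting idea appears at the circuit and architectural levels (e.g., triplicated latches or redundant functional units), traded off against area, power, and performance overheads.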
Menus are used for exploring and selecting commands in interactive applications. They are widespread in current systems and used by a large variety of users. As a consequence, they have motivated many studies in Human-Computer Interaction (HCI). Given the large variety of menus, it is difficult to gain a clear understanding of the design possibilities and to ascertain their similarities and differences. In this article, we address a central challenge of menu design: the need to characterize the design space of menus. To do this, we propose a taxonomy of menu properties that structures existing work on visual menus. In order to highlight the impact of these properties on performance, we begin by refining the notion of performance into a list of quality criteria and by reviewing existing analytical and empirical methods for quality evaluation. The taxonomy of menu properties is an essential step toward the elaboration of advanced predictive models of menu performance and the optimization of menus. A key point of this work is its focus both on menus and on the properties of menus, which enables a fine-grained analysis in terms of performance.
In recent years, data analysis techniques have become more popular and have been adopted by several fields of knowledge. For example, data reduction methods for Wireless Sensor Networks (WSNs) often rely on prediction techniques to reach their goals. Given the constrained energy resources of sensor nodes, reducing the number of transmissions is a very effective way to increase a WSN's lifetime. In this work, we first introduce the traditional methods used for making predictions. Based on their characteristics, we then make a complete analysis and categorization of the existing prediction-based data reduction mechanisms that have been considered in WSNs. As an outcome of this analysis, we present a systematic procedure for selecting the best way to adopt predictions in WSN environments, based not only on the constraints imposed by the WSN, such as its architecture and the computational power of the deployed nodes, but also on the characteristics of the different prediction methods and the monitored data. Finally, we conclude the paper with a discussion of future challenges and open research directions in the use of data prediction methods to improve the overall performance of next-generation WSNs.
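The core idea behind prediction-based data reduction can be sketched as follows. In a typical dual-prediction scheme, the sensor node and the sink run the same predictor, and the node transmits only when the real reading deviates from the prediction beyond a tolerated error; the predictor below (last transmitted value) and the temperature trace are illustrative assumptions, not a specific surveyed mechanism:

```python
# Hypothetical sketch of prediction-based data reduction in a WSN:
# node and sink share a naive "last transmitted value" predictor, and
# the node uses its radio only when the prediction error exceeds a
# tolerated threshold.

def suppressed_transmissions(readings, threshold):
    predicted = readings[0]          # both sides start from this value
    transmitted = [readings[0]]      # first reading must be sent
    for value in readings[1:]:
        if abs(value - predicted) > threshold:
            transmitted.append(value)  # correction sent to the sink
            predicted = value          # both sides update the predictor
        # otherwise: no transmission, the sink keeps its prediction
    return transmitted

temps = [20.0, 20.1, 20.2, 21.5, 21.6, 21.4, 23.0]
sent = suppressed_transmissions(temps, threshold=1.0)
print(len(sent), "of", len(temps), "readings transmitted")  # 3 of 7
```

The threshold directly trades reconstruction accuracy at the sink for energy savings at the node, which is one of the selection criteria the proposed procedure weighs against the WSN architecture and node capabilities.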
Evolutionary algorithms (EAs) are robust stochastic optimisers that perform well over a wide range of problems. Their robustness is mainly due to several adjustable parameters, such as mutation rate, crossover rate and population size. Algorithm parameters are usually problem-specific and often have to be tuned not only to the problem but even to the problem instance at hand to achieve optimal performance. In addition, research has shown that different parameter values may be optimal at different stages of the optimisation process. To address these issues, researchers have shifted their focus to adaptive parameter control, where parameter values are adjusted during the optimisation process based on the performance of the algorithm. These methods redefine parameter values repeatedly based on rules that decide how to make the best use of feedback from the optimisation algorithm. In this survey, we systematically investigate the state of the art in adaptive parameter control. The approaches are classified using a new conceptual model which subdivides the process of adapting parameter values into four steps that are present, explicitly or implicitly, in all existing approaches that tune parameters dynamically during the optimisation process. The analysis reveals the major focus areas of adaptive parameter control research as well as gaps and potential directions for further development in this area.
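A classic concrete instance of feedback-driven parameter control is Rechenberg's 1/5 success rule for adapting the mutation step size of a (1+1)-ES. The sketch below is illustrative only: the objective (minimise x^2), the adaptation window, the multiplicative constants and the seed are all assumptions, not taken from the survey:

```python
# Sketch of adaptive parameter control via the 1/5 success rule:
# the mutation step size sigma is adjusted from the observed success
# rate of recent mutations. Problem and constants are illustrative.
import random

def one_fifth_rule_es(iterations=200, window=20, seed=1):
    random.seed(seed)
    x, sigma = 5.0, 1.0        # start far from the optimum at 0
    successes = 0
    for t in range(1, iterations + 1):
        candidate = x + random.gauss(0, sigma)
        if candidate * candidate < x * x:   # improvement on f(x) = x^2
            x = candidate
            successes += 1
        if t % window == 0:                 # feedback-driven update
            rate = successes / window
            sigma *= 1.5 if rate > 0.2 else 0.5
            successes = 0
    return x, sigma

x, sigma = one_fifth_rule_es()
print(abs(x), sigma)   # |x| has shrunk toward 0; sigma has adapted
```

This single rule already exhibits the four-step structure the conceptual model makes explicit: collect feedback (success count), aggregate it (success rate over a window), decide (compare against 1/5), and act (rescale sigma).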