
Pricing Portfolio Credit Derivatives by Means of Evolutionary Algorithms by Hager, Svenja (eBook)

  • Publication date: 08.09.2008
  • Publisher: Gabler
eBook (PDF)
71,39 €
incl. statutory VAT
Available immediately via download

Available online

Pricing Portfolio Credit Derivatives by Means of Evolutionary Algorithms

Svenja Hager aims at pricing non-standard, illiquid portfolio credit derivatives that are related to standard CDO tranches with the same underlying portfolio of obligors. Instead of assuming a homogeneous dependence structure between the default times of different obligors, as is assumed in the standard market model, the author focuses on the use of heterogeneous correlation structures.

Dr. Svenja Hager completed her doctorate under Prof. Dr.-Ing. Rainer Schöbel at the Chair of Business Administration, in particular Corporate Finance, at the University of Tübingen. She works as a credit and market risk expert.

Product information

    Format: PDF
    Copy protection: Adobe DRM
    Number of pages: 160
    Publication date: 08.09.2008
    Language: English
    ISBN: 9783834997029
    Publisher: Gabler
    File size: 15,408 kB

Pricing Portfolio Credit Derivatives by Means of Evolutionary Algorithms

Chapter 4 Optimization by Means of Evolutionary Algorithms (pp. 73-74)

4.1 Introduction

In the preceding Chapter 3, we presented a possible explanation for the inability of the standard market approach to fit quoted CDO tranche prices and to model the correlation smile. We suggested overcoming this deficiency of the standard market model by means of non-flat dependence structures. In the subsequent Chapter 5, we will explain how a correlation matrix can be derived from observed tranche spreads such that all tranche spreads of the CDO structure are reproduced simultaneously. This idea can be formulated as an optimization problem. The present Chapter 4 therefore addresses optimization algorithms. Life in general, and the domain of finance in particular, confronts us with many opportunities for optimization. Optimization is the process of searching for the optimal solution in a set of candidate solutions, i.e. the search space.

Optimization theory is a branch of mathematics which encompasses many different methodologies of minimization and maximization. In this chapter we represent optimization problems as maximization problems, unless mentioned otherwise. The function to be maximized is called the objective function. Optimization methods are similar to root-finding approaches, but they are generally more intricate. The idea behind root finding is to search for the zeros of a function, while the idea behind optimization is to search for the zeros of the objective function's derivative. However, the derivative often does not exist or is hard to find.
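A minimal sketch of this root-finding view of optimization: the maximum of an illustrative objective f(x) = -(x - 2)^2 + 3 (a hypothetical choice, not taken from the book) is located by finding the zero of its derivative with a simple bisection.

```python
def f(x: float) -> float:
    """Illustrative concave objective with its maximum at x = 2."""
    return -(x - 2.0) ** 2 + 3.0

def f_prime(x: float) -> float:
    """Derivative of f; its zero marks the maximizer of f."""
    return -2.0 * (x - 2.0)

def bisect(g, lo: float, hi: float, tol: float = 1e-10) -> float:
    """Find a zero of g in [lo, hi], assuming g(lo) and g(hi) differ in sign."""
    assert g(lo) * g(hi) <= 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid  # sign change in the left half
        else:
            lo = mid  # sign change in the right half
    return 0.5 * (lo + hi)

x_star = bisect(f_prime, -10.0, 10.0)
print(x_star, f(x_star))  # ~2.0 and ~3.0: the zero of f' maximizes f
```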

Another difficulty with optimization is to determine whether a given optimum is the global or only a local optimum. There are many different types of optimization problems: they can be one- or multidimensional, static or dynamic, discrete or continuous, constrained or unconstrained. Sometimes even the objective function is unknown. In line with the high number of different optimization problems, many different standard approaches have been developed for finding an optimal solution. Standard approaches are methods that are developed for a certain class of problems (though not specifically designed for an actual problem) and that do not use domain-specific knowledge in the search procedure. In the case of a discrete search space, the simplest optimization method is the total enumeration of all possible solutions.

Needless to say, this approach finds the global optimum but is very inefficient, especially when the problem size increases. Other approaches like linear or quadratic programming utilize special properties of the objective function. Possible solution techniques for nonlinear programming problems are local search procedures like the gradient-ascent method, provided that the objective function is real-valued and differentiable.
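To make the cost of total enumeration concrete, here is a small sketch over a discrete search space of binary strings; the toy objective and the encoding are illustrative assumptions, not taken from the book. Every one of the 2^n candidates is evaluated, so the global maximum is found by construction, but the candidate count explodes with n.

```python
from itertools import product

def objective(bits: tuple) -> int:
    """Toy objective on binary strings: reward ones, penalize adjacent pairs."""
    return sum(bits) - 2 * sum(a & b for a, b in zip(bits, bits[1:]))

n = 12  # 2**12 = 4096 candidates; n = 60 would already be infeasible
best = max(product((0, 1), repeat=n), key=objective)  # evaluate every candidate
print(best, objective(best))  # alternating 1s and 0s score highest here
```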

Most local search methods take the approach of heading uphill from a certain starting point. They differ in how they decide which direction to go and how far to move. If the search space is multi-modal (i.e. it contains several local extrema), all local search methods run the risk of getting stuck in a local optimum. But even if the objective function is not differentiable or the search space is multi-modal, there are still standard approaches that deal with these kinds of problems.
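The following sketch illustrates this risk of getting stuck: plain gradient ascent on a two-peaked objective ends at a different peak depending on the starting point. The objective, the step size, and the starting points are illustrative assumptions, not taken from the book.

```python
import math

def f(x: float) -> float:
    """Two peaks: a local one near x = -1.5 and the global one near x = 1.5."""
    return math.exp(-(x - 1.5) ** 2) + 0.6 * math.exp(-(x + 1.5) ** 2)

def f_prime(x: float) -> float:
    """Analytic derivative of f."""
    return (-2.0 * (x - 1.5) * math.exp(-(x - 1.5) ** 2)
            - 1.2 * (x + 1.5) * math.exp(-(x + 1.5) ** 2))

def gradient_ascent(x: float, step: float = 0.1, iters: int = 500) -> float:
    for _ in range(iters):
        x += step * f_prime(x)  # head uphill along the gradient
    return x

for x0 in (-2.0, 0.1, 2.0):
    x = gradient_ascent(x0)
    print(f"start {x0:+.1f} -> x = {x:+.3f}, f(x) = {f(x):.3f}")
# Starts left of the valley converge to the local peak (~ -1.5);
# starts to the right of it find the global peak (~ +1.5).
```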

