3 Algorithmics of financial markets

3.4 Algorithmic techniques used in computational finance

Roughly speaking, it is possible to classify algorithmic techniques into three main categories: procedural, evolutionary and agent-based (simulation) techniques.

Financial trading (procedural) algorithmic classes

Regarding trading algorithms, Gyurkó (2010) discusses the key factors that drive the evolution of algorithmic classes (among others, he identifies the impact of changes in regulation, the development of new trading venues, technological innovations, the economic environment, changes in market microstructure, the availability and quality of data/information, and even advances in academic research). Algorithmic classes are considered to be (not necessarily disjoint) sets of systematic trading strategies with similar objectives. One can differentiate between algorithmic classes 1) by the types or number of assets involved, 2) by the markets on which the trading is executed, and 3) by the typical holding period/speed of turnover of capital. Accordingly, the following typology is proposed:


• Electronic trading – trading based on the electronic transmission of orders (market share above 95% of equity trading).

• Algorithmic trading / algorithmic execution – “systematic execution process and optimisation of buy-and-sell decisions once these buy-and-sell decisions have been made by another part of the systematic trading process or by a human portfolio manager”.

• Systematic trading – “computerized trading […] computer systems that process run-time data and make and execute buy-and-sell decisions”. The trade signal generation might be based on statistical arbitrage, event arbitrage, liquidity provision, etc.

• High frequency trading (HFT) – fast turnover of trading capital; typically systematic, algorithmic and electronic (the share of HFT was around 35% of equity trading in Europe in 2011).

• Low latency trading – orders are quickly routed and executed; however, this does not necessarily generate fast turnover of capital.

• Market making – automated liquidity provision; a particular subset of systematic trading.

• Exchange-side algorithms – e.g. order routing to dealers and trade execution.

Each of the above classes consists of different sets of strategies. Some might be applicable only on certain types of trading platforms; others are more universal.

Algorithmic execution is used by most market participants. While some investors might buy execution services from suppliers (e.g. investment banks), others might develop their own proprietary execution systems. The execution of hedge funds with medium- to long-term holding periods is not necessarily fully algorithmic, as part of the trades might still be allocated to human traders.159

Finally, it is important to recall that producing typologies or classifications of algorithms, that is, describing those algorithms in terms of a feature or set of features considered significant with respect to a certain application framework, is a practice that lacks guidelines or patterns as a theoretical activity within computer science. Case study 3.1 proposes a set of definitions regarding grouping or agglomerating concepts for algorithms. Case study 3.2, in turn, pays attention to the fact that algorithmic techniques in computational finance may simply not be enough to provide a suitable solution to the problem of pricing financial instruments, a “hell” not only because of the risk that irresponsible speculation represents to the health of the financial system and the economy as a whole, but also for computational finance itself, because of the technical difficulties embedded in the artificiality that marks the appearance of certain financial products.

Evolutionary / Agent-based (A-B) techniques

The application of evolutionary algorithms, especially genetic algorithms (GA), has been extensive. However, their use as the core of pricing algorithms has been limited.

Financial markets are particularly appealing applications for agent-based methods for a number of reasons. First, A-B methods are not affected by the fact that the key debates in finance about market efficiency and rationality are still unresolved. Second, financial time series contain several information patterns that are not well understood (the so-called ‘puzzles’). Third, financial markets provide plenty of pricing and volume data for analysis purposes. Fourth, when considering evolution, financial markets provide a good approximation to a crude fitness measure through wealth or return performance. Finally, there are strong connections to relevant experimental results that in some cases operate at the same time scales as actual financial markets.160
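As a minimal sketch of the last two points, the following Python fragment simulates zero-intelligence agents and ranks them by wealth, the crude fitness measure mentioned above. The agents, the linear price-impact rule and all parameters are illustrative assumptions, not a model taken from the cited literature.

    import random

    class Trader:
        # A zero-intelligence agent: buys or sells one unit at random.
        def __init__(self, cash=1000.0, shares=10):
            self.cash, self.shares = cash, shares

        def act(self, price):
            return random.choice((1, -1))  # 1 = buy one unit, -1 = sell one

    def simulate(agents, steps=1000, price=100.0, impact=0.01):
        for _ in range(steps):
            orders = [a.act(price) for a in agents]
            # Excess demand moves the price through a crude linear impact rule.
            price *= 1 + impact * sum(orders) / len(agents)
            for a, order in zip(agents, orders):
                a.cash -= order * price
                a.shares += order
        # Wealth (cash plus mark-to-market holdings) serves as the fitness
        # measure against which evolving strategies could be selected.
        return sorted(a.cash + a.shares * price for a in agents)

    fitness = simulate([Trader() for _ in range(100)])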

159 Gyurkó, Lajos Gergely. “The evolution of algorithmic classes”. Mathematical Institute, University of Oxford, 2010.

160 LeBaron, Blake. “Agent-based Computational Finance”. In Tesfatsion, Leigh and Judd, Kenneth (eds.), Handbook of Computational Economics, Vol. 2, Chapter 24, pp. 1187-1233. Elsevier, 2006.

Case study 3.1 ‒ Grouping or agglomerating concepts for algorithms 161

The concepts of “family of algorithms”, “fundamental algorithmic technique” and “algorithmic domain” are not intended to mean different ways of expressing the same idea, because even though these concepts are related, they are not exactly the same.

A family of algorithms is a set of algorithms that are grouped together because they address problems which might occur in very different settings but are still essentially alike in nature. So, the algorithms are related to each other not because they share a similar structure, but because the underlying way of applying heuristics is similar. As an example, take the family of “planning algorithms”: even though planning is a stage in many unrelated processes, planning itself involves a number of common cognitive tasks. Another example would be “same-rate approximation algorithms”, which refer to combinatorial optimization problems. Here the problem and the objective function may differ, but what defines the family is an approximation ratio (how well the algorithms perform), so the family is characterised in terms of performance (a family of algorithms that can achieve any constant approximation error in polynomial time is called a polynomial-time approximation scheme, or PTAS). As a final example, consider the context of social network analysis and the problem of measuring consensus: a family can be defined based on a strategy to compute consensus from networked data. It should be noted, then, that the way a certain family of algorithms is characterised or defined is highly variable.
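To make the performance-based characterisation concrete, consider the classic greedy 2-approximation for minimum vertex cover: any algorithm offering such a constant-factor guarantee belongs to the same family, whatever its internal structure. The sketch below is a generic illustration (the edge-list representation is an assumption), not an algorithm taken from the sources of this case study.

    def vertex_cover_2approx(edges):
        # Repeatedly pick an edge with both endpoints uncovered and add both.
        # The chosen edges form a matching, so the cover is at most twice
        # the optimum: the constant ratio is what defines the family.
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # On the path 1-2-3-4 the optimum is {2, 3}; the approximation
    # may return {1, 2, 3, 4}, within the guaranteed factor of 2.
    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))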

A fundamental algorithmic technique refers to a much more homogeneous abstraction involving design techniques for algorithms. Here, it is the target problems that share some significant properties, which makes them suitable to be solved using a particular technique. Take as an example two widely used techniques: greedy algorithms and dynamic programming. Both of them construct an optimal solution of a problem based on optimal solutions of smaller subproblems.

a) Greedy algorithms make the best locally optimal ‘choice’ by some simple heuristic: whatever improves a given situation at the moment. Most networking-related algorithms follow the greedy approach, which requires that the problem have both the greedy-choice property (a global optimum can be arrived at by selecting local optima) and optimal substructure (an optimal solution to the problem contains optimal solutions to subproblems). The two techniques are contrasted in the sketch after this list.

b) Dynamic programming algorithms also rely on transforming complex problems into simpler ones; their essential characteristic is the multistage nature of the optimisation procedure. The technique likewise requires optimal substructure (an optimal solution to an instance contains optimal solutions to its sub-instances) and overlapping subproblems (the subproblems are not independent, so a partition of the problem into subproblems is not possible, but the same instances are referred to over and over again during the recursion, so the number of actual subproblems is comparatively small).

161 Sources:

LaValle, Steven M. Planning Algorithms. Cambridge University Press, 2006.

‘Approximation Algorithms’. Accessed online 02.05.2016 at http://www.cs.yale.edu/homes/aspnes/pinewiki/ApproximationAlgorithms.html.

Brush, Eleanor R.; Krakauer, David C. and Flack, Jessica C. “A Family of Algorithms for Computing Consensus about Node State from Network Data”. PLoS Computational Biology. July, 2013. Available online at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3715438/.

Gildea, Dan. Lectures 15 (Dynamic programming) and 16 (Greedy algorithms). Design and Analysis of Efficient Algorithms course (CSc282), University of Rochester, 2004.

GeeksforGeeks. Dynamic Programming | Set 1 (Overlapping Subproblems Property) & Set 2 (Optimal Substructure Property). Available at http://www.geeksforgeeks.org/dynamic-programming-set-1/ and http://www.geeksforgeeks.org/dynamic-programming-set-2-optimal-substructure-property/.

Heineman, George T.; Pollice, Gary and Selkow, Stanley. Algorithms in a Nutshell. O’Reilly, 2009, p. 46.

Burgin, Mark. “Universality, Reducibility and Completeness”. In Durand-Lose, Jérôme and Margenstern, Maurice (eds.), Machines, Computations, and Universality: Proceedings of the 5th International MCU Conference, Orléans, France, pp. 30-31. Springer-Verlag, 2007.

‘Paradigm’. The Business Dictionary. Available at http://www.businessdictionary.com/definition/paradigm.html. Accessed online 04.05.2016.
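To make the contrast between the two techniques concrete, the sketch below solves the coin-change problem both ways. The example and all names are illustrative assumptions, not taken from the sources above: the greedy version is optimal only for ‘canonical’ coin systems, while the dynamic-programming version is correct for any system because it exploits optimal substructure and overlapping subproblems.

    from functools import lru_cache

    def change_greedy(amount, coins):
        # Greedy choice: always take the largest coin that still fits.
        count = 0
        for c in sorted(coins, reverse=True):
            count += amount // c
            amount %= c
        return count

    @lru_cache(maxsize=None)
    def change_dp(amount, coins):
        # Optimal substructure: the best change for `amount` extends the
        # best change for `amount - c`. Overlapping subproblems: the same
        # amounts recur during the recursion, so memoisation pays off.
        if amount == 0:
            return 0
        return min(1 + change_dp(amount - c, coins)
                   for c in coins if c <= amount)

    # With the non-canonical system (4, 3, 1), greedy gives 3 coins for
    # amount 6 (4+1+1), while the DP finds the optimum: 2 coins (3+3).
    print(change_greedy(6, (4, 3, 1)))  # -> 3
    print(change_dp(6, (4, 3, 1)))      # -> 2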

An algorithmic technique is therefore designed to tackle problems with certain clearly identifiable characteristics or properties (e.g. Dijkstra’s algorithm for finding the shortest paths to all nodes from a given start node, Prim’s algorithm for finding a minimum spanning tree and Huffman trees for data compression constitute instances not of a family of algorithms, but of a fundamental algorithmic technique, in this case the greedy one). As a result, the algorithms themselves also share similarities in their own structures, making it possible to outline a “template” of what they look like. Such a template for a dynamic programming algorithm, instantiated in the sketch after the list, would be:

1. Characterise the optimal substructure (describe the solution of a subproblem based on the solutions of smaller subproblems, usually as a recurrence).

2. Recursively define the value of an optimal solution.

3. Compute the value bottom-up (this might need an ordering of the subproblems from smallest to largest).

4. (If needed…) Construct an optimal solution.
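As a minimal illustration of how the template is instantiated, the sketch below applies the four steps to the classic rod-cutting problem; the problem choice and all names are assumptions for exposition, not drawn from the sources of this case study.

    def cut_rod(prices, n):
        # prices[i] is the price of a piece of length i (index 0 unused).
        r = [0] * (n + 1)            # r[j]: best revenue for a rod of length j
        first_cut = [0] * (n + 1)
        for j in range(1, n + 1):                # step 3: bottom-up over lengths
            for i in range(1, j + 1):
                # steps 1-2: recurrence r[j] = max_i (prices[i] + r[j - i])
                if prices[i] + r[j - i] > r[j]:
                    r[j] = prices[i] + r[j - i]
                    first_cut[j] = i
        cuts = []                                # step 4: reconstruct a solution
        while n > 0:
            cuts.append(first_cut[n])
            n -= first_cut[n]
        return r[-1], cuts

    print(cut_rod([0, 1, 5, 8, 9], 4))  # -> (10, [2, 2])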

An algorithmic domain is yet a different kind of grouping concept. Domains were initially intended as a generalisation from application problems to application areas, taking advantage of common traits, this time across entire application areas. Domains proved useful as a conceptualising aid, often providing their own proprietary terminology (even domain-specific languages, or DSLs) and promoting the generation of domain-specific software. Algorithmic domains provide computer science practitioners with knowledge about the application domains that are most amenable to certain types of algorithms. For example, an algorithmic domain defined for mobile web services would state that it is appropriate to use algorithms optimised for space usage and external storage rather than simply for the best overall time performance. Algorithmic domains and application domains are not to be confused: algorithmic domains are orthogonal to specific application domains, and are therefore more general and span several application domains. Examples of algorithmic domains would be “database management systems” or “artificial intelligence applications”, as each has its own set of algorithms that appear frequently. As one becomes familiar with algorithms, and with thinking of algorithms in terms of patterns, a natural taxonomy that maps application domains to algorithm domains might indeed emerge.

There are other important clustering concepts for algorithms; perhaps the most relevant one is that of a class of algorithms. A class is based on constructions of reduction and on the computational power of algorithms, rather than on similarities in algorithm structure or design technique, on the characteristics or properties of the problems, or on an application domain. This means that in order to understand a class of algorithms, we need to compare it with other classes. Informally, a class A of algorithms is “weaker than or equivalent to” a class B of algorithms if algorithms from B can compute everything that algorithms from A can compute; this is denoted A ≤ B. Similarly, two classes of algorithms are functionally equivalent (or simply equivalent) if they compute the same class of functions, or of relations in the case of non-deterministic algorithms. As a matter of fact, in computer theory families of classes are defined as well. Theoretically speaking, any mathematical or rigorously defined programming model defines a class of algorithms; whether this happens in practice, I do not know. It is also worth mentioning that, when talking about classes from a computer-theory perspective, a class is named a system of algorithms, formally defined in terms of function mappings, domains and images; such a system is the basis of a formal computational model.
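In symbols, writing F(A) for the class of functions computable by the algorithms in A, the two notions above can be stated as follows (a notational sketch consistent with the informal definitions, not a quotation from Burgin):

\[
A \le B \iff F(A) \subseteq F(B),
\qquad
A \equiv B \iff F(A) = F(B),\ \text{i.e. } A \le B \text{ and } B \le A.
\]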

Finally, for algorithmic paradigms there is no standard definition in the literature. A paradigm sits at a much higher level of abstraction of informational design: it involves the cognitive/intellectual framework in which algorithmic thinking takes place in the very first place. Ideally, a formal model of a paradigm should:

1) provide a set of rules for knowledge management; 2) provide a working framework for model uncertainty; 3) incorporate a procedure to designate an ordered sequence of the degrees of complexity of the models within the paradigm; 4) incorporate a general-scope methodology to produce models and algorithms; 5) ensure that all or nearly all of the patterns or models which can be derived from the paradigm’s core principles are expressible as working algorithms; and 6) from the practical point of view, make it possible to set upper or lower bounds on the scope of the paradigm in general, and on the limits of individual algorithms in the formal model in particular. Requirements 1, 2 and 4 define what is called intrinsic universality, which I will recall later.

Putting this discussion in the context of the problem of this thesis (pricing algorithms), there are two ways in which pricing algorithms can be conceptualised as a computational entity: either the algorithms themselves follow certain patterns or share some definitional properties, or it is the category of problems requiring pricing procedures that sets up the basic conceptual idea of a ‘pricing algorithm’.