Wednesday, 1 November 2017

FX Trading System Architecture


Trading Systems: Designing Your System - Part 1

The previous section of this tutorial looked at the elements that make up a trading system and discussed the advantages and disadvantages of using such a system in a live trading environment. In this section we build on that knowledge by examining which markets are especially well suited to system trading. We will then take a deeper look at the different genres of trading systems.

Trading in Different Markets

Equity Markets

The equity market is probably the most common market for trading, especially among beginners. In this arena, big players such as Warren Buffett and Merrill Lynch dominate, and traditional value and growth investing strategies are by far the most common. Nevertheless, many institutions have invested significantly in the design, development, and implementation of trading systems. Individual investors are joining this trend, albeit slowly. Here are some key factors to keep in mind when using trading systems in equity markets:

- The large number of available stocks allows traders to test systems on many different types of equities - everything from extremely volatile over-the-counter (OTC) stocks to non-volatile blue chips.
- The effectiveness of trading systems can be limited by the low liquidity of some equities, especially OTC and pink sheet issues.
- Commissions can eat into the profits of successful trades and can increase losses. OTC and pink sheet equities often incur additional commission fees.
- The main trading systems used are those that seek value - that is, systems that use various parameters to determine whether a security is undervalued compared to its past performance, its peers, or the market in general.

Foreign Exchange Market

The foreign exchange market, or forex, is the largest and most liquid market in the world. The world's governments, banks, and other large institutions trade trillions of dollars on the forex market every day. The majority of institutional traders on the forex rely on trading systems. The same goes for individuals on the forex, though some trade based on economic reports or interest payouts. Here are some key factors to keep in mind when using trading systems in the forex market:

- The liquidity in this market - due to the huge volume - makes trading systems more accurate and effective.
- There are no commissions in this market, only spreads. Therefore it is much easier to make many transactions without increasing costs.
- Compared to the number of available stocks or commodities, the number of currencies to trade is limited. But thanks to the availability of exotic currency pairs - that is, currencies from smaller countries - the range of volatility is not necessarily limited.
- The main trading systems used in forex are those that follow trends (a popular saying in the market is "the trend is your friend"), or systems that buy or sell on breakouts. This is because economic indicators often cause large price movements all at once.

Futures

Equity, forex, and commodity markets all offer futures trading. This is a popular vehicle for system trading because of the higher amount of leverage available and the increased liquidity and volatility.
However, these factors can cut both ways: they can either amplify your gains or amplify your losses. For this reason, the use of futures is usually reserved for advanced individual and institutional system traders. This is because trading systems capable of capitalizing on the futures market require much more customization, use more advanced indicators, and take much longer to develop.

Which Is Best?

It is up to the individual investor to decide which market is best suited for system trading - each has its own advantages and disadvantages. Most people are more familiar with the equity markets, and this familiarity makes developing a trading system easier. However, forex is frequently regarded as the superior platform on which to run trading systems - especially among more experienced traders. Moreover, if a trader decides to capitalize on increased leverage and volatility, the futures alternative is always open. Ultimately, the choice lies in the hands of the system developer.

Types of Trading Systems

Trend-Following Systems

The most common method of system trading is the trend-following system. In its most fundamental form, this system simply waits for a significant price movement, then buys or sells in that direction. This type of system banks on the hope that these price movements will sustain the trend.

Moving Average Systems

Frequently used in technical analysis, a moving average is an indicator that simply shows the average price of a stock over a given period of time. The essence of trends is derived from this measurement. The most common way to determine entry and exit is a crossover. The logic behind it is simple: a new trend is established when the price falls below or rises above the historical average price (the trend). Here is a chart that plots both the price (blue line) and the 20-day moving average (red line) of IBM.

Breakout Systems

The fundamental concept behind this type of system is similar to that of a moving average system. The idea is that when a new high or low is established, the price movement is most likely to continue in the direction of the breakout. One indicator that can be used in determining breakouts is a simple Bollinger Band overlay. Bollinger Bands plot a moving average with bands above and below it, and breakouts occur when the price touches the edges of the bands. Here is a chart that plots price (blue line) and Bollinger Bands (gray lines) of Microsoft.
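Both rules translate directly into code. Below is a minimal sketch, assuming daily closing prices arrive as a pandas Series; the 20-day window and two-standard-deviation bands are common defaults rather than values prescribed by the article.

```python
import pandas as pd

def ma_crossover_signals(close: pd.Series, window: int = 20) -> pd.Series:
    """+1 on the bar where price crosses above its moving average, -1 below."""
    ma = close.rolling(window).mean()
    above = close > ma
    crossed = above != above.shift()
    signal = pd.Series(0, index=close.index)
    signal[crossed & above] = 1    # price broke above the average: buy
    signal[crossed & ~above] = -1  # price broke below the average: sell
    return signal.where(ma.notna(), 0)  # no signals until the window fills

def bollinger_breakout_signals(close: pd.Series, window: int = 20,
                               num_std: float = 2.0) -> pd.Series:
    """+1 on a close above the upper band, -1 on a close below the lower band."""
    mid = close.rolling(window).mean()
    std = close.rolling(window).std()
    signal = pd.Series(0, index=close.index)
    signal[close > mid + num_std * std] = 1
    signal[close < mid - num_std * std] = -1
    return signal
```

Both functions emit signals only on completed bars, which is precisely why such systems lag the actual turning points, as the disadvantages below make clear.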
Disadvantages of Trend-Following Systems

- Empirical decision-making required - When determining trends, there is always an empirical element to consider: the duration of the historical trend. For example, the moving average could cover the last 20 days or the last five years, so the developer must determine which is best for the system. Other factors to be determined are the average highs and lows in breakout systems.
- Lagging nature - Moving averages and breakout systems will always be lagging. In other words, they can never catch the exact top or bottom of a trend. This inevitably leads to a forfeiture of potential profits, which can sometimes be significant.
- Whipsaw effect - Among the market forces that are harmful to the success of trend-following systems, this is one of the most common. The whipsaw effect occurs when the moving average generates a false signal - that is, when the average dips just into range and the direction then suddenly reverses. This can lead to massive losses unless effective stop-loss and risk management techniques are employed.
- Sideways markets - Trend-following systems are, by nature, capable of making money only in markets that actually do trend. However, markets also move sideways, staying within a certain range for an extended period of time.
- Extreme volatility may occur - Occasionally, trend-following systems may experience extreme volatility, but the trader must stick with the system. The inability to do so will result in assured failure.

Countertrend Systems

Basically, the goal of a countertrend system is to buy at the lowest low and sell at the highest high. The main difference between this and the trend-following system is that the countertrend system is not self-correcting. In other words, there is no set time to exit positions, and this results in unlimited downside potential.

Types of Countertrend Systems

Many different types of systems are considered countertrend systems. The idea here is to buy when momentum in one direction begins to fade. This is most often calculated with oscillators. For example, a signal can be generated when stochastics or other relative strength indicators fall below certain points. There are other types of countertrend trading systems, but all of them share the same fundamental goal - to buy low and sell high.
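As a sketch of the oscillator logic just described, the following computes a Wilder-style RSI (one common relative strength indicator; the 14-bar period and 30/70 thresholds are conventional choices, not values from the article) and fades momentum at the extremes.

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Wilder's relative strength index, smoothed with an exponential mean."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

def countertrend_signals(close: pd.Series, period: int = 14,
                         oversold: float = 30, overbought: float = 70) -> pd.Series:
    """Buy when downward momentum looks exhausted, sell when upward momentum does."""
    r = rsi(close, period)
    signal = pd.Series(0, index=close.index)
    signal[r < oversold] = 1
    signal[r > overbought] = -1
    return signal
```

Note that nothing here closes a position: as the disadvantages below point out, a countertrend system is not self-correcting, so exits and stop-losses must be imposed separately.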
Disadvantages of Countertrend Systems

- Empirical decision-making required - One of the factors the system developer must decide on is the points at which the relative strength indicators fade.
- Extreme volatility may occur - These systems can also experience extreme volatility, and an inability to stick with the system despite this volatility will result in assured failure.
- Unlimited downside - As mentioned earlier, there is unlimited downside potential because the system is not self-correcting (there is no set time to exit positions).

Conclusion

The main markets for which trading systems are suited are the equity, forex, and futures markets. Each of these markets has its advantages and disadvantages. The two main genres of trading systems are trend-following and countertrend systems. Despite their differences, both types of systems require empirical decision-making on the part of the developer during their development stages. These systems are also subject to extreme volatility, which can demand some stamina - it is essential that the system trader stick with the system during these times. In the next installment, we take a closer look at how to design a trading system and discuss some of the software that system traders use to make their lives easier. (Trading Systems: Designing Your System - Part 2)

Trading Floor Architecture

Executive Overview

Increased competition, higher market data volume, and new regulatory requirements are some of the driving forces behind industry changes. Firms are trying to maintain their competitive edge by constantly changing their trading strategies and increasing the speed of trading. A viable architecture has to include the latest technologies from both network and application domains. It has to be modular to provide a manageable path to evolve each component with minimal disruption to the overall system. Therefore the architecture proposed by this paper is based on a services framework. We examine services such as ultra-low latency messaging, latency monitoring, multicast, computing, storage, data and application virtualization, trading resiliency, trading mobility, and thin client.

The solution to the complex requirements of the next-generation trading platform must be built with a holistic mindset, crossing the boundaries of traditional silos like business and technology or applications and networking. This document's main goal is to provide guidelines for building an ultra-low latency trading platform while optimizing the raw throughput and message rate for both market data and FIX trading orders. To achieve this, we are proposing the following latency reduction technologies:

- High-speed interconnect - InfiniBand or 10 Gbps connectivity for the trading cluster
- High-speed messaging bus
- Application acceleration via RDMA without application re-code
- Real-time latency monitoring and re-direction of trading traffic to the path with minimum latency

Industry Trends and Challenges

Next-generation trading architectures have to respond to increased demands for speed, volume, and efficiency. For example, the volume of options market data is expected to double after the introduction of options penny trading in 2007. There are also regulatory demands for best execution, which require the handling of price updates at rates that approach 1M msg/sec for exchanges. They also require visibility into the freshness of the data and proof that the client got the best possible execution. In the short term, speed of trading and innovation are key differentiators. An increasing number of trades are handled by algorithmic trading applications placed as close as possible to the trade execution venue. A challenge with these "black-box" trading engines is that they compound the volume increase by issuing orders only to cancel and re-submit them. The cause of this behavior is lack of visibility into which venue offers the best execution. The human trader is now a "financial engineer," a "quant" (quantitative analyst) with programming skills who can adjust trading models on the fly. Firms develop new financial instruments like weather derivatives or cross-asset class trades and they need to deploy the new applications quickly and in a scalable fashion. In the long term, competitive differentiation should come from analysis, not just knowledge. The star traders of tomorrow assume risk, achieve true client insight, and consistently beat the market (source IBM: www-935.ibmservicesusimcpdfge510-6270-trader.pdf).

Business resilience has been one major concern of trading firms since September 11, 2001.
Solutions in this space range from redundant data centers located in different regions and connected to multiple trading venues, to virtual trader solutions offering power traders most of the functionality of a trading floor at a remote location.

The financial services industry has some of the most demanding IT requirements. The industry is experiencing an architectural shift towards Services-Oriented Architecture (SOA), Web services, and virtualization of IT resources. SOA takes advantage of the increase in network speed to enable dynamic binding and virtualization of software components. This allows the creation of new applications without losing the investment in existing systems and infrastructure. The concept has the potential to revolutionize the way integration is done, enabling significant reductions in the complexity and cost of such integration (gigaspacesdownloadMerrilLynchGigaSpacesWP.pdf). Another trend is the consolidation of servers into data center server farms, while trader desks have only KVM extensions and ultra-thin clients (e.g., SunRay and HP blade solutions). High-speed metro area networks enable market data to be multicast between different locations, enabling the virtualization of the trading floor.

High-Level Architecture

Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high-performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area. Functionally, there are two application components in the enterprise trading environment, publishers and subscribers. The messaging bus provides the communication path between publishers and subscribers.

There are two types of traffic specific to a trading environment.

Market data - Carries pricing information for financial instruments, news, and other value-added information such as analytics. It is unidirectional and very latency sensitive, typically delivered over UDP multicast. It is measured in updates/sec and in Mbps. Market data flows from one or multiple external feeds, coming from market data providers like stock exchanges, data aggregators, and ECNs. Each provider has its own market data format. The data is received by feed handlers - specialized applications that normalize and clean the data - and is then sent to data consumers such as pricing engines, algorithmic trading applications, or human traders. Sell-side firms also send the market data to their clients: buy-side firms such as mutual funds, hedge funds, and other asset managers. Some buy-side firms may opt to receive direct feeds from exchanges, reducing latency.

Figure 1: Trading Architecture for a Buy Side/Sell Side Firm

There is no industry standard for market data formats. Each exchange has its own format. Financial content providers such as Reuters and Bloomberg aggregate different sources of market data, normalize them, and add news or analytics. Examples of consolidated feeds are RDF (Reuters Data Feed), RWF (Reuters Wire Format), and Bloomberg Professional Services Data.
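The normalization step that feed handlers perform can be pictured as mapping each vendor's proprietary fields onto one internal tick structure. The sketch below is purely illustrative: both vendor schemas are invented, since real formats such as RDF and RWF are proprietary.

```python
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    bid: float
    ask: float
    ts_ns: int  # event timestamp, nanoseconds since the epoch

# Hypothetical vendor schemas, stand-ins for real proprietary formats.
def from_vendor_a(msg: dict) -> Tick:
    return Tick(msg["sym"], msg["b"], msg["a"], msg["t_ns"])

def from_vendor_b(msg: dict) -> Tick:
    return Tick(msg["ticker"], msg["bidPx"], msg["askPx"], msg["exchTimeUs"] * 1000)

NORMALIZERS = {"vendorA": from_vendor_a, "vendorB": from_vendor_b}

def handle_raw(source: str, msg: dict) -> Tick:
    tick = NORMALIZERS[source](msg)
    # "Cleaning": reject crossed or non-positive quotes before republishing.
    if tick.bid <= 0 or tick.ask <= 0 or tick.bid > tick.ask:
        raise ValueError(f"bad quote from {source}: {tick}")
    return tick
```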
To deliver market data with lower latency, both vendors have released real-time market data feeds that are less processed and carry less analytics. With Bloomberg B-Pipe, Bloomberg decouples its market data feed from its distribution platform: a Bloomberg terminal is not required to receive B-Pipe. Wombat and Reuters feed handlers have announced support for B-Pipe. A firm may also decide to receive feeds directly from an exchange to reduce latency; the gains in transmission speed can be between 150 and 500 milliseconds. These feeds are more complex and more expensive, and the firm has to build and maintain its own ticker plant (financetechfeaturedshowArticle.jhtmlarticleID60404306).

Trading orders - This type of traffic carries the actual trades. It is bidirectional and very latency sensitive. It is measured in messages/sec and Mbps. The orders originate from a buy-side or sell-side firm and are sent to trading venues like an exchange or ECN for execution. The most common format for order transport is FIX (Financial Information eXchange, fixprotocol.org). The applications that handle FIX messages are called FIX engines, and they interface with order management systems (OMS). An optimization to FIX is called FAST (FIX Adapted for Streaming), which uses a compression schema to reduce message length and, in effect, reduce latency. FAST is targeted more at the delivery of market data and has the potential to become a standard. FAST can also be used as a compression schema for proprietary market data formats. To reduce latency, firms may opt to establish Direct Market Access (DMA). DMA is the automated process of routing a securities order directly to an execution venue, thereby avoiding intervention by a third party (towergroupresearchcontentglossary.jsppage1ampglossaryId383). DMA requires a direct connection to the execution venue.
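To make the FIX transport concrete, here is a hedged sketch of what a FIX engine assembles for a new order: a FIX 4.2 NewOrderSingle with the BodyLength (9) and CheckSum (10) fields computed per the protocol's rules. The CompIDs and order fields are placeholders, and a real engine would also manage ClOrdID (11), MsgSeqNum (34), SendingTime (52), and session-level messages.

```python
SOH = "\x01"  # FIX field delimiter

def fix_message(fields) -> str:
    """Assemble a FIX message: BodyLength counts the bytes after the 9= field,
    CheckSum is the byte sum of everything before the 10= field, mod 256."""
    body = SOH.join(f"{tag}={value}" for tag, value in fields) + SOH
    msg = f"8=FIX.4.2{SOH}9={len(body)}{SOH}" + body
    return msg + f"10={sum(msg.encode()) % 256:03d}{SOH}"

order = fix_message([
    (35, "D"),        # MsgType = NewOrderSingle
    (49, "BUYSIDE"),  # SenderCompID (placeholder)
    (56, "BROKER"),   # TargetCompID (placeholder)
    (55, "CSCO"),     # Symbol
    (54, "1"),        # Side = buy
    (38, "100"),      # OrderQty
    (40, "1"),        # OrdType = market
])
print(order.replace(SOH, "|"))  # render the delimiter visibly
```

FAST then shrinks exactly this kind of tag=value payload by applying field-encoding templates and compression.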
The messaging bus is middleware software from vendors such as Tibco, 29West, Reuters RMDS, or an open-source platform such as AMQP. The messaging bus uses a reliable mechanism to deliver messages. The transport can be done over TCP/IP (TibcoEMS, 29West, RMDS, and AMQP) or UDP/multicast (TibcoRV, 29West, and RMDS). One important concept in message distribution is the topic stream, which is a subset of market data defined by criteria such as ticker symbol, industry, or a certain basket of financial instruments. Subscribers join topic groups mapped to one or multiple sub-topics in order to receive only the relevant information. In the past, all traders received all market data; at current traffic volumes, this would be sub-optimal.

The network plays a critical role in the trading environment. Market data is carried to the trading floor, where the human traders are located, via a campus or metro area high-speed network. High availability and low latency, as well as high throughput, are the most important metrics. The high-performance trading environment has most of its components in the data center server farm. To minimize latency, the algorithmic trading engines need to be located in proximity to the feed handlers, FIX engines, and order management systems. An alternate deployment model has the algorithmic trading systems located at an exchange or a service provider with fast connectivity to multiple exchanges.

Deployment Models

There are two deployment models for a high-performance trading platform. Firms have the choice between:

- A data center of the trading firm (Figure 2) - This is the traditional model, where a full-fledged trading platform is developed and maintained by the firm, with communication links to all the trading venues. Latency varies with the speed of the links and the number of hops between the firm and the venues.

Figure 2: Traditional Deployment Model

- Co-location at the trading venue (exchanges, financial service providers (FSP)) (Figure 3) - The trading firm deploys its automated trading platform as close as possible to the execution venues to minimize latency.

Figure 3: Hosted Deployment Model

Services-Oriented Trading Architecture

We are proposing a services-oriented framework for building the next-generation trading architecture. This approach provides a conceptual framework and an implementation path based on modularization and minimization of inter-dependencies. This framework provides firms with a methodology to:

- Evaluate their current state in terms of services
- Prioritize services based on their value to the business
- Evolve the trading platform to the desired state using a modular approach

The high-performance trading architecture relies on the following services, as defined by the services architecture framework represented in Figure 4.

Figure 4: Service Architecture Framework for High Performance Trading

Ultra-Low Latency Messaging Service

This service is provided by the messaging bus, a software system that solves the problem of connecting many-to-many applications. The system consists of:

- A set of pre-defined message schemas
- A set of common command messages
- A shared application infrastructure for sending the messages to recipients; the shared infrastructure can be based on a message broker or on a publish/subscribe model

The key requirements for the next-generation messaging bus are (source 29West):

- Lowest possible latency (e.g., less than 100 microseconds)
- Stability under heavy load (e.g., more than 1.4 million msg/sec)
- Control and flexibility (rate control and configurable transports)

There are efforts in the industry to standardize the messaging bus. Advanced Message Queuing Protocol (AMQP) is an example of an open standard championed by J.P. Morgan Chase and supported by a group of vendors such as Cisco, Envoy Technologies, Red Hat, TWIST Process Innovations, Iona, 29West, and iMatix. Two of the main goals are to provide a simpler path to inter-operability for applications written on different platforms, and modularity so that the middleware can be easily evolved. In very general terms, an AMQP server is analogous to an e-mail server, with each exchange acting as a message transfer agent and each message queue as a mailbox. The bindings define the routing tables in each transfer agent. Publishers send messages to individual transfer agents, which then route the messages into mailboxes. Consumers take messages from mailboxes, which creates a powerful and flexible model that remains simple (source: amqp.careikiwikitiki-index.phppageOpenApproachWhyAMQP).
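The exchange/queue/binding model described above can be exercised with any AMQP broker. As a minimal sketch, assuming a local RabbitMQ broker and the pika client, a publisher tags each update with a routing key and a subscriber binds a private queue to just the symbols it wants:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="marketdata", exchange_type="topic")

# Publisher side: the routing key plays the role of a topic/subject name.
ch.basic_publish(exchange="marketdata",
                 routing_key="nasdaq.CSCO.last",
                 body=b"48.02")

# Subscriber side: an exclusive queue (the "mailbox") bound to one sub-topic.
queue = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="marketdata", queue=queue, routing_key="nasdaq.CSCO.*")
ch.basic_consume(queue=queue,
                 on_message_callback=lambda c, m, p, body: print(m.routing_key, body),
                 auto_ack=True)
ch.start_consuming()  # blocks; run publisher and subscriber as separate processes
```

The binding pattern is what implements the topic-stream idea: the broker, not the application, filters which updates reach which mailbox.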
Latency Monitoring Service

The main requirements for this service are:

- Millisecond (and finer) granularity of measurements
- Near-real-time visibility without adding latency to the trading traffic
- Ability to differentiate application processing latency from network transit latency
- Ability to handle high message rates
- A programmatic interface for trading applications to receive latency data, so that algorithmic trading engines can adapt to changing conditions
- Correlation of network events with application events for troubleshooting purposes

Latency can be defined as the time interval between when a trade order is sent and when the same order is acknowledged and acted upon by the receiving party.

Addressing the latency issue is a complex problem, requiring a holistic approach that identifies all sources of latency and applies different technologies at different layers of the system. Figure 5 depicts the variety of components that can introduce latency at each layer of the OSI stack. It also maps each source of latency to a possible solution and a monitoring solution. This layered approach gives firms a structured way of attacking the latency issue, whereby each component can be thought of as a service and treated consistently across the firm. Maintaining an accurate measure of the dynamic state of this time interval across alternative routes and destinations is of great assistance in tactical trading decisions. The ability to identify the exact location of delays, whether in the customer edge network, the central processing hub, or the transaction application level, significantly determines the ability of service providers to meet their trading service-level agreements (SLAs). For buy-side and sell-side firms, as well as for market data syndicators, the quick identification and removal of bottlenecks translates directly into enhanced trade opportunities and revenue.

Figure 5: Latency Management Architecture
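At the application layer, the latency definition above (order sent to acknowledgement received) is simple to instrument. A minimal sketch, assuming order IDs are unique and every ack can be matched to a send:

```python
import time

class LatencyTracker:
    """Match order acknowledgements to sends and report the interval."""

    def __init__(self):
        self._sent_ns = {}

    def on_order_sent(self, order_id: str) -> None:
        self._sent_ns[order_id] = time.perf_counter_ns()

    def on_ack_received(self, order_id: str) -> float:
        """Return order-to-ack latency in microseconds."""
        return (time.perf_counter_ns() - self._sent_ns.pop(order_id)) / 1_000

tracker = LatencyTracker()
tracker.on_order_sent("ORD-1")
# ... network round trip happens here ...
print(f"{tracker.on_ack_received('ORD-1'):.1f} us")
```

This captures only the application's view; separating network transit time from application processing time, as the requirements above demand, needs timestamps asserted in the network itself, which is what the tools below provide.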
Cisco Low-Latency Monitoring Tools

Traditional network monitoring tools operate with minutes or seconds of granularity. Next-generation trading platforms, especially those supporting algorithmic trading, require latencies of less than 5 ms and extremely low levels of packet loss. On a Gigabit LAN, a 100 ms microburst can cause 10,000 transactions to be lost or excessively delayed. Cisco offers its customers a choice of tools to measure latency in a trading environment: Bandwidth Quality Manager (BQM) (OEM from Corvil) and the Cisco AON-based Financial Services Latency Monitoring Solution (FSMS).

Bandwidth Quality Manager

Bandwidth Quality Manager (BQM) 4.0 is a next-generation network performance management product that enables customers to monitor and provision their network for controlled levels of latency and loss performance. While BQM is not exclusively targeted at trading networks, its microsecond visibility combined with intelligent bandwidth provisioning features make it ideal for these demanding environments.

Cisco BQM 4.0 implements a broad set of patented and patent-pending traffic measurement and network analysis technologies that give the user unprecedented visibility and understanding of how to optimize the network for maximum application performance. Cisco BQM is now supported on the product family of the Cisco Application Deployment Engine (ADE), the platform for Cisco network management applications.

BQM Benefits

Cisco BQM micro-visibility is the ability to detect, measure, and analyze latency-, jitter-, and loss-inducing traffic events down to microsecond levels of granularity with per-packet resolution. This enables Cisco BQM to detect and determine the impact of traffic events on network latency, jitter, and loss. Critical for trading environments is that BQM can support latency, loss, and jitter measurements one-way for both TCP and UDP (multicast) traffic. This means it reports seamlessly for both trading traffic and market data feeds. BQM allows the user to specify a comprehensive set of thresholds (against microburst activity, latency, loss, jitter, utilization, etc.) on all interfaces, and then operates a background rolling packet capture. Whenever a threshold violation or other potential performance degradation event occurs, Cisco BQM is triggered to store the packet capture to disk for later analysis. This allows the user to examine in full detail both the application traffic that was affected by the performance degradation ("the victims") and the traffic that caused it ("the culprits"). This can significantly reduce the time spent diagnosing and resolving network performance issues. BQM is also capable of providing detailed bandwidth and quality of service (QoS) policy provisioning recommendations, which the user can apply directly to achieve the desired network performance.

BQM Measurements Illustrated

To understand the difference between some of the more conventional measurement techniques and the visibility provided by BQM, we can look at some comparison graphs. In the first set of graphs (Figure 6 and Figure 7), we see the difference between the latency measured by BQM's Passive Network Quality Monitor (PNQM) and the latency measured by injecting ping packets every second into the traffic stream. In Figure 6, we see the latency reported by 1-second ICMP ping packets for real network traffic (divided by 2 to give an estimate of the one-way delay). It shows the delay comfortably below about 5 ms almost all of the time.

Figure 6: Latency Reported by 1-Second ICMP Ping Packets for Real Network Traffic

In Figure 7, we see the latency reported by PNQM for the same traffic at the same time. Here, by measuring the one-way latency of the actual application packets, we get a radically different picture: the latency is seen to hover around 20 ms, with occasional bursts far higher. The explanation is that because ping sends packets only every second, it completely misses most of the application traffic latency.
In fact, ping results typically indicate only the round-trip propagation delay rather than the realistic application latency across the network.

Figure 7: Latency Reported by PNQM for Real Network Traffic

In the second example (Figure 8), we see the difference in reported link load or saturation levels between a 5-minute average view and a 5 ms microburst view (BQM can report on microbursts down to about 10-100 nanosecond accuracy). The green line shows the average utilization at 5-minute averages to be low, maybe up to 5 Mbits/s. The dark blue plot shows the 5 ms microburst activity reaching between 75 Mbits/s and 100 Mbits/s, effectively the LAN speed. BQM shows this level of granularity for all applications, and it also gives clear provisioning rules that enable the user to control or neutralize these microbursts.

Figure 8: Difference Between a 5-Minute Average View and a 5 ms Microburst View

BQM Deployment in the Trading Network

Figure 9 shows a typical BQM deployment in a trading network.

Figure 9: Typical BQM Deployment in a Trading Network

BQM can then be used to answer these types of questions:

- Are any of my Gigabit LAN core links saturated for more than X milliseconds? Is this causing loss? Which links would most benefit from an upgrade to Etherchannel or 10 Gigabit speeds?
- What application traffic is causing the saturation of my 1 Gigabit links?
- Is any of the market data experiencing end-to-end loss?
- How much additional latency does the failover data center experience?
- Is this link sized correctly to deal with microbursts?
- Are my traders getting low latency updates from the market data distribution layer? Are they seeing any delays greater than X milliseconds?

Being able to answer these questions simply and effectively saves time and money in running the trading network. BQM is an essential tool for gaining visibility in market data and trading environments. It provides granular end-to-end latency measurements in complex infrastructures that experience high-volume data movement. Effectively detecting microbursts at sub-millisecond levels and receiving expert analysis on a particular event is invaluable to trading floor architects. Smart bandwidth provisioning recommendations, such as sizing and what-if analysis, provide greater agility to respond to volatile market conditions. As the explosion of algorithmic trading and increasing message rates continue, BQM, combined with its QoS tool, provides the capability of implementing QoS policies that can protect critical trading applications.

Cisco Financial Services Latency Monitoring Solution

Cisco and Trading Metrics have collaborated on latency monitoring solutions for FIX order flow and market data monitoring. Cisco AON technology is the foundation for a new class of network-embedded products and solutions that help merge intelligent networks with application infrastructure, based on either service-oriented or traditional architectures. Trading Metrics is a leading provider of analytics software for network infrastructure and application latency monitoring purposes (tradingmetrics).
The Cisco AON Financial Services Latency Monitoring Solution (FSMS) correlates two kinds of events at the point of observation: network events correlated directly with coincident application message handling, and trade order flow matched with market update events. Using time stamps asserted at the point of capture in the network, real-time analysis of these correlated data streams permits precise identification of bottlenecks across the infrastructure while a trade is being executed or market data is being distributed. By monitoring and measuring latency early in the cycle, financial companies can make better decisions about which network service - and which intermediary, market, or counterparty - to select for routing trade orders. Likewise, this knowledge allows more streamlined access to updated market data (stock quotes, economic news, etc.), which is an important basis for initiating, withdrawing from, or pursuing market opportunities.

The components of the solution are:

- AON hardware in three form factors: AON Network Module for Cisco 2600/2800/3700/3800 routers, AON Blade for the Cisco Catalyst 6500 series, and the AON 8340 Appliance
- Trading Metrics M&A 2.0 software, which provides the monitoring and alerting application, displays latency graphs on a dashboard, and issues alerts when slowdowns occur (tradingmetricsTMbrochure.pdf)

Figure 10: AON-Based FIX Latency Monitoring

Cisco IP SLA

Cisco IP SLA is an embedded network management tool in Cisco IOS which allows routers and switches to generate synthetic traffic streams which can be measured for latency, jitter, packet loss, and other criteria (ciscogoipsla). Two key concepts are the source of the generated traffic and the target. Both of these run an IP SLA "responder," which has the responsibility to timestamp the control traffic before it is sourced and returned by the target (for a round trip measurement). Various traffic types can be sourced within IP SLA; they are aimed at different metrics and target different services and applications. The UDP jitter operation is used to measure one-way and round-trip delay and report variations. As the traffic is time stamped on both sending and target devices using the responder capability, the round trip delay is characterized as the delta between the two timestamps. A new feature was introduced in IOS 12.3(14)T, IP SLA Sub Millisecond Reporting, which allows timestamps to be displayed with a resolution in microseconds, thus providing a level of granularity not previously available. This new feature has made IP SLA relevant to campus networks where network latency is typically in the range of 300-800 microseconds and the ability to detect trends and spikes (brief trends) based on microsecond granularity counters is a requirement for customers engaged in time-sensitive electronic trading environments. As a result, IP SLA is now being considered by significant numbers of financial organizations as they are all faced with requirements to:

- Report baseline latency to their users
- Trend baseline latency over time
- Respond quickly to traffic bursts that cause changes in the reported latency

Sub-millisecond reporting is necessary for these customers, since many campus networks and backbones are currently delivering under a millisecond of latency across several switch hops. Electronic trading environments have generally worked to eliminate or minimize all areas of device and network latency to deliver rapid order fulfillment to the business. Reporting that network response times are "just under one millisecond" is no longer sufficient; the granularity of latency measurements reported across a network segment or backbone needs to be closer to 300-800 microseconds, with a degree of resolution of 100 microseconds.
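Cisco's tooling aside, the mechanics of a synthetic probe are easy to picture. The sketch below is not IP SLA; it is a toy UDP prober in the same spirit, assuming a plain UDP echo service runs at the target address (a placeholder), reporting round-trip time, jitter, and loss:

```python
import socket
import statistics
import struct
import time

def probe(host: str = "192.0.2.10", port: int = 7,
          count: int = 100, interval_s: float = 0.02) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts_us = []
    for seq in range(count):
        # Each probe carries a sequence number and a send timestamp.
        sock.sendto(struct.pack("!Id", seq, time.perf_counter()), (host, port))
        try:
            data, _ = sock.recvfrom(64)
            _, sent = struct.unpack("!Id", data[:12])
            rtts_us.append((time.perf_counter() - sent) * 1e6)
        except socket.timeout:
            pass  # a missing echo counts as loss
        time.sleep(interval_s)
    lost = count - len(rtts_us)
    if rtts_us:
        jitter = statistics.pstdev(rtts_us) if len(rtts_us) > 1 else 0.0
        print(f"avg rtt {statistics.mean(rtts_us):.0f} us, "
              f"jitter {jitter:.0f} us, loss {lost}/{count}")
    else:
        print(f"no echoes received ({count} probes lost)")
```

The same caveat raised in the BQM discussion applies: a once-per-interval probe samples the path rather than the application's own packets, so it can miss microbursts entirely.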
IP SLA recently added support for IP multicast test streams, which can measure market data latency. A typical network topology is shown in Figure 11 with the IP SLA shadow routers, sources, and responders.

Figure 11: IP SLA Deployment

Computing Services

Computing services cover a wide range of technologies with the goal of eliminating memory and CPU bottlenecks created by the processing of network packets. Trading applications consume high volumes of market data, and the servers have to dedicate resources to processing network traffic instead of application processing. The main sources of overhead are:

- Transport processing - At high speeds, network packet processing can consume a significant amount of server CPU cycles and memory. An established rule of thumb states that 1 Gbps of network bandwidth requires 1 GHz of processor capacity (source: Intel white paper on I/O acceleration, inteltechnologyioacceleration306517.pdf).
- Intermediate buffer copying - In a conventional network stack implementation, data needs to be copied by the CPU between network buffers and application buffers. This overhead is worsened by the fact that memory speeds have not kept up with increases in CPU speeds. For example, processors like the Intel Xeon are approaching 4 GHz, while RAM chips hover around 400 MHz (for DDR 3200 memory) (source: Intel, inteltechnologyioacceleration306517.pdf).
- Context switching - Every time an individual packet needs to be processed, the CPU performs a context switch from application context to network traffic context. This overhead could be reduced if the switch occurred only when the whole application buffer is complete.

Figure 12: Sources of Overhead in Data Center Servers

The main technologies that remove these bottlenecks are:

- TCP Offload Engine (TOE) - Offloads transport processor cycles to the NIC; moves TCP/IP protocol stack buffer copies from system memory to NIC memory.
- Remote Direct Memory Access (RDMA) - Enables a network adapter to transfer data directly from application to application without involving the operating system; eliminates intermediate and application buffer copies (memory bandwidth consumption).
- Kernel bypass - Direct user-level access to hardware; dramatically reduces application context switches.

Figure 13: RDMA and Kernel Bypass

InfiniBand is a point-to-point (switched fabric) bidirectional serial communication link which implements RDMA, among other features. Cisco offers an InfiniBand switch, the Server Fabric Switch (SFS) (ciscoapplicationpdfenusguestnetsolns500c643cdccont0900aecd804c35cb.pdf).

Figure 14: Typical SFS Deployment

Trading applications benefit from the reduction in latency and latency variability, as proved by a test performed with the Cisco SFS and Wombat Feed Handlers by Stac Research.

Application Virtualization Service

De-coupling applications from the underlying OS and server hardware enables them to run as network services. One application can be run in parallel on multiple servers, or multiple applications can be run on the same server, as the best resource allocation dictates. This decoupling enables better load balancing and disaster recovery for business continuance strategies. The process of re-allocating computing resources to an application is dynamic.
Using an application virtualization system like Data Synapse's GridServer, applications can migrate, using pre-configured policies, to under-utilized servers in a supply-matches-demand process (networkworldsupp2005ndc1022105virtual.htmlpage2). There are many business advantages for financial firms who adopt application virtualization:

- Faster time to market for new products and services
- Faster integration of firms following merger and acquisition activity
- Increased application availability
- Better workload distribution, which creates more "head room" for processing spikes in trading volume
- Operational efficiency and control
- Reduction in IT complexity

Currently, application virtualization is not used in the trading front-office. One use-case is risk modeling, like Monte Carlo simulations. As the technology evolves, it is conceivable that some of the trading platforms will adopt it.

Data Virtualization Service

To effectively share resources across distributed enterprise applications, firms must be able to leverage data across multiple sources in real-time while ensuring data integrity. With solutions from data virtualization software vendors such as Gemstone or Tangosol (now Oracle), financial firms can access heterogeneous sources of data as a single system image that enables connectivity between business processes and unrestrained application access to distributed caching. The net result is that all users have instant access to these data resources across a distributed network (gridtoday030210101061.html). This is called a data grid and is the first step in the process of creating what Gartner calls Extreme Transaction Processing (XTP) (gartnerDisplayDocumentrefgsearchampid500947). Technologies such as data and application virtualization enable financial firms to perform real-time complex analytics, event-driven applications, and dynamic resource allocation.

One example of data virtualization in action is a global order book application. An order book is the repository of active orders that is published by the exchange or other market makers. A global order book aggregates orders from around the world from markets that operate independently. The biggest challenge for the application is scalability over WAN connectivity because it has to maintain state. Today's data grids are localized in data centers connected by metro area networks (MAN). This is mainly because the applications themselves have limits - they have been developed without the WAN in mind.

Figure 15: GemStone GemFire Distributed Caching

Before data virtualization, applications used database clustering for failover and scalability. This solution is limited by the performance of the underlying database. Failover is slower because the data is committed to disk. With data grids, the data which is part of the active state is cached in memory, which drastically reduces the failover time. Scaling the data grid means just adding more distributed resources, providing a more deterministic performance compared to a database cluster.

Multicast Service

Market data delivery is a perfect example of an application that needs to deliver the same data stream to hundreds and potentially thousands of end users. Market data services have been implemented with TCP or UDP broadcast as the network layer, but those implementations have limited scalability. Using TCP requires a separate socket and sliding window on the server for each recipient. UDP broadcast requires a separate copy of the stream for each destination subnet. Both of these methods exhaust the resources of the servers and the network. The server side must transmit and service each of the streams individually, which requires larger and larger server farms. On the network side, the required bandwidth for the application increases in a linear fashion. For example, to send a 1 Mbps stream to 1000 recipients using TCP requires 1 Gbps of bandwidth. IP multicast is the only way to scale market data delivery. To deliver a 1 Mbps stream to 1000 recipients, IP multicast would require 1 Mbps. The stream can be delivered by as few as two servers - one primary and one backup for redundancy.
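Joining a multicast group is cheap on the receiver side, which is what makes the one-copy-per-stream economics above work. A minimal subscriber sketch follows; the group address (from the administratively scoped 239/8 range) and port are placeholders:

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5000  # placeholder group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining triggers an IGMP report, which lets the network build forwarding
# state toward this host; the sender still transmits only one stream.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    print(f"{len(data)} bytes from {addr[0]}")
```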
There are two main phases of market data delivery to the end user. In the first phase, the data stream must be brought from the exchange into the brokerage's network. Typically the feeds are terminated in a data center on the customer premise. The feeds are then processed by a feed handler, which may normalize the data stream into a common format and then republish into the application messaging servers in the data center. The second phase involves injecting the data stream into the application messaging bus which feeds the core infrastructure of the trading applications. The large brokerage houses have thousands of applications that use the market data streams for various purposes, such as live trades, long term trending, arbitrage, etc. Many of these applications listen to the feeds and then republish their own analytical and derivative information. For example, a brokerage may compare the prices of CSCO to the option prices of CSCO on another exchange and then publish ratings which a different application may monitor to determine how much they are out of synchronization.

Figure 16: Market Data Distribution Players

The delivery of these data streams is typically over a reliable multicast transport protocol, traditionally Tibco Rendezvous. Tibco RV operates in a publish and subscribe environment. Each financial instrument is given a subject name, such as CSCO.last. Each application server can request the individual instruments of interest by their subject name and receive just that subset of the information. This is called subject-based forwarding or filtering. Subject-based filtering is patented by Tibco.
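Subject-based filtering of the kind just described can be pictured as pattern matching on dotted subject names. The sketch below uses Python's fnmatch wildcards as a stand-in; Tibco's actual subject grammar differs, and the subscription table is invented:

```python
from fnmatch import fnmatch

# Hypothetical consumers and the subject patterns they asked for.
SUBSCRIPTIONS = {
    "pricing-engine": ["NQDS.CSCO.*", "NQDS.INTL.*"],
    "trend-monitor":  ["NQDS.*.last"],
}

def route(subject: str, message: dict) -> None:
    """Deliver a published message only to consumers whose patterns match."""
    for consumer, patterns in SUBSCRIPTIONS.items():
        if any(fnmatch(subject, p) for p in patterns):
            print(f"{consumer} <- {subject}: {message}")

route("NQDS.CSCO.last", {"px": 48.02})  # both consumers match
route("NQDS.MSFT.last", {"px": 28.15})  # only trend-monitor matches
```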
A distinction should be made between the first and second phases of market data delivery. The delivery of market data from the exchange to the brokerage is mostly a one-to-many application. The only exception to the unidirectional nature of market data may be retransmission requests, which are usually sent using unicast. The trading applications, however, are definitely many-to-many applications and may interact with the exchanges to place orders.

Figure 17: Market Data Architecture

Design Issues

Number of Groups/Channels to Use

Many application developers consider using thousands of multicast groups to give them the ability to divide up products or instruments into small buckets. Normally these applications send many small messages as part of their information bus. Usually several messages are sent in each packet and received by many users. Sending fewer messages in each packet increases the overhead necessary for each message. In the extreme case, sending only one message in each packet quickly reaches the point of diminishing returns - there is more overhead sent than actual data. Application developers must find a reasonable compromise between the number of groups and breaking up their products into logical buckets. Consider, for example, the Nasdaq Quotation Dissemination Service (NQDS), where the instruments are broken up alphabetically. This approach allows for straightforward network and application management, but does not necessarily allow for optimized bandwidth utilization for most users. A user of NQDS that is interested in technology stocks, and would like to subscribe to just CSCO and INTL, would have to pull down all the data for the first two groups of NQDS. Understanding the way users pull down the data, and then organizing it into appropriate logical groups, optimizes the bandwidth for each user.

In many market data applications, optimizing the data organization would be of limited value. Typically customers bring all data into a few machines and filter the instruments there. Using more groups is just more overhead for the stack and does not help the customers conserve bandwidth. Another approach might be to keep the groups down to a minimum level and use UDP port numbers to further differentiate if necessary. The other extreme would be to use just one multicast group for the entire application and then have the end user filter the data. In some situations this may be sufficient.

Intermittent Sources

A common issue with market data applications is servers that send data to a multicast group and then go silent for more than 3.5 minutes. These intermittent sources may cause thrashing of state on the network and can introduce packet loss during the window of time when soft state and then hardware shortcuts are being created.

PIM-Bidir or PIM-SSM

The first and best solution for intermittent sources is to use PIM-Bidir for many-to-many applications and PIM-SSM for one-to-many applications. Both of these optimizations of the PIM protocol do not have any data-driven events in creating forwarding state. That means that as long as the receivers are subscribed to the streams, the network has the forwarding state created in the hardware switching path. Intermittent sources are not an issue with PIM-Bidir and PIM-SSM.

Null Packets

In PIM-SM environments, a common method to make sure forwarding state is created is to send a burst of null packets to the multicast group before the actual data stream. The application must efficiently ignore these null data packets to ensure it does not affect performance. The sources must only send the burst of packets if they have been silent for more than 3 minutes. A good practice is to send the burst if the source is silent for more than a minute. Many financials send out an initial burst of traffic in the morning, and then all well-behaved sources do not have problems.

Periodic Keepalives or Heartbeats

An alternative approach for PIM-SM environments is for sources to send periodic heartbeat messages to the multicast groups. This is a similar approach to the null packets, but the packets can be sent on a regular timer so that the forwarding state never expires.

(S,G) Expiry Timer

Finally, Cisco has made a modification to the operation of the (S,G) expiry timer in IOS. There is now a CLI knob to allow the state for a (S,G) to stay alive for hours without any traffic being sent. The (S,G) expiry timer is configurable. This approach should be considered a workaround until PIM-Bidir or PIM-SSM is deployed or the application is fixed.

RTCP Feedback

A common issue with real time voice and video applications that use RTP is the use of RTCP feedback traffic. Unnecessary use of the feedback option can create excessive multicast state in the network. If the RTCP traffic is not required by the application it should be avoided.
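The null-packet and heartbeat workarounds described above both reduce to a sender on a timer. A minimal sketch of such a heartbeat publisher, reusing the placeholder group and port from the receiver sketch earlier:

```python
import socket
import threading

GROUP, PORT = "239.1.1.1", 5000  # placeholders, as in the receiver sketch

def start_heartbeat(interval_s: float = 60.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)

    def beat() -> None:
        # Any payload keeps the multicast forwarding state alive; receivers
        # must recognise and discard it cheaply, as noted above.
        sock.sendto(b"HB", (GROUP, PORT))
        timer = threading.Timer(interval_s, beat)
        timer.daemon = True
        timer.start()

    beat()
```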
Fast Producers and Slow Consumers

Today many servers providing market data are attached at Gigabit speeds, while the receivers are attached at different speeds, usually 100 Mbps. This creates the potential for receivers to drop packets and request re-transmissions, which creates more traffic that the slowest consumers cannot handle, continuing the vicious circle. The solution needs to be some type of access control in the application that limits the amount of data one host can request. QoS and other network functions can mitigate the problem, but ultimately the subscriptions need to be managed in the application.

Tibco Heartbeats

TibcoRV has had the ability to use IP multicast for the heartbeat between the TICs for many years. However, some brokerage houses are still using very old versions of TibcoRV that use UDP broadcast support for the resiliency. This limitation is often cited as a reason to maintain a Layer 2 infrastructure between TICs located in different data centers. These older versions of TibcoRV should be phased out in favor of the IP multicast supported versions.

Multicast Forwarding Options

PIM Sparse Mode

The standard IP multicast forwarding protocol used today for market data delivery is PIM Sparse Mode. It is supported on all Cisco routers and switches and is well understood. PIM-SM can be used in all the network components from the exchange, FSP, and brokerage. There are, however, some long-standing issues and unnecessary complexity associated with a PIM-SM deployment that could be avoided by using PIM-Bidir and PIM-SSM. These are covered in the next sections. The main components of the PIM-SM implementation are PIM Sparse Mode v2 and Shared Tree (spt-threshold infinity), a design option in the brokerage or in the exchange.

Best Programming Language for Algorithmic Trading Systems

One of the most frequent questions I receive in the QS mailbag is, "What is the best programming language for algorithmic trading?" The short answer is that there is no "best" language. Strategy parameters, performance, modularity, development, resiliency and cost must all be considered. This article will outline the necessary components of an algorithmic trading system architecture and how decisions regarding implementation affect the choice of language.

Firstly, the major components of an algorithmic trading system will be considered, such as the research tools, portfolio optimiser, risk manager and execution engine. Subsequently, different trading strategies will be examined and how they affect the design of the system. In particular the frequency of trading and the likely trading volume will both be discussed. Once the trading strategy has been selected, it is necessary to architect the entire system. This includes choice of hardware, the operating system(s) and system resiliency against rare, potentially catastrophic events. While the architecture is being considered, due regard must be paid to performance - both to the research tools as well as the live execution environment.

What Is The Trading System Trying To Do?

Before deciding on the best language with which to write an automated trading system it is necessary to define the requirements.
Is the system going to be purely execution based? Will the system require a risk management or portfolio construction module? Will the system require a high-performance backtester? For most strategies the trading system can be partitioned into two categories: research and signal generation.

Research is concerned with evaluation of strategy performance over historical data. The process of evaluating a trading strategy over prior market data is known as backtesting. The data size and algorithmic complexity will have a big impact on the computational intensity of the backtester. CPU speed and concurrency are often the limiting factors in optimising research execution speed.

Signal generation is concerned with generating a set of trading signals from an algorithm and sending such orders to the market, usually via a brokerage. For certain strategies a high level of performance is required. I/O issues such as network bandwidth and latency are often the limiting factor in optimising execution systems. Thus the choice of languages for each component of your entire system may be quite different.

Type, Frequency and Volume of Strategy

The type of algorithmic strategy employed will have a substantial impact on the design of the system. It will be necessary to consider the markets being traded, the connectivity to external data vendors, the frequency and volume of the strategy, the trade-off between ease of development and performance optimisation, as well as any custom hardware, including co-located custom servers, GPUs or FPGAs that might be necessary. The technology choices for a low-frequency US equities strategy will be vastly different from those of a high-frequency statistical arbitrage strategy trading on the futures market. Prior to the choice of language, many data vendors must be evaluated that pertain to the strategy at hand. It will be necessary to consider connectivity to the vendor, structure of any APIs, timeliness of the data, storage requirements and resiliency in the face of a vendor going offline. It is also wise to possess rapid access to multiple vendors. Various instruments all have their own storage quirks, examples of which include multiple ticker symbols for equities and expiration dates for futures (not to mention any specific OTC data). This needs to be factored in to the platform design.

Frequency of strategy is likely to be one of the biggest drivers of how the technology stack will be defined. Strategies employing data more frequently than minutely or secondly bars require significant consideration with regards to performance. A strategy exceeding secondly bars (i.e. tick data) leads to a performance-driven design as the primary requirement. For high frequency strategies a substantial amount of market data will need to be stored and evaluated. Software such as HDF5 or kdb are commonly used for these roles. In order to process the extensive volumes of data needed for HFT applications, an extensively optimised backtester and execution system must be used. C/C++ (possibly with some assembler) is likely to be the strongest language candidate. Ultra-high frequency strategies will almost certainly require custom hardware such as FPGAs, exchange co-location and kernel/network interface tuning.

Research Systems

Research systems typically involve a mixture of interactive development and automated scripting. The former often takes place within an IDE such as Visual Studio, MatLab or R Studio. The latter involves extensive numerical calculations over numerous parameters and data points.
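That "numerical calculations over numerous parameters" workload looks roughly like the following in practice: a vectorised backtest swept over a grid of lookback parameters with NumPy/pandas. The random-walk price series and the long/flat moving-average rule are illustrative stand-ins for real data and a real strategy.

```python
import numpy as np
import pandas as pd

def sharpe_of_ma_rule(close: pd.Series, window: int) -> float:
    """Annualised Sharpe of a long/flat moving-average rule with next-bar fills."""
    position = (close > close.rolling(window).mean()).astype(float)
    returns = close.pct_change().fillna(0.0)
    strategy = position.shift(1).fillna(0.0) * returns  # shift avoids look-ahead
    return float(np.sqrt(252) * strategy.mean() / strategy.std())

rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2500))))

# The parameter sweep is the embarrassingly parallel, CPU-bound part.
print({w: round(sharpe_of_ma_rule(close, w), 2) for w in (10, 20, 50, 100, 200)})
```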
Research Systems

Research systems typically involve a mixture of interactive development and automated scripting. The former often takes place within an IDE such as Visual Studio, MatLab or R Studio. The latter involves extensive numerical calculations over numerous parameters and data points. This leads to a language choice providing a straightforward environment to test code, but one that also provides sufficient performance to evaluate strategies over multiple parameter dimensions.

Typical IDEs in this space include Microsoft Visual C++, which contains extensive debugging utilities, code completion capabilities (via IntelliSense) and straightforward overviews of the entire project stack (via the database ORM, LINQ); MatLab, which is designed for extensive numerical linear algebra and vectorised operations, but in an interactive console manner; R Studio, which wraps the R statistical language console in a fully-fledged IDE; the Eclipse IDE for Linux Java and C++; and semi-proprietary IDEs such as Enthought Canopy for Python, which include data analysis libraries such as NumPy, SciPy, scikit-learn and pandas in a single interactive (console) environment.

For numerical backtesting, all of the above languages are suitable, although it is not necessary to utilise a GUI/IDE as the code will be executed in the background. The prime consideration at this stage is that of execution speed. A compiled language (such as C++) is often useful if the backtesting parameter dimensions are large; remember that it is necessary to be wary of such systems if that is the case! Interpreted languages such as Python often make use of high-performance libraries such as NumPy/pandas for the backtesting step, in order to maintain a reasonable degree of competitiveness with compiled equivalents. Ultimately the language chosen for the backtesting will be determined by specific algorithmic needs as well as the range of libraries available in the language (more on that below). However, the language used for the backtester and research environments can be completely independent of those used in the portfolio construction, risk management and execution components, as will be seen.

Portfolio Construction and Risk Management

The portfolio construction and risk management components are often overlooked by retail algorithmic traders. This is almost always a mistake. These tools provide the mechanism by which capital will be preserved. They not only attempt to reduce the number of risky bets, but also minimise churn of the trades themselves, reducing transaction costs. Sophisticated versions of these components can have a significant effect on the quality and consistency of profitability. It is straightforward to create a stable of strategies, as the portfolio construction mechanism and risk manager can easily be modified to handle multiple systems. Thus they should be considered essential components at the outset of the design of an algorithmic trading system.

The job of the portfolio construction system is to take a set of desired trades and produce the set of actual trades that minimise churn, maintain exposures to various factors (such as sectors, asset classes, volatility etc) and optimise the allocation of capital to various strategies in a portfolio. Portfolio construction often reduces to a linear algebra problem (such as a matrix factorisation) and hence performance is highly dependent upon the effectiveness of the numerical linear algebra implementation available. Common libraries include uBLAS, LAPACK and NAG for C++. MatLab also possesses extensively optimised matrix operations. Python utilises NumPy/SciPy for such computations. A frequently rebalanced portfolio will require a compiled (and well optimised) matrix library to carry this step out, so as not to bottleneck the trading system.
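As a toy illustration of that linear-algebra reduction, the following NumPy sketch computes unconstrained minimum-variance weights from an assumed covariance matrix. A production system would add constraints and a proper optimiser; the numbers are invented:

    import numpy as np

    # Assumed annualised covariance matrix for three assets (illustrative).
    sigma = np.array([
        [0.10, 0.02, 0.04],
        [0.02, 0.08, 0.01],
        [0.04, 0.01, 0.12],
    ])

    # Unconstrained minimum-variance weights: w is proportional to
    # inverse(sigma) times a vector of ones, normalised to sum to one.
    ones = np.ones(sigma.shape[0])
    raw = np.linalg.solve(sigma, ones)  # avoids forming an explicit inverse
    weights = raw / raw.sum()

    print(weights)  # fraction of capital per asset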
Risk management is another extremely important part of an algorithmic trading system. Risk can come in many forms: increased volatility (although this may be seen as desirable for certain strategies), increased correlations between asset classes, counterparty default, server outages, black swan events and undetected bugs in the trading code, to name a few. Risk management components try to anticipate the effects of excessive volatility and correlation between asset classes and their subsequent effect(s) on trading capital. Often this reduces to a set of statistical computations such as Monte Carlo stress tests. This is very similar to the computational needs of a derivatives pricing engine and as such will be CPU-bound. These simulations are highly parallelisable (see below) and, to a certain degree, it is possible to throw hardware at the problem.

Execution Systems

The job of the execution system is to receive filtered trading signals from the portfolio construction and risk management components and send them on to a brokerage or other means of market access. For the majority of retail algorithmic trading strategies this involves an API or FIX connection to a brokerage such as Interactive Brokers. The primary considerations when deciding upon a language include the quality of the API, language-wrapper availability for an API, execution frequency and the anticipated slippage.

The quality of the API refers to how well documented it is, what sort of performance it provides, whether it needs standalone software to be accessed or whether a gateway can be established in a headless fashion (i.e. no GUI). In the case of Interactive Brokers, the Trader WorkStation tool needs to be running in a GUI environment in order to access their API. I once had to install a Desktop Ubuntu edition onto an Amazon cloud server to access Interactive Brokers remotely, purely for this reason!

Most APIs will provide a C++ and/or Java interface. It is usually up to the community to develop language-specific wrappers for C#, Python, R, Excel and MatLab. Note that with every additional plugin utilised (especially API wrappers) there is scope for bugs to creep into the system. Always test plugins of this sort and ensure they are actively maintained. A worthwhile gauge is to see how many new updates to a codebase have been made in recent months.

Execution frequency is of the utmost importance in the execution algorithm. Note that hundreds of orders may be sent every minute and as such performance is critical. Slippage will be incurred through a badly-performing execution system and this will have a dramatic impact on profitability. Statically-typed languages (see below) such as C++ or Java are generally optimal for execution, but there is a trade-off in development time, testing and ease of maintenance. Dynamically-typed languages, such as Python and Perl, are now generally fast enough. Always make sure the components are designed in a modular fashion (see below) so that they can be swapped out as the system scales.
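That modularity point can be made concrete with a small sketch. The interface and class names below are invented for illustration; the idea is that strategy code depends only on an abstract execution interface, so a simulated handler can later be swapped for a live brokerage or FIX handler without touching the signal logic:

    from abc import ABC, abstractmethod

    class ExecutionHandler(ABC):
        """Hypothetical interface between signal generation and market access."""

        @abstractmethod
        def execute_order(self, symbol: str, quantity: int, side: str) -> None:
            ...

    class SimulatedExecution(ExecutionHandler):
        """Stand-in used for backtesting; a live brokerage or FIX handler
        would implement the same interface."""

        def execute_order(self, symbol, quantity, side):
            print(f"SIMULATED {side} {quantity} {symbol}")

    def run_strategy(execution: ExecutionHandler):
        # Strategy code sees only the interface, never the concrete broker.
        execution.execute_order("SPY", 100, "BUY")

    run_strategy(SimulatedExecution())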
Architectural Planning and Development Process

The components of a trading system, its frequency and volume requirements have been discussed above, but system infrastructure has yet to be covered. Those acting as a retail trader or working in a small fund will likely be wearing many hats. It will be necessary to cover the alpha model, risk management and execution parameters, and also the final implementation of the system. Before delving into specific languages the design of an optimal system architecture will be discussed.

Separation of Concerns

One of the most important decisions that must be made at the outset is how to separate the concerns of a trading system. In software development, this essentially means how to break up the different aspects of the trading system into separate modular components. By exposing interfaces at each of the components it is easy to swap out parts of the system for other versions that aid performance, reliability or maintenance, without modifying any external dependency code. This is the best practice for such systems. For strategies at lower frequencies such practices are advised. For ultra-high frequency trading the rulebook might have to be disregarded in favour of tweaking the system for even more performance. A more tightly coupled system may be desirable.

Creating a component map of an algorithmic trading system is worth an article in itself. However, an optimal approach is to make sure there are separate components for the historical and real-time market data inputs, data storage, data access API, backtester, strategy parameters, portfolio construction, risk management and automated execution systems. For instance, if the data store being used is currently underperforming, even at significant levels of optimisation, it can be swapped out with minimal rewrites to the data ingestion or data access API. As far as the backtester and subsequent components are concerned, there is no difference.

Another benefit of separated components is that it allows a variety of programming languages to be used in the overall system. There is no need to be restricted to a single language if the communication method of the components is language independent. This will be the case if they are communicating via TCP/IP, ZeroMQ or some other language-independent protocol (a minimal sketch appears at the end of this section). As a concrete example, consider the case of a backtesting system being written in C++ for number-crunching performance, while the portfolio manager and execution systems are written in Python using SciPy and IBPy.

Performance Considerations

Performance is a significant consideration for most trading strategies. For higher frequency strategies it is the most important factor. Performance covers a wide range of issues, such as algorithmic execution speed, network latency, bandwidth, data I/O, concurrency/parallelism and scaling. Each of these areas is individually covered by large textbooks, so this article will only scratch the surface of each topic. Architecture and language choice will now be discussed in terms of their effects on performance.

The prevailing wisdom, as stated by Donald Knuth, one of the fathers of computer science, is that "premature optimisation is the root of all evil". This is almost always the case - except when building a high frequency trading algorithm! For those who are interested in lower frequency strategies, a common approach is to build a system in the simplest way possible and only optimise as bottlenecks begin to appear. Profiling tools are used to determine where bottlenecks arise. Profiles can be made for all of the factors listed above, either in an MS Windows or Linux environment. There are many operating system and language tools available to do so, as well as third-party utilities.
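Here is the promised sketch of language-independent component messaging, assuming the pyzmq bindings; the port and JSON message format are arbitrary choices. A C++ backtester or execution engine could bind or connect to the same socket with identical framing:

    import time
    import zmq

    context = zmq.Context()

    # Signal-generating component: publishes trade signals over TCP.
    publisher = context.socket(zmq.PUB)
    publisher.bind("tcp://127.0.0.1:5555")

    # Execution component (could be another process, in another language).
    subscriber = context.socket(zmq.SUB)
    subscriber.connect("tcp://127.0.0.1:5555")
    subscriber.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to everything

    time.sleep(0.2)  # give the slow-joining subscription time to propagate

    publisher.send_json({"symbol": "EURUSD", "action": "BUY", "qty": 100000})
    print(subscriber.recv_json())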
Language choice will now be discussed in the context of performance. C++, Java, Python, R and MatLab all contain high-performance libraries (either as part of their standard or externally) for basic data structure and algorithmic work. C++ ships with the Standard Template Library, while Python contains NumPy/SciPy. Common mathematical tasks are to be found in these libraries and it is rarely beneficial to write a new implementation. One exception is if a highly customised hardware architecture is required and an algorithm is making extensive use of proprietary extensions (such as custom caches). However, often reinvention of the wheel wastes time that could be better spent developing and optimising other parts of the trading infrastructure. Development time is extremely precious, especially in the context of sole developers.

Latency is often an issue of the execution system, as the research tools are usually situated on the same machine. For the former, latency can occur at multiple points along the execution path. Databases must be consulted (disk/network latency), signals must be generated (operating system, kernel messaging latency), trade signals sent (NIC latency) and orders processed (exchange systems' internal latency). For higher frequency operations it is necessary to become intimately familiar with kernel optimisation as well as optimisation of network transmission. This is a deep area and is significantly beyond the scope of the article, but if a UHFT algorithm is desired then be aware of the depth of knowledge required!

Caching is very useful in the toolkit of a quantitative trading developer. Caching refers to the concept of storing frequently accessed data in a manner which allows higher-performance access, at the expense of potential staleness of the data. A common use case occurs in web development when taking data from a disk-backed relational database and putting it into memory. Any subsequent requests for the data do not have to hit the database and so performance gains can be significant. For trading situations caching can be extremely beneficial. For instance, the current state of a strategy portfolio can be stored in a cache until it is rebalanced, such that the list doesn't need to be regenerated upon each loop of the trading algorithm. Such regeneration is likely to be a high CPU or disk I/O operation. However, caching is not without its own issues. Regeneration of cache data all at once, due to the volatile nature of cache storage, can place significant demand on infrastructure. Another issue is dog-piling, where multiple regenerations of a new cache copy are carried out under extremely high load, which leads to cascade failure.
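A toy sketch of the portfolio-state caching idea just described (the class and data are invented for illustration): the expensive regeneration runs only on a cache miss, and a rebalance explicitly invalidates the cache:

    class PortfolioCache:
        """Hypothetical cache of current portfolio state."""

        def __init__(self):
            self._positions = None  # None marks the cache as stale

        def _regenerate(self):
            # Stand-in for an expensive CPU- or disk I/O-bound rebuild.
            print("Regenerating portfolio state...")
            return {"EURUSD": 100000, "SPY": 250}

        def positions(self):
            if self._positions is None:  # cache miss: rebuild once
                self._positions = self._regenerate()
            return self._positions       # cache hit: no recomputation

        def invalidate(self):
            """Call after a rebalance so the next read rebuilds the state."""
            self._positions = None

    cache = PortfolioCache()
    cache.positions()   # triggers regeneration
    cache.positions()   # served from the cache
    cache.invalidate()  # e.g. after a rebalance
    cache.positions()   # regenerated exactly once more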
Dynamic memory allocation is an expensive operation in software execution. Thus it is imperative for higher-performance trading applications to be well aware of how memory is being allocated and deallocated during program flow. Newer languages such as Java, C# and Python all perform automatic garbage collection, which refers to the deallocation of dynamically allocated memory when objects go out of scope. Garbage collection is extremely useful during development as it reduces errors and aids readability. However, it is often sub-optimal for certain high frequency trading strategies. Custom garbage collection is often desired for these cases. In Java, for instance, by tuning the garbage collector and heap configuration, it is possible to obtain high performance for HFT strategies.

C++ doesn't provide a native garbage collector and so it is necessary to handle all memory allocation/deallocation as part of an object's implementation. While potentially error-prone (possibly leading to dangling pointers), it is extremely useful to have fine-grained control of how objects appear on the heap for certain applications. When choosing a language make sure to study how the garbage collector works and whether it can be modified to optimise for a particular use case.

Many operations in algorithmic trading systems are amenable to parallelisation. This refers to the concept of carrying out multiple programmatic operations at the same time, i.e. in parallel. So-called embarrassingly parallel algorithms include steps that can be computed fully independently of other steps. Certain statistical operations, such as Monte Carlo simulations, are a good example of embarrassingly parallel algorithms, as each random draw and subsequent path operation can be computed without knowledge of other paths (a short sketch follows at the end of this section). Other algorithms are only partially parallelisable. Fluid dynamics simulations are such an example, where the domain of computation can be subdivided, but ultimately these domains must communicate with each other and thus the operations are partially sequential. Parallelisable algorithms are subject to Amdahl's Law, which provides a theoretical upper limit to the performance increase of a parallelised algorithm when subject to N separate processes (e.g. on a CPU core or thread).

Parallelisation has become increasingly important as a means of optimisation since processor clock-speeds have stagnated, as newer processors contain many cores with which to perform parallel calculations. The rise of consumer graphics hardware (predominantly for video games) has led to the development of Graphical Processing Units (GPUs), which contain hundreds of cores for highly concurrent operations. Such GPUs are now very affordable. High-level frameworks, such as Nvidia's CUDA, have led to widespread adoption in academia and finance. Such GPU hardware is generally only suitable for the research aspect of quantitative finance, whereas other more specialised hardware (including Field-Programmable Gate Arrays - FPGAs) is used for (U)HFT. Nowadays most modern languages support a degree of concurrency/multithreading. Thus it is straightforward to optimise a backtester, since all calculations are generally independent of the others.

Scaling in software engineering and operations refers to the ability of the system to handle consistently increasing loads in the form of greater requests, higher processor usage and more memory allocation. In algorithmic trading a strategy is able to scale if it can accept larger quantities of capital and still produce consistent returns. The trading technology stack scales if it can endure larger trade volumes and increased latency, without bottlenecking. While systems must be designed to scale, it is often hard to predict beforehand where a bottleneck will occur. Rigorous logging, testing, profiling and monitoring will aid greatly in allowing a system to scale. Languages themselves are often described as unscalable. This is usually the result of misinformation, rather than hard fact. It is the total technology stack that should be assessed for scalability, not the language. Clearly certain languages have greater performance than others in particular use cases, but one language is never better than another in every sense.
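Here is the parallelisation sketch promised above: independent Monte Carlo equity paths distributed across processes with Python's standard library. The drift and volatility figures are arbitrary illustrative assumptions:

    import random
    from multiprocessing import Pool

    def simulate_path(seed):
        """One independent Monte Carlo path: 250 days of random daily returns."""
        rng = random.Random(seed)
        equity = 1.0
        for _ in range(250):
            equity *= 1.0 + rng.gauss(0.0005, 0.01)  # assumed drift/volatility
        return equity

    if __name__ == "__main__":
        with Pool() as pool:  # one worker process per CPU core by default
            finals = pool.map(simulate_path, range(10000))
        finals.sort()
        print("5% worst-case terminal equity:", finals[len(finals) // 20])

Because no path depends on any other, the work splits cleanly across cores, which is exactly what makes such stress tests a candidate for throwing hardware at the problem.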
One means of managing scale is to separate concerns, as stated above. In order to further introduce the ability to handle spikes in the system (i.e. sudden volatility which triggers a raft of trades), it is useful to create a message queuing architecture. This simply means placing a message queue system between components so that orders are stacked up if a certain component is unable to process many requests. Rather than requests being lost, they are simply kept in a queue until the message is handled. This is particularly useful for sending trades to an execution engine. If the engine is suffering under heavy latency then it will back up trades. A queue between the trade signal generator and the execution API will alleviate this issue at the expense of potential trade slippage. A well-respected open source message queue broker is RabbitMQ (a minimal sketch appears at the end of this section).

Hardware and Operating Systems

The hardware running your strategy can have a significant impact on the profitability of your algorithm. This is not an issue restricted to high frequency traders either. A poor choice in hardware and operating system can lead to a machine crash or reboot at the most inopportune moment. Thus it is necessary to consider where your application will reside. The choice is generally between a personal desktop machine, a remote server, a cloud provider or an exchange co-located server.

Desktop machines are simple to install and administer, especially with newer user-friendly operating systems such as Windows 7/8, Mac OS X and Ubuntu. Desktop systems do possess some significant drawbacks, however. The foremost is that the versions of operating systems designed for desktop machines are likely to require reboots/patching (and often at the worst of times). They also use up more computational resources by virtue of requiring a graphical user interface (GUI). Utilising hardware in a home (or local office) environment can lead to internet connectivity and power uptime problems. The main benefit of a desktop system is that significant computational horsepower can be purchased for a fraction of the cost of a remote dedicated server (or cloud-based system) of comparable speed.

A dedicated server or cloud-based machine, while often more expensive than a desktop option, allows for more significant redundancy infrastructure, such as automated data backups, the ability to more straightforwardly ensure uptime and remote monitoring. They are harder to administer since they require the ability to use the remote login capabilities of the operating system. In Windows this is generally via the GUI Remote Desktop Protocol (RDP). In Unix-based systems the command-line Secure Shell (SSH) is used. Unix-based server infrastructure is almost always command-line based, which immediately renders GUI-based programming tools (such as MatLab or Excel) unusable.

A co-located server, as the phrase is used in the capital markets, is simply a dedicated server that resides within an exchange in order to reduce latency of the trading algorithm. This is absolutely necessary for certain high frequency trading strategies, which rely on low latency in order to generate alpha.
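Returning to the message-queue pattern described at the start of this section, the sketch below publishes an order to a durable RabbitMQ queue using the pika client. It assumes a broker running locally with default credentials; the queue name and message format are illustrative:

    import json
    import pika

    # Assumes a RabbitMQ broker running locally with default credentials.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # A durable queue: buffered orders survive a broker restart.
    channel.queue_declare(queue="orders", durable=True)

    order = {"symbol": "EURUSD", "side": "SELL", "qty": 50000}
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps(order),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
    connection.close()

A separate consumer process attached to the same queue would drain orders at whatever rate the execution engine can sustain, which is precisely the buffering behaviour described above.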
The final aspect of hardware choice, and of the choice of programming language, is platform independence. Is there a need for the code to run across multiple different operating systems? Is the code designed to be run on a particular type of processor architecture, such as Intel x86/x64, or will it be possible to execute on RISC processors such as those manufactured by ARM? These issues will be highly dependent upon the frequency and type of strategy being implemented.

Resilience and Testing

One of the best ways to lose a lot of money on algorithmic trading is to create a system with no resiliency. This refers to the durability of the system when subject to rare events, such as brokerage bankruptcies, sudden excess volatility, region-wide downtime for a cloud server provider or the accidental deletion of an entire trading database. Years of profits can be eliminated within seconds with a poorly-designed architecture. It is absolutely essential to consider issues such as debugging, testing, logging, backups, high-availability and monitoring as core components of your system.

It is likely that in any reasonably complicated custom quantitative trading application at least 50% of development time will be spent on debugging, testing and maintenance.

Nearly all programming languages either ship with an associated debugger or possess well-respected third-party alternatives. In essence, a debugger allows execution of a program with insertion of arbitrary break points in the code path, which temporarily halt execution in order to investigate the state of the system. The main benefit of debugging is that it is possible to investigate the behaviour of code prior to a known crash point. Debugging is an essential component in the toolbox for analysing programming errors. However, debuggers are more widely used in compiled languages such as C++ or Java, as interpreted languages such as Python are often easier to debug due to fewer LOC and less verbose statements. Despite this tendency, Python does ship with pdb, which is a sophisticated debugging tool. The Microsoft Visual C++ IDE possesses extensive GUI debugging utilities, while for the command-line Linux C++ programmer, the gdb debugger exists.

Testing in software development refers to the process of applying known parameters and results to specific functions, methods and objects within a codebase, in order to simulate behaviour and evaluate multiple code-paths, helping to ensure that a system behaves as it should. A more recent paradigm is known as Test Driven Development (TDD), where test code is developed against a specified interface with no implementation. Prior to the completion of the actual codebase all tests will fail. As code is written to fill in the blanks, the tests will eventually all pass, at which point development should cease. TDD requires extensive upfront specification design as well as a healthy degree of discipline in order to be carried out successfully. In C++, Boost provides a unit testing framework. In Java, the JUnit library exists to fulfil the same purpose. Python also has the unittest module as part of the standard library. Many other languages possess unit testing frameworks and often there are multiple options.
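As a minimal example of the unittest module in this context, the position-sizing helper below is invented purely to have something to test:

    import unittest

    def position_size(account_equity, risk_fraction, stop_distance):
        """Hypothetical helper: units to trade so that hitting the stop
        loses at most risk_fraction of equity."""
        if stop_distance <= 0:
            raise ValueError("stop_distance must be positive")
        return (account_equity * risk_fraction) / stop_distance

    class PositionSizeTest(unittest.TestCase):
        def test_basic_sizing(self):
            # Risking 1% of 100,000 with a 0.005 stop implies 200,000 units.
            self.assertAlmostEqual(position_size(100000, 0.01, 0.005), 200000)

        def test_rejects_non_positive_stop(self):
            with self.assertRaises(ValueError):
                position_size(100000, 0.01, 0.0)

    if __name__ == "__main__":
        unittest.main()

Under TDD these tests would be written first, against the bare interface, and the function body filled in until they pass.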
In a production environment, sophisticated logging is absolutely essential. Logging refers to the process of outputting messages, with various degrees of severity, regarding execution behaviour of a system to a flat file or database. Logs are a first line of attack when hunting for unexpected program runtime behaviour. Unfortunately the shortcomings of a logging system tend only to be discovered after the fact! As with backups, discussed below, a logging system should be given due consideration BEFORE a system is designed.

Both Microsoft Windows and Linux come with extensive system logging capability, and programming languages tend to ship with standard logging libraries that cover most use cases (a minimal configuration appears at the end of this section). It is often wise to centralise logging information in order to analyse it at a later date, since it can often lead to ideas about improving performance or error reduction, which will almost certainly have a positive impact on your trading returns.

While logging of a system will provide information about what has transpired in the past, monitoring of an application will provide insight into what is happening right now. All aspects of the system should be considered for monitoring. System-level metrics such as disk usage, available memory, network bandwidth and CPU usage provide basic load information. Trading metrics such as abnormal prices/volume, sudden rapid drawdowns and account exposure for different sectors/markets should also be continuously monitored. Further, a threshold system should be instituted that provides notification when certain metrics are breached, elevating the notification method (email, SMS, automated phone call) depending upon the severity of the metric. System monitoring is often the domain of the system administrator or operations manager. However, as a sole trading developer, these metrics must be established as part of the larger design. Many solutions for monitoring exist: proprietary, hosted and open source, which allow extensive customisation of metrics for a particular use case.

Backups and high availability should be prime concerns of a trading system. Consider the following two questions: 1) If an entire production database of market data and trading history was deleted (without backups), how would the research and execution algorithms be affected? 2) If the trading system suffers an outage for an extended period (with open positions), how would account equity and ongoing profitability be affected? The answers to both of these questions are often sobering!

It is imperative to put in place a system for backing up data and also for testing the restoration of such data. Many individuals do not test a restore strategy. If recovery from a crash has not been tested in a safe environment, what guarantees exist that restoration will be available at the worst possible moment? Similarly, high availability needs to be baked in from the start. Redundant infrastructure (even at additional expense) must always be considered, as the cost of downtime is likely to far outweigh the ongoing maintenance cost of such systems. I won't delve too deeply into this topic as it is a large area, but make sure it is one of the first considerations given to your trading system.
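Here is the minimal logging configuration promised above, using Python's standard logging module; the file name, format and messages are illustrative choices:

    import logging

    # Send INFO and above to a flat file with timestamps and severities.
    logging.basicConfig(
        filename="trading.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    log = logging.getLogger("execution")

    log.info("Order sent: BUY 100000 EURUSD @ market")
    log.warning("Fill slippage exceeded 2 pips")
    try:
        raise ConnectionError("broker API timeout")
    except ConnectionError:
        log.exception("Order submission failed")  # records the traceback too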
Choosing a Language

Considerable detail has now been provided on the various factors that arise when developing a custom high-performance algorithmic trading system. The next stage is to discuss how programming languages are generally categorised.

Type Systems

When choosing a language for a trading stack it is necessary to consider the type system. The languages of interest for algorithmic trading are either statically- or dynamically-typed. A statically-typed language performs checks of the types (e.g. integers, floats, custom classes etc) during the compilation process. Such languages include C++ and Java. A dynamically-typed language performs the majority of its type-checking at runtime. Such languages include Python, Perl and JavaScript.

For a highly numerical system such as an algorithmic trading engine, type-checking at compile time can be extremely beneficial, as it can eliminate many bugs that would otherwise lead to numerical errors. However, type-checking doesn't catch everything, and this is where exception handling comes in, due to the necessity of having to handle unexpected operations. Dynamic languages (i.e. those that are dynamically-typed) can often lead to run-time errors that would otherwise be caught with a compilation-time type-check. For this reason, the concept of TDD (see above) and unit testing arose, which, when carried out correctly, often provides more safety than compile-time checking alone.

Another benefit of statically-typed languages is that the compiler is able to make many optimisations that are otherwise unavailable to the dynamically-typed language, simply because the type (and thus memory requirements) are known at compile-time. In fact, part of the inefficiency of many dynamically-typed languages stems from the fact that certain objects must be type-inspected at run-time, and this carries a performance hit. Libraries for dynamic languages, such as NumPy/SciPy, alleviate this issue by enforcing a type within arrays (a quick illustration follows at the end of this section).

Open Source or Proprietary?

One of the biggest choices available to an algorithmic trading developer is whether to use proprietary (commercial) or open source technologies. There are advantages and disadvantages to both approaches. It is necessary to consider how well a language is supported, the activity of the community surrounding a language, ease of installation and maintenance, quality of the documentation and any licensing/maintenance costs.

The Microsoft .NET stack (including Visual C++ and Visual C#) and MathWorks' MatLab are two of the larger proprietary choices for developing custom algorithmic trading software. Both tools have had significant battle testing in the financial space, with the former making up the predominant software stack for investment banking trading infrastructure and the latter being heavily used for quantitative trading research within investment funds. Microsoft and MathWorks both provide extensive high-quality documentation for their products. Further, the communities surrounding each tool are very large, with active web forums for both. The .NET software allows cohesive integration with multiple languages such as C++, C# and VB, as well as easy linkage to other Microsoft products such as the SQL Server database via LINQ. MatLab also has many plugins/libraries (some free, some commercial) for nearly any quantitative research domain.

There are also drawbacks. With either piece of software the costs are not insignificant for a lone trader (although Microsoft does provide an entry-level version of Visual Studio for free). Microsoft tools play well with each other, but integrate less well with external code. Visual Studio must also be executed on Microsoft Windows, which is arguably far less performant than an equivalent Linux server which is optimally tuned. MatLab also lacks a few key plugins, such as a good wrapper around the Interactive Brokers API, one of the few brokers amenable to high-performance algorithmic trading. The main issue with proprietary products is the lack of availability of the source code. This means that if ultra performance is truly required, both of these tools will be far less attractive.
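A quick illustration of the typed-array point made earlier: a NumPy array commits every element to a single machine type, which is what allows its arithmetic to run in optimised loops, whereas a plain Python list holds arbitrarily-typed objects that must each be inspected at run time:

    import numpy as np

    prices = np.array([1.1641, 1.1642, 1.1639])  # every element is float64
    print(prices.dtype)  # -> float64

    # Arithmetic runs in optimised loops over the typed buffer...
    returns = np.diff(prices) / prices[:-1]
    print(returns)

    # ...whereas a plain list may mix types freely, each element of which
    # must be type-inspected at run time.
    mixed = [1.1641, "1.1642", None]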
Open source tools have been industry grade for some time. Much of the alternative asset space makes extensive use of open-source Linux, MySQL/PostgreSQL, Python, R, C++ and Java in high-performance production roles. However, they are far from restricted to this domain. Python and R, in particular, contain a wealth of extensive numerical libraries for performing nearly any type of data analysis imaginable, often at execution speeds comparable to compiled languages, with certain caveats.

The main benefit of using interpreted languages is the speed of development time. Python and R require far fewer lines of code (LOC) to achieve similar functionality, principally due to the extensive libraries. Further, they often allow interactive console-based development, rapidly shortening the iterative development cycle. Given that time as a developer is extremely valuable, and execution speed often less so (unless in the HFT space), it is worth giving extensive consideration to an open source technology stack. Python and R possess significant development communities and are extremely well supported, due to their popularity. Documentation is excellent and bugs (at least for core libraries) remain scarce.

Open source tools often suffer from a lack of a dedicated commercial support contract and run optimally on systems with less-forgiving user interfaces. A typical Linux server (such as Ubuntu) will often be fully command-line oriented. In addition, Python and R can be slow for certain execution tasks. There are mechanisms for integrating with C++ in order to improve execution speeds, but it requires some experience in multi-language programming. While proprietary software is not immune from dependency/versioning issues, it is far less common to have to deal with incorrect library versions in such environments. Open source operating systems such as Linux can be trickier to administer.

I will venture my personal opinion here and state that I build all of my trading tools with open source technologies. In particular I use Ubuntu, MySQL, Python, C++ and R. The maturity, community size, ability to dig deep if problems occur and lower total cost of ownership (TCO) far outweigh the simplicity of proprietary GUIs and easier installations. Having said that, Microsoft Visual Studio (especially for C++) is a fantastic Integrated Development Environment (IDE) which I would also highly recommend.

Batteries Included

The header of this section refers to the out-of-the-box capabilities of the language: what libraries does it contain and how good are they? This is where mature languages have an advantage over newer variants. C++, Java and Python all now possess extensive libraries for network programming, HTTP, operating system interaction, GUIs, regular expressions (regex), iteration and basic algorithms. C++ is famed for its Standard Template Library (STL), which contains a wealth of high-performance data structures and algorithms for free. Python is known for being able to communicate with nearly any other type of system/protocol (especially the web), mostly through its own standard library. R has a wealth of statistical and econometric tools built in, while MatLab is extremely optimised for any numerical linear algebra code (which can be found in portfolio optimisation and derivatives pricing, for instance). Outside of the standard libraries, C++ makes use of the Boost library, which fills in the missing parts of the standard library.
In fact, many parts of Boost made it into the TR1 standard and subsequently into the C++11 spec, including native support for lambda expressions and concurrency. Python has the high-performance NumPy/SciPy/pandas data analysis library combination, which has gained widespread acceptance for algorithmic trading research. Further, high-performance plugins exist for access to the main relational databases, such as MySQL++ (MySQL/C++), JDBC (Java/MatLab), MySQLdb (MySQL/Python) and psycopg2 (PostgreSQL/Python). Python can even communicate with R via the RPy plugin!

An often overlooked aspect of a trading system while in the initial research and design stage is the connectivity to a broker API. Most APIs natively support C++ and Java, but some also support C# and Python, either directly or with community-provided wrapper code to the C++ APIs. In particular, Interactive Brokers can be connected to via the IBPy plugin. If high performance is required, brokerages will support the FIX protocol.

Conclusion

As is now evident, the choice of programming language(s) for an algorithmic trading system is not straightforward and requires deep thought. The main considerations are performance, ease of development, resiliency and testing, separation of concerns, familiarity, maintenance, source code availability, licensing costs and maturity of libraries. The benefit of a separated architecture is that it allows languages to be plugged in for different aspects of a trading stack, as and when requirements change. A trading system is an evolving tool and it is likely that any language choices will evolve along with it.
