Trading in C# - Writing a multi-strategy server

Writing a trading infrastructure in C# requires a multitude of decisions – as well as using some of the lesser known interfaces of the .NET framework. In this post, we talk about the general architecture of our new TradeAgent. The TradeAgent is a server system that can run a multitude of strategies in multiple markets, using multiple data and trading connections at the same time.

Our base decisions explained

Before going into the technical details, some explanation about the environment the TradeAgent is going to live in. Contrary to end-user tools like NinjaTrader, the TradeAgent is supposed to run a significant number of strategies at the same time. We also expect multiple TradeAgent instances to work together at some point in the future – for example, a number of them trading in Chicago and some in New York.

Because of this scope, a desktop-based approach – like the one utilized in NinjaTrader – is not feasible; it simply does not scale. No risk manager can look at 20 computers at the same time. As such, our server software – the TradeAgent, because we use the suffix Agent for every server running in the Reflexo Framework – will not be desktop based. It will be a Windows service that does not even have a user interface – the user interface instead being provided by both PowerShell cmdlets and a web-based portal.

This fundamental approach allows us to develop one user interface that can show, for example, the risk profile of strategies running on multiple servers. The PowerShell command line interface also allows manipulation of theoretically hundreds of strategies with a single command. This interface is much harder to get started with, but much easier to scale, than the desktop-based approach of less professional tools. It also allows multiple people to look at the strategies at the same time.

Our basis is the .NET Framework 4.5. The API into the server is written in WCF.
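
To give an idea of what that API layer looks like, here is a minimal sketch of a WCF service contract for controlling strategies – the interface and operation names (ITradeAgentControl, StartStrategy and so on) are illustrative assumptions, not the actual Reflexo API:

    using System.ServiceModel;

    // Hypothetical control contract - names are illustrative, not the real Reflexo API.
    [ServiceContract]
    public interface ITradeAgentControl
    {
        [OperationContract]
        void StartStrategy(string strategyId);   // start a strategy from stored configuration

        [OperationContract]
        void StopStrategy(string strategyId);    // stop a running strategy instance

        [OperationContract]
        string[] ListStrategies();               // identifiers of all loaded strategies
    }

Both the PowerShell cmdlets and the web portal would then talk to endpoints exposing contracts of this kind.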

A multi-strategy server – basic requirements

Basic requirements for a multi-strategy server are simple. Sadly, simple requirements do not always translate into easy programming. At the core, a server must be able to:

  • Start – and stop – multiple strategies.
  • Connect to one or more trading and market data providers.
  • Keep a repository of configuration information so that the strategies can be reloaded on a server reset or update.

The result is a pretty complex piece of software – not one, though, that a professional programmer cannot write without too many problems.

Handling configuration information – time for a database server

This may sound counterintuitive – after all, configuration information can also be stored simply in a set of files. But the server will also have to do accounting, and record positions, orders and other elements of the operation. A database has the serious advantage of handling that out of the box.
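
As a rough illustration of what such a repository stores, a strategy configuration record could look like the following – a hypothetical sketch, not our actual schema:

    // Hypothetical configuration record - a sketch, not our actual schema.
    public class StrategyConfiguration
    {
        public int Id { get; set; }                 // primary key in the database
        public string StrategyType { get; set; }    // type name to instantiate
        public string ContainerPath { get; set; }   // folder holding the compiled strategy code
        public string Environment { get; set; }     // routing environment (explained below)
        public string ParametersJson { get; set; }  // strategy specific parameters
        public bool AutoStart { get; set; }         // reload after a server reset or update
    }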

Connecting to one or more providers

This is not as easy a requirement as it sounds, and it has some aspects that are hard to design properly. There are various reasons to connect to multiple providers. On one side, one may need multiple connections to the same broker (to trade customer accounts). On the other side, multiple data feeds may need connectivity. One may also trade different instruments with different providers. In the end, we at least want to get data from multiple feeds at the same time and possibly trade futures (Rithmic data and order handling) and forex (Oanda data and order handling) in the same server instance.

Handling multiple order and data providers means that every provider must carry some sort of internal identifier, and all data subscriptions as well as orders must be routed to the correct provider – again, by means of a tag. In our system, this is done through the concept of an environment. An environment defines the routing endpoints, and strategies are always assigned to an environment. They then get “sessions” (a MarketDataSession and a TradingSession) based on their environment – which route instrument subscriptions and orders to the correct endpoints.
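
In code, the idea can be sketched roughly like this – all type and member names here are illustrative assumptions, not the actual Reflexo types:

    // Illustrative sketch of environment based routing - not the actual Reflexo types.
    public interface IMarketDataProvider { void Subscribe(string symbol); }
    public interface ITradingProvider { void Submit(Order order); }
    public class Order { /* instrument, side, quantity, price ... */ }

    public class TradingEnvironment
    {
        public string Name { get; set; }                       // e.g. "Futures-Rithmic"
        public IMarketDataProvider DataProvider { get; set; }  // market data endpoint
        public ITradingProvider TradingProvider { get; set; }  // order routing endpoint
    }

    public class MarketDataSession
    {
        private readonly TradingEnvironment _environment;
        public MarketDataSession(TradingEnvironment environment) { _environment = environment; }

        // Subscriptions go to whatever data endpoint the environment defines.
        public void Subscribe(string symbol) { _environment.DataProvider.Subscribe(symbol); }
    }

    public class TradingSession
    {
        private readonly TradingEnvironment _environment;
        public TradingSession(TradingEnvironment environment) { _environment = environment; }

        // Orders go to whatever trading endpoint the environment defines.
        public void Submit(Order order) { _environment.TradingProvider.Submit(order); }
    }

A strategy never talks to a provider directly – it only sees its two sessions, and the environment decides where subscriptions and orders actually go.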

Handling multiple strategies in .NET requires AppDomains

Strategies are a tricky point for a number of reasons:

  • They contain logic that is less tested than the infrastructure, if for no other reason than that strategies simply get less development time while the infrastructure is shared.
  • There are possibly a lot more strategy instances than connectivity endpoints. We see hundreds of strategies with just 2-4 data and trading connections.
  • Strategies are less stable – while the server should run for weeks at a time (with a monthly reset on the weekend after the monthly Microsoft patches), we must be able to load and unload strategy instances repeatedly. Even before real trading, strategies start in simulation mode and must be replaceable should an error appear.

The loading and unloading part makes having multiple AppDomains mandatory, because .NET can never unload a loaded class – unloading only ever happens at the level of an entire AppDomain. We also must be able to load strategy instances from different code bases. For us, this materializes in the concept of a StrategyContainer: a folder that holds the compiled code of one or more strategies. The container then gets loaded into a StrategyPool – the AppDomain pointing to a StrategyContainer. A StrategyPool can run multiple strategies, but they must all share the same codebase (StrategyContainer).
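
A minimal sketch of the pool loading, assuming one AppDomain per container folder (the StrategyPoolLoader name is made up for illustration):

    using System;
    using System.IO;

    // Sketch: one AppDomain (StrategyPool) per StrategyContainer folder.
    // The class name is made up for illustration.
    public static class StrategyPoolLoader
    {
        public static AppDomain CreatePool(string containerPath)
        {
            var setup = new AppDomainSetup
            {
                ApplicationBase = containerPath,  // assemblies load from the container folder
                ShadowCopyFiles = "true"          // lets us replace DLLs while the pool runs
            };
            return AppDomain.CreateDomain(
                "StrategyPool-" + Path.GetFileName(containerPath), null, setup);
        }

        public static void UnloadPool(AppDomain pool)
        {
            // Unloading the whole AppDomain is the only way to get the strategy
            // assemblies out of the process again.
            AppDomain.Unload(pool);
        }
    }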

For performance reasons the number of StrategyPools should be kept as low as possible. There is overhead in sending data to a pool and maintaining the necessary state in it. All strategies in a pool, for example, can share the same indicators – this is not possible across pools. Still, for new strategies it may be good to isolate them in separate pools (and containers) so that they can be replaced easily.

A critical point: Performance Metrics

Data is flowing through the system. New market events enter through the market connector (and sometimes order updates through the trading connector) and are distributed to the subscribing pools. Market data is time critical, and it is possible to overload the system quite easily. A trading infrastructure is not overloaded when the CPU has a high average load – it is overloaded when the system cannot handle the spikes in activity.

For this, we regularly enter timing messages into the system. A timing message is a message that collects processing timestamps on its way through the system. We add a timing message every second, and immediately after every order event. Once finished, every timing message contains internal timestamps showing its way through the system, including waiting times in queues as well as processing times in the strategy itself. While not perfect (the overhead of measuring every message would simply be too high), this gives us a good regular measurement plus measurements at the critical moments (around the order handling).
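
A minimal version of such a timing message could look like this – a sketch of the idea, not our actual implementation:

    using System.Collections.Generic;
    using System.Diagnostics;

    // Sketch of a timing message: it collects a timestamp at every processing stage.
    public class TimingMessage
    {
        private readonly List<KeyValuePair<string, long>> _stamps =
            new List<KeyValuePair<string, long>>();

        // Called at every stage: queue entry, pool dispatch, strategy processing, ...
        public void Stamp(string stage)
        {
            _stamps.Add(new KeyValuePair<string, long>(stage, Stopwatch.GetTimestamp()));
        }

        // Elapsed time between two recorded stages, in microseconds.
        public double MicrosecondsBetween(int from, int to)
        {
            long ticks = _stamps[to].Value - _stamps[from].Value;
            return ticks * 1000000.0 / Stopwatch.Frequency;
        }
    }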

Moving data between AppDomains is time critical

As can easily be seen, collecting the market data from the different providers and moving it to the strategies is the most time critical item. Throughput rates can easily reach 50,000 messages per second, and they should be moved through the system as fast as possible. Because the strategies run in their own AppDomains, there is always some serialization involved. As such, choosing the proper approach to move the data through the system is performance critical. The most obvious .NET based approach – to simply call into the separate AppDomain for every message – is extremely inefficient. Remoting is a good technology, but not really written for these scenarios. The best performing solution we have found so far involves named pipes (which on every current version of Windows actually use shared memory when both endpoints are on the same machine, and are very efficient) and a high performance serialization library. Without giving up the exact way we do it: if you are less efficient than ProtoBuf, go back to the drawing board and try again.

At the same time, data flowing through the system should use as few string manipulations as possible. If possible, all strings should be recoded as integers during setup. For example, we use a hierarchy of InstrumentContext management classes that manage all active instruments. Once an instrument is created, it gets a running number. All updates etc. are coded using this number – there is no need for a string comparison during normal market data processing.
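
The string-to-integer recoding is simple to illustrate – a minimal sketch of the idea behind our InstrumentContext hierarchy (the InstrumentRegistry name is invented here):

    using System.Collections.Concurrent;
    using System.Threading;

    // Sketch: every instrument symbol is mapped to a running integer once, during
    // subscription setup - the hot market data path then never compares strings.
    public sealed class InstrumentRegistry
    {
        private readonly ConcurrentDictionary<string, int> _ids =
            new ConcurrentDictionary<string, int>();
        private int _nextId;

        // The only place a symbol string is ever touched.
        public int Register(string symbol)
        {
            return _ids.GetOrAdd(symbol, _ => Interlocked.Increment(ref _nextId));
        }
    }

    // Market data updates then carry only the integer id.
    public struct PriceUpdate
    {
        public int InstrumentId;
        public double Price;
        public long TimestampTicks;
    }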

With highly efficient serialization in place, handling the market data is not really a challenge. Make the wrong decisions here, however, and running even 50 strategies will lead to all kinds of weird behavior as data queues up during busy times.

Threading requires proper planning

Running a high number of strategies is complex and will involve multiple threads. Threading is a very complex scenario – too few threads leave the cores starving, while too many mean context switches that cost significant time. The best approach is one that is time proven, for example by Internet Information Server: take queues for the work to be done and have an “optimal” number of threads working on them. The optimal number of threads is designed to keep the system running under optimal load (and may vary over time) – enough for all cores to be active, not so many that one gets an insanely high number of context switches. In .NET this can be done using, for example, the ThreadPool or – better – the Task subsystem. Just do not create one task per market message; run a task per processing queue instead (a minimal sketch follows at the end of this section). The details are too complex for this (already too long) blog post – but we will likely publish our solution in a separate later post.

Taking it all together – it is a complex program, but not overly so

Any professional senior level developer should be able to come up with a decent architecture, even without the descriptions in this blog post. Less skilled developers will likely get lost in the threading aspect, as well as in the finer points of high performance computing in C#. After all, it is quite rare to have to deal with such time sensitive requirements. Choosing the right technology is at times a lot more about knowing what exists – for example, most developers will never have heard of ProtoBuf and rely on a much slower standard .NET serialization protocol.
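
As promised above, here is a minimal sketch of the task-per-queue pattern, using BlockingCollection – a simplification of the idea, not our production code:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Sketch: one long running task drains one processing queue, so the number of
    // tasks is bound to the number of queues - not to the number of messages.
    public class QueueProcessor<T>
    {
        private readonly BlockingCollection<T> _queue = new BlockingCollection<T>();

        public QueueProcessor(Action<T> handler)
        {
            Task.Factory.StartNew(() =>
            {
                foreach (T item in _queue.GetConsumingEnumerable())
                    handler(item);               // messages are processed one by one
            }, TaskCreationOptions.LongRunning);
        }

        // Producers (the market data dispatcher) just enqueue - no task per message.
        public void Enqueue(T item) { _queue.Add(item); }
    }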

And this is how we build our trading infrastructure...

At NetTecture – the company behind Trade-Robots.com – we have been developing software for customers for many years. We specialize in high performance server applications, so for us it is quite “business as usual” to define an architecture for a system like this. For others it is challenging, and even commercial products fail, as can be seen in the totally single threaded approach of many commercial trading packages.