Low latency trading system design
Cambridge, UK – 23 June 2014 – Argon Design, a design services company specializing in complex digital systems, today announced that it has developed a low latency financial trading system for a proprietary trading house doing latency arbitrage on one of the leading exchanges in the Americas.
The complete trading platform, which covers everything from real-time market data ingest through algorithm support to FIX-based order placement, went live in May 2014. It builds on the ground-breaking results Argon announced in September 2013 for high performance trading using a hybrid design of FPGA and x86 technologies. This combines fast paths implemented in the FPGA, giving nanosecond-level tick-to-trade responses to key events, with opportunity setup, parameter determination, and system management on a high performance x86 server.
The best-of-breed hardware comprises a Supermicro Hyper-Speed server with dual Intel Xeon E5 processors, assembled and supplied by Bios IT, together with an Arista 7124FX switch with integrated Stratix V FPGA.
The FPGA logic uses a number of optimization techniques developed by Argon to maximize the speed advantage. These include inline parsing, anticipation, inference, and gateway arbitration. To feed the various inference techniques, the FPGA includes complex logic to build and maintain order books and statistics. FAST/FIX parsing is done in at most 64 ns and order book building completes in 32 ns. For the lowest latency network interfaces, the system uses Tamba Networks' low latency 1G MAC.
The managing partner of the trading house commented: "As exchanges become more deterministic, it is important to have a platform that provides a speed advantage as well as intelligent trading strategies. The Argon system has given us that edge."
Steve Barlow, CTO of Argon Design, commented: "High performance trading continues to be active in all world markets. As it likely becomes more niche, winning will require access to high performance technologies and the skills to select and assemble the necessary pieces. At Argon we believe in specialist engineering detail – every client is different, so we develop custom systems that give the vital edge."
Argon Design was founded in 2009 and operates at the heart of the renowned Cambridge Technology Cluster with access to the markets' leading expertise. In the financial trading sector, Argon Design assists in-house teams by providing specialist skills or additional resources for projects, as well as complete turnkey designs, in areas such as:
• Heterogeneous hardware and software system architecture
• Device design and production
• FPGA-based development/programming
• Multi-core processor hardware and software design using Tilera, Intel, and others
• GPU and OpenCL development/programming
• Network processing
Low latency programming.
I have been reading a lot about low latency financial systems (especially since the famous case of corporate espionage) and the idea of low latency systems has been on my mind ever since. There are a million applications that could use what these guys are doing, so I would like to learn more about the topic. The thing is I cannot find anything valuable on the subject. Can anybody recommend books, sites, or examples of low latency systems?
12 Answers.
I work for a financial company that produces low latency software for communicating directly with exchanges (for submitting trades and streaming prices). We currently develop mostly in Java. While the low latency side is not an area I work in directly, I have a fair idea of the skillset required, which in my opinion includes the following:
Detailed knowledge of the Java memory model and of techniques to avoid unnecessary garbage collection (e.g., object pooling). Some of the techniques used would normally be considered "anti-patterns" in a traditional OO environment. Detailed knowledge of TCP/IP and UDP multicast, including utilities for debugging and measuring latency (e.g., DTrace on Solaris). Experience profiling applications. Knowledge of the java.nio package, experience developing scalable NIO-based server applications, and experience designing wire protocols. Note also that we typically avoid external frameworks and libraries (e.g., Google Protobuf), preferring to write a lot of bespoke code. Knowledge of FIX and of commercial FIX libraries (e.g., Cameron FIX).
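As a rough illustration of the object pooling idea mentioned above (a minimal single-threaded sketch, not any particular firm's code; class and field names are invented), a pre-allocated pool lets the hot path reuse mutable message objects instead of allocating a new one per tick:

```java
import java.util.ArrayDeque;

// Minimal object pool: pre-allocates mutable events so the hot path
// allocates nothing and the garbage collector has nothing to reclaim.
final class QuotePool {
    static final class Quote {            // mutable, reusable message object
        long instrumentId;
        long priceMicros;                 // fixed-point price, avoids boxing doubles
        void clear() { instrumentId = 0; priceMicros = 0; }
    }

    private final ArrayDeque<Quote> free = new ArrayDeque<>();

    QuotePool(int size) {
        for (int i = 0; i < size; i++) free.push(new Quote());   // allocate up front
    }

    Quote acquire() {                     // reuse instead of `new Quote()`
        Quote q = free.poll();
        return q != null ? q : new Quote();   // fall back to allocation if exhausted
    }

    void release(Quote q) { q.clear(); free.push(q); }

    public static void main(String[] args) {
        QuotePool pool = new QuotePool(1024);
        Quote q = pool.acquire();
        q.instrumentId = 42; q.priceMicros = 101_250_000;
        // ... process on the hot path ...
        pool.release(q);
    }
}
```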
Unfortunately many of these skills can only be developed "on the job," as there is no substitute for the experience gained implementing a price server or trading engine from an exchange's or vendor's specification. However, it is also worth mentioning that our company, at least, tends not to look for specific experience in these (or other) niche areas, preferring to hire people with good analytical and problem-solving skills.
Low latency is a function of many things, the two most important being:
network latency - i.e., the time taken on the network to transmit/receive messages. processing latency - i.e., the time taken by your application to act on a message/event.
So, say you are writing an order matching system: network latency would represent how soon within your network you were able to receive the order matching request, and processing latency would represent the time your application takes to match the order against existing open orders.
Multicast, UDP, reliable multicast, and kernel bypass (supported by Java 7, Informatica Ultra Messaging, and many others) on Infiniband networks are some of the common technologies used by firms in this field.
In addition, there are low latency programming frameworks like the Disruptor (code.google/p/disruptor/) which implement design patterns for dealing with low latency applications. What could kill you is having to write to a database or to log files as part of your main workflow. You will have to come up with unique solutions that meet the requirements of the problem you are trying to solve.
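For context, this is roughly what using the Disruptor looks like; a minimal sketch assuming the LMAX Disruptor 3.x API (com.lmax:disruptor) is on the classpath, with the event type and field names invented for illustration:

```java
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorSketch {
    // Mutable event reused by the ring buffer, so steady-state publishing allocates nothing.
    static final class PriceEvent {
        long instrumentId;
        long priceMicros;
    }

    public static void main(String[] args) {
        // Ring buffer size must be a power of two.
        Disruptor<PriceEvent> disruptor = new Disruptor<>(
                PriceEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Consumer runs on its own thread and is handed events in sequence order.
        disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                System.out.println("instrument=" + event.instrumentId
                        + " price=" + event.priceMicros));

        RingBuffer<PriceEvent> ring = disruptor.start();

        // Publish: claim a slot, fill the pre-allocated event in place, then publish.
        long seq = ring.next();
        try {
            PriceEvent e = ring.get(seq);
            e.instrumentId = 42;
            e.priceMicros = 101_250_000;
        } finally {
            ring.publish(seq);
        }
    }
}
```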
In languages like Java, implementing your application so that it creates (almost) zero garbage becomes extremely important for latency. As Adamski says, having a knowledge of the Java memory model is extremely important. Understand the different JVM implementations and their limitations. The typical Java design patterns around small-object creation are the first things you will throw out of the window - one can never tune the Java garbage collector enough to achieve low latency - the only thing that can be fixed is the garbage.
Well, it is not just "traditional" real-time programming, it is everything. I work for a stock exchange - speed is king. A typical problem is: what is the fastest way to write to a file? the fastest way to serialize an object? and so on.
Anything on real-time programming would fit the bill. It is not exactly what you are looking for, I suspect, but it is an extremely good place to start.
There are a lot of good answers in this post. I would like to add my experience as well.
To get low latency in Java you have to take control of GC; there are many ways to do this, for example: pre-allocate objects (i.e., use the flyweight design pattern), use primitive types and primitive-based data structures, and reuse object instances (for example, keep a system-wide dictionary to reduce creation of new objects, a very good option when reading data from a stream/socket/db).
Try to use something wait-free (which is a bit difficult), or at least lock-free. You can find tons of examples of this.
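A trivial illustration of the lock-free style using only the JDK's atomics (a sketch, far simpler than a real wait-free queue): threads race with compareAndSet instead of blocking on a mutex, so no thread is ever parked by the OS.

```java
import java.util.concurrent.atomic.AtomicLong;

// Lock-free running maximum kept up to date by any number of threads.
public class LockFreeMax {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    public void observe(long value) {
        long current;
        do {
            current = max.get();
            if (value <= current) return;               // nothing to update
        } while (!max.compareAndSet(current, value));   // retry on contention
    }

    public long get() { return max.get(); }

    public static void main(String[] args) {
        LockFreeMax m = new LockFreeMax();
        m.observe(5); m.observe(3); m.observe(9);
        System.out.println(m.get()); // prints 9
    }
}
```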
Use in-memory computing. Memory is cheap; you can keep terabytes of data in memory.
If you can master bit-wise operations, they give very good performance.
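Two small examples of the kind of bit-wise tricks meant here (a sketch): replacing the modulo operator with a mask for power-of-two ring buffers, and packing two 32-bit ids into one long so a map key needs no composite object.

```java
public class BitTricks {
    public static void main(String[] args) {
        // Ring-buffer indexing: with a power-of-two capacity, (seq & mask)
        // replaces the much slower (seq % capacity).
        int capacity = 1 << 16;          // 65536 slots
        int mask = capacity - 1;
        long sequence = 123_456_789L;
        int slot = (int) (sequence & mask);
        System.out.println(slot + " == " + (sequence % capacity));

        // Packing: store an instrument id and a venue id in one long.
        int instrumentId = 987_654;
        int venueId = 42;
        long key = ((long) instrumentId << 32) | (venueId & 0xFFFFFFFFL);
        int unpackedInstrument = (int) (key >>> 32);
        int unpackedVenue = (int) key;
        System.out.println(unpackedInstrument + " " + unpackedVenue);
    }
}
```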
Use mechanical sympathy - see the LMAX Disruptor, an excellent framework.
Read the whitepapers on that site and you will get an insight into what it takes to achieve low latency.
If you are interested in low latency Java development, you should know that it can be done without a real-time (RTSJ) JVM, provided you keep the garbage collector under control. I suggest you take a look at this article about Java development without GC overhead. We also have many other articles on our site about low latency Java components.
I would like to comment on low latency programming. I currently have more than 5 years of experience developing low latency, high throughput execution engines for financial software.
First it is necessary to understand what latency is.
Latency is the time it takes to complete your process. It does not necessarily depend on the development tools you are using, such as Java, C++, etc.; it depends on your programming and system skills.
Suppose you are using Java: a mistake can still add delay to your process. For example, you develop a trading application in which, on every price update, you call a number of functions and so on. This can result in extra variables, unnecessary memory use, and unnecessary loops, all of which can delay the process. The same application, developed with care to avoid the mistakes above, can perform better.
It also depends on your server system; for example, a multiprocessor machine can perform well if your application is multi-threaded.
If I remember correctly, Real-Time Java (RTSJ) is used in this area, although I have not been able to find a good article to link to so far.
Typically, working in low latency environments means understanding call dependencies and how to reduce them to minimize the dependency chain. This includes using data structures and libraries to cache frequently needed data, as well as refactoring existing resources to reduce interdependencies.
How fast are state-of-the-art HFT trading systems today?
You hear about high frequency trading (HFT) all the time and how fast the algorithms are. But I am wondering - what is fast these days?
I am not thinking about the latency caused by the physical distance between an exchange and the server running a trading application, but the latency introduced by the program itself.
To be more specific: what is the elapsed time from events arriving on the wire at an application to that application putting an order/price on the wire? That is, tick-to-trade time.
Are we talking sub-millisecond? Or sub-microsecond?
How do people achieve these latencies? Coding in assembly? FPGAs? Good old C++ code?
An interesting article was recently published in the ACM, providing a lot of detail about today's HFT technology; it is an excellent read:
You have received excellent answers. There is one problem, though - most algo trading is secret. You simply do not know how fast it is. This goes both ways - some may not say how fast they are, because they do not want to. Others may, let us say, "exaggerate," for many reasons (attracting investors or clients, for one).
The rumors about picoseconds, for example, are rather outrageous. 10 nanoseconds and 0.1 nanoseconds are exactly the same thing, because the time needed for the order to reach the trading server is far more than that.
And, most importantly, though it is not what you asked: if you try to trade algorithmically, do not try to be faster, try to be smarter. I have seen very good algorithms that can handle whole seconds of latency and make a lot of money.
I am the CTO of a small company that makes and sells FPGA-based HFT systems. Building our systems on top of the Solarflare Application Onload Engine (AOE), we have been consistently delivering latency from an "interesting" market event on the wire (a 10Gb/s UDP market data feed from ICE or CME) to the first byte of the resulting order message hitting the wire in the 750 to 800 nanosecond range (yes, sub-microsecond). We anticipate that our next-version systems will be in the 704 to 710 nanosecond range. Some people have claimed slightly less, but that is in a lab environment and not actually sitting in a COLO in Chicago clearing orders.
The comments about physics and the "speed of light" are valid but not relevant. Everybody who is serious about HFT has their servers in a COLO in the room next door to the exchange's server.
To get into this sub-microsecond domain you cannot do very much on the host CPU except feed strategy implementation commands to the FPGA; even with technologies like kernel bypass you have 1.5 microseconds of unavoidable overhead. So in this domain everything is playing with FPGAs.
One of the other answers is very honest in saying that in this highly secretive market very few people talk about the tools they use or their performance. Every one of our clients requires that we not even tell anyone that they use our tools, nor disclose anything about how they use them. This not only makes marketing hard, it also prevents a good flow of technical knowledge between peers.
Because of this need to get into exotic systems for the "wicked fast" part of the market, you will find that the quants (the people who come up with the algorithms that we make go fast) are dividing their algos into event-to-response time tiers. At the top of the technology heap are the sub-microsecond systems (like ours). The next tier are custom C++ systems making heavy use of kernel bypass, which are in the 3-5 microsecond range. The next tier are the people who cannot afford to be on a 10Gb/s wire only one router hop from the "exchange"; they may still be in a COLO, but because of a nasty game we call "port roulette" they are in the tens to hundreds of microseconds. Once you get into milliseconds it is hardly HFT any more.
"Sub-40 microseconds" if you want to keep up with Nasdaq. This figure is published here: nasdaqomx/technology/
A good article describing the state of HFT (in 2011) and giving some examples of hardware solutions that make nanoseconds possible: Wall Street's Need for Trading Speed: The Nanosecond Age.
With the race for the lowest "latency" continuing, some market participants are even talking about picoseconds - trillionths of a second.
EDIT: As Nicholas kindly mentioned:
The link mentions a company, Fixnetix, which can "prepare a trade" in 740 ns (i.e., the time from an input event occurring to an order being sent).
For what it is worth, TIBCO's FTL messaging product is sub-500 ns within a machine (shared memory) and a few microseconds using RDMA (Remote Direct Memory Access) within a data center. After that, physics becomes the main part of the equation.
So that is the speed at which data can be moved from the feed to the application that makes the decisions.
At least one system has claimed around 30 ns for interthread messaging, which is probably a tweaked-up benchmark, so anyone quoting lower numbers is using some kind of magic CPU.
Once you are inside the application, it is just a question of how fast the program can make decisions.
These days single-digit microsecond tick-to-trade is the bar for competitive HFT firms. You should be able to reach high single digits using software alone, and < 5 µs with additional hardware.
High frequency trading has been around at least since 1999, after the U.S. Securities and Exchange Commission (SEC) authorized electronic exchanges in 1998. At the turn of the 21st century, HFT trades had an execution time of several seconds, whereas by 2010 this had shrunk to milliseconds and even microseconds.
It will never get below a few microseconds, due to the speed-of-light limit, and only a lucky few, located less than a kilometer or so away, can even dream of getting close to that.
Also, there is no coding involved; to reach that speed you have to go physical... (the guy with the article about the 300 ns switch: that is just the additional latency of that switch, equivalent to 90 m of travel through optical fiber and a bit less through copper).
11 Best Practices for Low Latency Systems.
It has been 8 years since Google noticed that an extra 500 ms of latency dropped traffic by 20% and Amazon realized that 100 ms of extra latency dropped sales by 1%. Ever since, developers have been racing to the bottom of the latency curve, culminating in front-end developers squeezing every last millisecond out of their JavaScript, CSS, and even HTML. What follows is a random walk through a variety of best practices to keep in mind when designing low latency systems. Most of these suggestions are taken to the logical extreme, but of course trade-offs can be made. (Thanks to an anonymous user for asking this question on Quora and getting me to put my thoughts down in writing.)
Choose the right language.
Scripting languages need not apply. Though they keep getting faster and faster, when you are looking to shave those last few milliseconds off your processing time you cannot afford the overhead of an interpreted language. In addition, you will want a strong memory model to enable lock-free programming, so you should be looking at Java, Scala, C++11, or Go.
Keep it all in memory.
I/O will kill your latency, so make sure all of your data is in memory. This generally means managing your own in-memory data structures and maintaining a persistent log, so that you can rebuild state after a machine or process restart. Some options for a persistent log include Bitcask, Krati, LevelDB, and BDB-JE. Alternatively, you might be able to get away with running a local, persisted, in-memory database like Redis or MongoDB (with memory >> data). Note that you may lose some data on a crash due to their background syncing to disk.
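A minimal sketch of the "in-memory state plus persistent log" idea (the record format and file name are made up for illustration; a production system would use one of the logs listed above): every write is appended to disk first, and the in-memory map is rebuilt by replaying the log after a restart.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;

public class InMemoryStore {
    private final Map<String, String> state = new HashMap<>();
    private final BufferedWriter log;

    public InMemoryStore(Path logFile) throws IOException {
        log = Files.newBufferedWriter(logFile,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        if (Files.size(logFile) > 0) replay(logFile);    // rebuild state on startup
    }

    public void put(String key, String value) throws IOException {
        log.write(key + "\t" + value + "\n");            // journal first...
        log.flush();
        state.put(key, value);                           // ...then apply in memory
    }

    public String get(String key) { return state.get(key); }

    private void replay(Path logFile) throws IOException {
        for (String line : Files.readAllLines(logFile)) {
            String[] kv = line.split("\t", 2);
            if (kv.length == 2) state.put(kv[0], kv[1]);
        }
    }

    public static void main(String[] args) throws IOException {
        InMemoryStore store = new InMemoryStore(Paths.get("store.log"));
        store.put("ORCL", "12.37");
        System.out.println(store.get("ORCL"));
    }
}
```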
Keep data and processing colocated.
Network hops are faster than disk seeks, but even so they add a lot of overhead. Ideally, your data should fit entirely in memory on one host. With AWS providing almost 1/4 TB of RAM in the cloud and physical servers offering multiple TBs, this is generally possible. If you need to run on more than one host, you should ensure that your data and requests are properly partitioned so that all the data necessary to serve a given request is available locally.
Keep the system underutilized.
Low latency requires always having resources to process the request. Do not try to run at the limit of what your hardware/software can provide. Always have lots of headroom for bursts, and then some.
Keep context switches to a minimum.
Context switches are a sign that you are doing more compute work than you have resources for. You will want to limit your number of threads to the number of cores on your system and pin each thread to its own core.
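A sketch of thread-per-core pinning. Plain Java cannot set CPU affinity on its own, so this assumes the OpenHFT Java-Thread-Affinity library (net.openhft:affinity) is available; the worker body is only a placeholder spin loop.

```java
import net.openhft.affinity.AffinityLock;

public class PinnedWorker {
    public static void main(String[] args) throws InterruptedException {
        int workers = Math.min(Runtime.getRuntime().availableProcessors(), 4);
        // One worker thread per core: each thread reserves a core and stays there,
        // so the scheduler never migrates it and context switches stay rare.
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(() -> {
                AffinityLock lock = AffinityLock.acquireLock();   // reserve a core
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        // poll this core's input queue / do the work here
                        Thread.onSpinWait();
                    }
                } finally {
                    lock.release();
                }
            }, "worker-" + i);
            t.setDaemon(true);
            t.start();
        }
        Thread.sleep(1000);   // let the workers run briefly, then exit (daemon threads)
    }
}
```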
Keep your reads sequential.
All forms of storage, whether rotational, flash-based, or memory, perform significantly better when used sequentially. When issuing sequential reads to memory, you trigger prefetching at the RAM level as well as at the CPU cache level. Done properly, the next piece of data you need will always be in L1 cache right before you need it. The easiest way to help this along is to make heavy use of arrays of primitive data types. Following pointers, whether through linked lists or through arrays of objects, should be avoided at all costs.
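To make the last point concrete, a small sketch contrasting a primitive array (sequential, prefetch-friendly) with a linked list of boxed values (pointer chasing). The timing is only indicative, not a rigorous benchmark.

```java
import java.util.LinkedList;

public class SequentialVsPointers {
    public static void main(String[] args) {
        int n = 5_000_000;
        long[] prices = new long[n];                  // contiguous primitives
        LinkedList<Long> boxed = new LinkedList<>();  // one heap node per element
        for (int i = 0; i < n; i++) { prices[i] = i; boxed.add((long) i); }

        long t0 = System.nanoTime();
        long sumA = 0;
        for (int i = 0; i < n; i++) sumA += prices[i];   // sequential reads, prefetched
        long t1 = System.nanoTime();

        long sumB = 0;
        for (long v : boxed) sumB += v;                   // pointer chasing, cache misses
        long t2 = System.nanoTime();

        System.out.printf("array: %d ms, linked list: %d ms (sums %d/%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sumA, sumB);
    }
}
```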
Batch your writes.
This sounds counterintuitive, but you can gain significant performance improvements by batching writes. However, there is a misconception that this means the system should wait an arbitrary amount of time before writing. Instead, one thread should spin in a tight loop doing I/O. Each write batches all the data that arrived since the last write was issued. This makes for a very fast and adaptable system.
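A sketch of that batching loop: one writer thread spins, drains whatever has arrived since the last write, and issues a single I/O call for the whole batch (the queue type and the file sink were chosen purely for illustration).

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BatchingWriter {
    public static void main(String[] args) throws IOException, InterruptedException {
        ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();
        FileChannel out = FileChannel.open(Paths.get("journal.log"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);

        // Writer thread: tight loop, one write per pass covering everything queued so far.
        Thread writer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    buf.clear();
                    String msg;
                    while (buf.remaining() > 256 && (msg = pending.poll()) != null) {
                        buf.put(msg.getBytes(StandardCharsets.UTF_8)).put((byte) '\n');
                    }
                    if (buf.position() > 0) {     // batch of everything since last write
                        buf.flip();
                        out.write(buf);
                    } else {
                        Thread.onSpinWait();      // nothing arrived; keep spinning
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        writer.start();

        for (int i = 0; i < 10_000; i++) pending.add("order " + i);   // producers enqueue
        Thread.sleep(200);            // give the writer time to drain
        writer.interrupt();
        writer.join();
        out.close();
    }
}
```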
Respect your cache.
With all of these optimizations in place, memory access quickly becomes a bottleneck. Pinning threads to their own cores helps reduce CPU cache pollution, and sequential I/O also helps preload the cache. Beyond that, you should keep memory sizes down by using primitive data types so that more data fits in cache. Additionally, you can look into cache-oblivious algorithms, which work by recursively breaking the data down until it fits in cache and then doing whatever processing is needed.
Non-blocking as much as possible.
Make friends with non-blocking and wait-free data structures and algorithms. Every time you use a lock you have to go down the stack to the OS to mediate it, which is a huge overhead. Often, if you know what you are doing, you can get around locks by understanding the memory model of the JVM, C++11, or Go.
Async as much as possible.
Any processing, and particularly any I/O, that is not absolutely necessary for building the response should be done outside the critical path.
Parallelize as much as possible.
Any processing, and particularly any I/O, that can happen in parallel should be done in parallel. For instance, if your high availability strategy includes logging transactions to disk and sending transactions to a secondary server, those actions can happen in parallel.
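A sketch of exactly that in Java: the journal write and the replication to the secondary run concurrently, and the acknowledgement waits only for the slower of the two rather than the sum of both (the two tasks are placeholders).

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelHA {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        byte[] transaction = "BUY 100 ACME @ 12.34".getBytes();

        // Both I/O legs of the high-availability strategy run in parallel...
        CompletableFuture<Void> journal =
                CompletableFuture.runAsync(() -> writeToLocalJournal(transaction), pool);
        CompletableFuture<Void> replicate =
                CompletableFuture.runAsync(() -> sendToSecondary(transaction), pool);

        // ...and the acknowledgement waits only for the slower of the two.
        CompletableFuture.allOf(journal, replicate).join();
        System.out.println("transaction durable and replicated");
        pool.shutdown();
    }

    private static void writeToLocalJournal(byte[] tx) { /* append to disk */ }
    private static void sendToSecondary(byte[] tx)     { /* ship over the network */ }
}
```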
Almost all of this comes from following what LMAX is doing with their Disruptor project. Read up on that and follow anything that Martin Thompson does.
Posted by Benjamin Darfler.
29 thoughts on "11 Best Practices for Low Latency Systems"
And happy to be on your list 🙂
Good article. One beef: Go doesn't have a sophisticated memory model like Java or C++11. If your system fits the goroutine-and-channels architecture, well and good; otherwise you are out of luck. AFAIK it is not possible to bypass the runtime scheduler, so there are no native OS threads, and the ability to build your own lock-free data structures (SPSC queues/ring buffers) is also sorely lacking.
Thanks for the reply. While the Go memory model (golang/ref/mem) may not be as robust as Java's or C++11's, I was under the impression that you could still create lock-free data structures using it. For example github/textnode/gringo, github/scryner/lfreequeue and github/mocchira/golfhash. Maybe I was missing something? Admittedly I know much less about Go than about the JVM.
Benjamin, the Go memory model detailed here: golang/ref/mem is mostly in terms of channels and mutexes. I looked through the packages you listed and while the data structures are "lock free," they are not equivalent to what one can build in Java/C++11. The sync package, as of now, has no support for relaxed atomics or C++11 acquire/release semantics. Without that support it is hard to build SPSC data structures as efficient as those possible in C++/Java. The projects you link to use atomic.Add…, which is a sequentially consistent atomic. It is built with XADD, as it should be – github/tonnerre/golang/blob/master/src/pkg/sync/atomic/asm_amd64.s.
I am not trying to knock Go down. It takes minimal effort to write asynchronous and concurrent IO code fast enough for most people. The std library is also highly tuned for performance. Golang also has support for structs, which is missing in Java. But as it stands, I think the simplistic memory model and the goroutine runtime get in the way of building the kind of systems you are talking about.
Thanks for the in-depth reply. Hopefully people find it helpful.
While a 'native' language is probably better, it is not strictly necessary. Facebook has shown us that it can be done in PHP. Granted, they use pre-compiled PHP with their HHVM machine. But it is possible!
Unfortunately PHP still does not have an acceptable memory model, even if HHVM improves execution speed significantly.
While I will struggle to use higher-level languages as much as the next guy, I think the only way to achieve the kind of low latency applications people are looking for is to drop down to a language like C. It seems the harder a language is to write in, the faster it runs.
I would encourage you to look at the work being done in the projects and blogs I linked to. The JVM is quickly becoming the hot spot for these kinds of systems because it provides a strong memory model and garbage collection, which enable lock-free programming that is nearly impossible with a weak or undefined memory model and reference counting for memory management.
I will have a look, Benjamin. Thanks for pointing them out.
Garbage collection for lock-free programming is a bit of a deus ex machina. MPMC and SPSC queues can be built without needing GC. There are also many ways to do lock-free programming without garbage collection, and reference counting is not the only one. Hazard pointers, RCU, proxy collectors, etc. provide support for deferred reclamation and are usually coded in support of one algorithm (not generically), hence they are usually much easier to build. Of course, the trade-off lies in the fact that production-quality GCs have had a lot of work put into them and will help the less experienced programmer write lock-free algorithms (should they be doing that?) without coding deferred reclamation schemes. Some links on the work done in this field: cs.toronto.edu/
Yes, C/C++ only recently gained a memory model, but that does not mean they were completely unsuitable for lock-free code before. GCC and other high-quality compilers have had compiler-specific directives for doing lock-free programming on supported platforms for a really long time – it just was not standardized in the language. Linux and other platforms have provided these primitives for a while as well. Java's unique position was that it provided a formalized memory model guaranteed to work across all supported platforms. While in principle this is amazing, most server-side developers work on one platform (Linux/Windows). They already had the tools to build lock-free code for their platform.
GC is a great tool, but it is not required. It has a cost both in performance and in complexity (all the tricks needed to avoid stop-the-world GC). C++11/C11 already support proper memory models. Let us not forget that JVMs are under no obligation to support the Unsafe API in the future. Unsafe code is "unsafe," so you lose the benefits of Java's safety features. Finally, the unsafe code used to allocate memory and simulate structs in Java looks much uglier than C/C++ structs, where the compiler does this work reliably. C and C++ also provide access to all the low-level, platform-specific power tools like the PAUSE instruction, SSE/AVX/NEON, etc. You can even tweak your code layout through linker scripts! The power provided by the C/C++ toolchain is really unmatched by the JVM. Java is a great platform, however; I think its biggest advantage is that common business logic (90% of your code?) can still rely on GC and the safety features while making use of highly tuned and tested libraries written with Unsafe. This is a big trade-off between getting the last 5% of perf and being productive. A trade-off that makes sense for many people, but a trade-off nonetheless. Writing complicated application code in C/C++ is a nightmare, after all.
Number 12 is missing: do not use garbage-collected languages. GC is a bottleneck at the worst possible moment. It probably stops all threads. It is global. And it distracts the architect from managing one of the most critical resources (CPU-near memory).
Actually, much of this work comes directly from Java. To do lock-free programming you need a clear memory model, which C++ only recently gained. If you know how to work with the GC and not against it, you can build low latency systems quite easily.
I have to agree with Ben here. There has been a lot of progress in GC parallelism over the last decade, with the G1 collector being the most recent incarnation. It can take a bit of time tuning the heap and the various knobs to get the GC to collect with almost no pauses, but that pales in comparison to the development time required to do without GC.
You can even take it a step further and build systems that produce so little garbage that you can easily push your GC outside of your operating window. That is how all the high frequency trading shops do it when they run on the JVM.
> Don't use garbage-collected languages.
Or at least "traditional" garbage-collected languages. Because they differ – while Erlang also has a collector, it does not create bottlenecks, because it does not "stop the world" like Java does while collecting garbage; instead it pauses individual tiny microthreads, on a microsecond scale, so it is not visible in the large.
Rewrite that as "traditional" garbage collection algorithms. At LMAX we use Azul Zing, and just by using a different JVM with a different approach to garbage collection we saw big improvements in performance, because both major and minor GCs are orders of magnitude cheaper.
There are other costs that offset this, of course: you use a much bigger heap, and Zing is not cheap.
Reblogged this on Java Program Examples and commented:
One of the must-read articles for Java programmers: in 10 minutes it gives you the lessons you would otherwise learn only after spending considerable time tuning and developing low latency systems in Java.
Reviving an old thread, but (incredibly) this has to be pointed out:
1) Higher-level languages (e.g., Java) do not expose hardware functionality that is unavailable to lower-level languages (e.g., C); declaring that such-and-such is "completely impossible" in C yet readily achievable in Java is complete rubbish, without acknowledging that Java runs on virtual hardware, where the JVM must synthesize the functionality required by Java but not provided by the physical hardware. If a JVM (e.g., one written in C) can synthesize functionality X, then so can a C programmer.
2) "Lock free" is not what people think it is, except almost by coincidence in certain circumstances such as single-core x86; multicore x86 cannot run lock-free without memory barriers, which have complexities and costs similar to regular locking. As per point 1 above, if lock-free works in a given environment it is because it is supported by the hardware, or emulated/synthesized in software in a virtual environment.
Great points, Julius. The point I was trying (perhaps unsuccessfully) to make is that it is prohibitively difficult to apply many of these patterns in C, since they rely on GC. This goes beyond simply using memory barriers. You also have to consider freeing memory, which gets particularly difficult when you are dealing with lock-free and wait-free algorithms. This is where GC adds a big win. That said, I hear Rust has some very interesting ideas about memory ownership that may begin to address some of these issues.
Trading Floor Architecture.
Executive Overview.
Increased competition, higher market data volumes, and new regulatory demands are some of the driving forces behind industry changes. Firms are trying to maintain their competitive edge by constantly changing their trading strategies and increasing the speed of trading.
A viable architecture has to include the latest technologies from both the network and application domains. It has to be modular to provide a manageable path to evolve each component with minimal disruption to the overall system. Therefore, the architecture proposed by this paper is based on a services framework. We examine services such as ultra-low latency messaging, latency monitoring, multicast, computing, storage, data and application virtualization, trading resilience, trading mobility, and thin client.
The solution to the complex requirements of the next-generation trading platform must be built with a holistic mindset, crossing the boundaries of traditional silos like business and technology or applications and networking.
This document's main goal is to provide guidelines for building an ultra-low latency trading platform while optimizing raw throughput and message rate for both market data and FIX trading orders.
To achieve this, we are proposing the following latency-reduction technologies:
• High-speed interconnect, InfiniBand or 10 Gbps connectivity, for the trading cluster.
• High-speed messaging bus.
• Application acceleration via RDMA without application re-coding.
• Real-time latency monitoring and re-direction of trading traffic to the path with the lowest latency.
Industry Trends and Challenges.
Next-generation trading architectures have to respond to increased demands for speed, volume, and efficiency. For example, the volume of options market data is expected to double after the introduction of options penny trading in 2007. There are also regulatory demands for best execution, which require handling price updates at rates approaching 1M msg/sec. for exchanges. They also require visibility into the freshness of the data and proof that the client got the best possible execution.
In the short term, speed of trading and innovation are key differentiators. A growing number of trades are handled by algorithmic trading applications placed as close as possible to the trade execution venue. A challenge with these "black box" trading engines is that they compound the volume increase by issuing orders only to cancel them and re-submit them. The cause of this behavior is lack of visibility into which venue offers the best execution. The human trader is now a "financial engineer," a "quant" (quantitative analyst) with programming skills who can adjust trading models on the fly. Firms develop new financial instruments like weather derivatives or cross-asset-class trades, and they need to deploy the new applications quickly and in a scalable fashion.
In the long term, competitive differentiation should come from analysis, not just knowledge. The star traders of tomorrow assume risk, achieve true client insight, and consistently beat the market (source IBM: www-935.ibm/services/us/imc/pdf/ge510-6270-trader.pdf).
Business resilience has been one of the main concerns of trading firms since September 11, 2001. Solutions in this area range from redundant data centers situated in different geographies and connected to multiple trading venues, to virtual trader solutions offering power traders most of the functionality of a trading floor at a remote location.
The financial services industry is one of the most demanding in terms of IT requirements. The industry is experiencing an architectural shift towards Services-Oriented Architecture (SOA), Web services, and virtualization of IT resources. SOA takes advantage of the increase in network speed to enable dynamic binding and virtualization of software components. This allows the creation of new applications without losing the investment in existing systems and infrastructure. The concept has the potential to revolutionize the way integration is done, enabling significant reductions in the complexity and cost of such integration (gigaspaces/download/MerrilLynchGigaSpacesWP. pdf).
Another trend is the consolidation of servers into data center server farms, while trader desks have only KVM extensions and ultra-thin clients (e. g., SunRay and HP blade solutions). High-speed Metro Area Networks enable market data to be multicast between different locations, enabling the virtualization of the trading floor.
High-Level Architecture.
Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area.
Functionally there are two application components in the enterprise trading environment, publishers and subscribers. The messaging bus provides the communication path between publishers and subscribers.
There are two types of traffic specific to a trading environment:
• Market Data—Carries pricing information for financial instruments, news, and other value-added information such as analytics. It is unidirectional and very latency sensitive, typically delivered over UDP multicast. It is measured in updates/sec. and in Mbps. Market data flows from one or multiple external feeds, coming from market data providers like stock exchanges, data aggregators, and ECNs. Each provider has their own market data format. The data is received by feed handlers, specialized applications which normalize and clean the data and then send it to data consumers, such as pricing engines, algorithmic trading applications, or human traders. Sell-side firms also send the market data to their clients, buy-side firms such as mutual funds, hedge funds, and other asset managers. Some buy-side firms may opt to receive direct feeds from exchanges, reducing latency.
Figure 1 Trading Architecture for a Buy Side/Sell Side Firm.
There is no industry standard for market data formats. Each exchange has their proprietary format. Financial content providers such as Reuters and Bloomberg aggregate different sources of market data, normalize it, and add news or analytics. Examples of consolidated feeds are RDF (Reuters Data Feed), RWF (Reuters Wire Format), and Bloomberg Professional Services Data.
To deliver lower latency market data, both vendors have released real-time market data feeds which are less processed and have less analytics:
– Bloomberg B-Pipe—With B-Pipe, Bloomberg de-couples their market data feed from their distribution platform because a Bloomberg terminal is not required to get B-Pipe. Wombat and Reuters Feed Handlers have announced support for B-Pipe.
A firm may decide to receive feeds directly from an exchange to reduce latency. The gains in transmission speed can be between 150 milliseconds to 500 milliseconds. These feeds are more complex and more expensive and the firm has to build and maintain their own ticker plant (financetech/featured/showArticle. jhtml? articleID=60404306).
• Trading Orders—This type of traffic carries the actual trades. It is bi-directional and very latency sensitive. It is measured in messages/sec. and Mbps. The orders originate from a buy side or sell side firm and are sent to trading venues like an Exchange or ECN for execution. The most common format for order transport is FIX (Financial Information eXchange—fixprotocol/). The applications which handle FIX messages are called FIX engines and they interface with order management systems (OMS).
An optimization to FIX is called FAST (Fix Adapted for Streaming), which uses a compression schema to reduce message length and, in effect, reduce latency. FAST is targeted more to the delivery of market data and has the potential to become a standard. FAST can also be used as a compression schema for proprietary market data formats.
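To make the FIX tag=value wire format concrete, here is a small sketch that assembles a minimal NewOrderSingle-style message by hand; the tags shown are standard FIX fields but the values are purely illustrative, and a real FIX engine such as those mentioned above also handles sessions, sequence numbers, and validation.

```java
public class FixMessageSketch {
    private static final char SOH = '\u0001';   // FIX field delimiter

    public static void main(String[] args) {
        // Body: a few representative tag=value fields of a NewOrderSingle (35=D).
        String body = "35=D" + SOH + "49=BUYSIDE" + SOH + "56=EXCHANGE" + SOH
                + "34=215" + SOH + "11=ORDER-1" + SOH + "55=ACME" + SOH
                + "54=1" + SOH + "38=100" + SOH + "40=2" + SOH + "44=12.34" + SOH;

        // Header precedes the body; BodyLength (9) counts the bytes after itself
        // up to but not including the CheckSum field.
        String head = "8=FIX.4.2" + SOH + "9=" + body.length() + SOH;

        // CheckSum (10) is the byte sum of everything before it, modulo 256,
        // rendered as exactly three digits.
        int sum = 0;
        for (char c : (head + body).toCharArray()) sum += c;
        String msg = head + body + String.format("10=%03d%s", sum % 256, SOH);

        System.out.println(msg.replace(SOH, '|'));   // print with visible delimiters
    }
}
```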
To reduce latency, firms may opt to establish Direct Market Access (DMA).
DMA is the automated process of routing a securities order directly to an execution venue, therefore avoiding the intervention by a third-party (towergroup/research/content/glossary. jsp? page=1&glossaryId=383). DMA requires a direct connection to the execution venue.
The messaging bus is middleware software from vendors such as Tibco, 29West, Reuters RMDS, or an open source platform such as AMQP. The messaging bus uses a reliable mechanism to deliver messages. The transport can be done over TCP/IP (TibcoEMS, 29West, RMDS, and AMQP) or UDP/multicast (TibcoRV, 29West, and RMDS). One important concept in message distribution is the "topic stream," which is a subset of market data defined by criteria such as ticker symbol, industry, or a certain basket of financial instruments. Subscribers join topic groups mapped to one or multiple sub-topics in order to receive only the relevant information. In the past, all traders received all market data. At the current volumes of traffic, this would be sub-optimal.
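As a small illustration of the UDP multicast delivery model described above, a sketch of a Java subscriber joining a multicast group to receive a market data stream; the group address and port are placeholders, and a real feed handler would go on to parse and normalize each datagram.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;

public class MulticastSubscriber {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.1.1");   // placeholder group
        int port = 30001;                                         // placeholder port

        try (MulticastSocket socket = new MulticastSocket(port)) {
            // Join on the default interface (pass a NetworkInterface to pick one explicitly).
            socket.joinGroup(new InetSocketAddress(group, port), null);

            byte[] buf = new byte[1500];                          // one MTU-sized datagram
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            while (true) {
                socket.receive(packet);                           // blocks for the next update
                // hand packet.getData()[0..packet.getLength()) to the feed handler
                System.out.println("received " + packet.getLength() + " bytes");
            }
        }
    }
}
```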
The network plays a critical role in the trading environment. Market data is carried to the trading floor where the human traders are located via a Campus or Metro Area high-speed network. High availability and low latency, as well as high throughput, are the most important metrics.
The high performance trading environment has most of its components in the Data Center server farm. To minimize latency, the algorithmic trading engines need to be located in the proximity of the feed handlers, FIX engines, and order management systems. An alternate deployment model has the algorithmic trading systems located at an exchange or a service provider with fast connectivity to multiple exchanges.
Deployment Models.
There are two deployment models for a high performance trading platform. Firms may choose to have a mix of the two:
• Data Center of the trading firm (Figure 2)—This is the traditional model, where a full-fledged trading platform is developed and maintained by the firm with communication links to all the trading venues. Latency varies with the speed of the links and the number of hops between the firm and the venues.
Figure 2 Traditional Deployment Model.
• Co-location at the trading venue (exchanges, financial service providers (FSP)) (Figure 3)
The trading firm deploys its automated trading platform as close as possible to the execution venues to minimize latency.
Figure 3 Hosted Deployment Model.
Services-Oriented Trading Architecture.
We are proposing a services-oriented framework for building the next-generation trading architecture. This approach provides a conceptual framework and an implementation path based on modularization and minimization of inter-dependencies.
This framework provides firms with a methodology to:
• Evaluate their current state in terms of services.
• Prioritize services based on their value to the business.
• Evolve the trading platform to the desired state using a modular approach.
The high performance trading architecture relies on the following services, as defined by the services architecture framework represented in Figure 4.
Figure 4 Service Architecture Framework for High Performance Trading.
Table 1 Service Descriptions and Technologies.
• Ultra-low latency messaging (detailed in the next section).
• Latency monitoring: Instrumentation—appliances, software agents, and router modules.
• Computing: OS and I/O virtualization, Remote Direct Memory Access (RDMA), TCP Offload Engines (TOE).
• Application virtualization: Middleware which parallelizes application processing.
• Data virtualization: Middleware which speeds up data access for applications, e.g., in-memory caching.
• Multicast: Hardware-assisted multicast replication throughout the network; multicast Layer 2 and Layer 3 optimizations.
• Storage: Virtualization of storage hardware (VSANs), data replication, remote backup, and file virtualization.
• Trading resilience and mobility: Local and site load balancing and high availability campus networks.
• Wide Area application services: Acceleration of applications over a WAN connection for traders residing off-campus.
• Thin client service: De-coupling of the computing resources from the end-user facing terminals.
Ultra-Low Latency Messaging Service.
This service is provided by the messaging bus, which is a software system that solves the problem of connecting many-to-many applications. The system consists of:
• A set of pre-defined message schemas.
• A set of common command messages.
• A shared application infrastructure for sending the messages to recipients. The shared infrastructure can be based on a message broker or on a publish/subscribe model.
The key requirements for the next-generation messaging bus are (source 29West):
• Lowest possible latency (e. g., less than 100 microseconds)
• Stability under heavy load (e. g., more than 1.4 million msg/sec.)
• Control and flexibility (rate control and configurable transports)
There are efforts in the industry to standardize the messaging bus. Advanced Message Queueing Protocol (AMQP) is an example of an open standard championed by J. P. Morgan Chase and supported by a group of vendors such as Cisco, Envoy Technologies, Red Hat, TWIST Process Innovations, Iona, 29West, and iMatix. Two of the main goals are to provide a more simple path to inter-operability for applications written on different platforms and modularity so that the middleware can be easily evolved.
In very general terms, an AMQP server is analogous to an E-mail server with each exchange acting as a message transfer agent and each message queue as a mailbox. The bindings define the routing tables in each transfer agent. Publishers send messages to individual transfer agents, which then route the messages into mailboxes. Consumers take messages from mailboxes, which creates a powerful and flexible model that is simple (source: amqp/tikiwiki/tiki-index. php? page=OpenApproach#Why_AMQP_).
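For illustration, a sketch of that exchange/queue/binding model using the RabbitMQ Java client, one AMQP implementation; the broker host, exchange, queue, and routing key names are placeholders.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class AmqpSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // placeholder broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Exchange = transfer agent, queue = mailbox, binding = routing table entry.
            channel.exchangeDeclare("marketdata", "topic");
            channel.queueDeclare("tech-desk", false, false, false, null);
            channel.queueBind("tech-desk", "marketdata", "equity.nasdaq.tech.*");

            // Publisher sends to the exchange; the binding routes it to the mailbox.
            byte[] payload = "ACME 12.34".getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("marketdata", "equity.nasdaq.tech.ACME", null, payload);

            // Consumer takes messages from the mailbox.
            channel.basicConsume("tech-desk", true,
                    (consumerTag, delivery) -> System.out.println(
                            new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    consumerTag -> { });

            Thread.sleep(500);   // let the delivery arrive before the connection closes
        }
    }
}
```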
Latency Monitoring Service.
The main requirements for this service are:
• Sub-millisecond granularity of measurements.
• Near-real time visibility without adding latency to the trading traffic.
• Ability to differentiate application processing latency from network transit latency.
• Ability to handle high message rates.
• Provide a programmatic interface for trading applications to receive latency data, thus enabling algorithmic trading engines to adapt to changing conditions.
• Correlate network events with application events for troubleshooting purposes.
Latency can be defined as the time interval between when a trade order is sent and when the same order is acknowledged and acted upon by the receiving party.
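At the application level, one common way to capture the processing component of that interval is to timestamp with a monotonic clock when the triggering market event is read and again when the order is handed to the wire; a sketch follows, where the strategy and send steps are placeholders and the histogram stands in for the programmatic interface mentioned in the requirements above.

```java
public class TickToTradeTimer {
    // Histogram buckets of 1 microsecond each, up to 1 ms.
    private final long[] bucketsMicros = new long[1000];

    public void onMarketEvent(byte[] update) {
        long t0 = System.nanoTime();          // monotonic clock, immune to wall-clock jumps

        byte[] order = decide(update);        // strategy: placeholder
        sendOrder(order);                     // write to the wire: placeholder

        long micros = (System.nanoTime() - t0) / 1_000;
        bucketsMicros[(int) Math.min(micros, bucketsMicros.length - 1)]++;
    }

    private byte[] decide(byte[] update) { return update; }
    private void sendOrder(byte[] order)  { /* hand off to the FIX engine */ }

    public static void main(String[] args) {
        TickToTradeTimer timer = new TickToTradeTimer();
        for (int i = 0; i < 100_000; i++) timer.onMarketEvent(new byte[64]);
        long under10us = 0;
        for (int i = 0; i < 10; i++) under10us += timer.bucketsMicros[i];
        System.out.println(under10us + " of 100000 events handled in under 10 µs");
    }
}
```

Note that this only measures application processing latency, not network transit, which is exactly the distinction the monitoring service is required to make.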
Addressing the latency issue is a complex problem, requiring a holistic approach that identifies all sources of latency and applies different technologies at different layers of the system.
Figure 5 depicts the variety of components that can introduce latency at each layer of the OSI stack. It also maps each source of latency with a possible solution and a monitoring solution. This layered approach can give firms a more structured way of attacking the latency issue, whereby each component can be thought of as a service and treated consistently across the firm.
Maintaining an accurate measure of the dynamic state of this time interval across alternative routes and destinations can be of great assistance in tactical trading decisions. The ability to identify the exact location of delays, whether in the customer's edge network, the central processing hub, or the transaction application level, significantly determines the ability of service providers to meet their trading service-level agreements (SLAs). For buy-side and sell-side firms, as well as for market-data syndicators, the quick identification and removal of bottlenecks translates directly into enhanced trade opportunities and revenue.
Figure 5 Latency Management Architecture.
Cisco Low-Latency Monitoring Tools.
Traditional network monitoring tools operate with minutes or seconds granularity. Next-generation trading platforms, especially those supporting algorithmic trading, require latencies less than 5 ms and extremely low levels of packet loss. On a Gigabit LAN, a 100 ms microburst can cause 10,000 transactions to be lost or excessively delayed.
Cisco offers its customers a choice of tools to measure latency in a trading environment:
• Bandwidth Quality Manager (BQM) (OEM from Corvil)
• Cisco AON-based Financial Services Latency Monitoring Solution (FSMS)
Bandwidth Quality Manager.
Bandwidth Quality Manager (BQM) 4.0 is a next-generation network application performance management product that enables customers to monitor and provision their network for controlled levels of latency and loss performance. While BQM is not exclusively targeted at trading networks, its microsecond visibility combined with intelligent bandwidth provisioning features make it ideal for these demanding environments.
Cisco BQM 4.0 implements a broad set of patented and patent-pending traffic measurement and network analysis technologies that give the user unprecedented visibility and understanding of how to optimize the network for maximum application performance.
Cisco BQM is now supported on the product family of Cisco Application Deployment Engine (ADE). The Cisco ADE product family is the platform of choice for Cisco network management applications.
BQM Benefits.
Cisco BQM micro-visibility is the ability to detect, measure, and analyze latency, jitter, and loss inducing traffic events down to microsecond levels of granularity with per packet resolution. This enables Cisco BQM to detect and determine the impact of traffic events on network latency, jitter, and loss. Critical for trading environments is that BQM can support latency, loss, and jitter measurements one-way for both TCP and UDP (multicast) traffic. This means it reports seamlessly for both trading traffic and market data feeds.
BQM allows the user to specify a comprehensive set of thresholds (against microburst activity, latency, loss, jitter, utilization, etc.) on all interfaces. BQM then operates a background rolling packet capture. Whenever a threshold violation or other potential performance degradation event occurs, it triggers Cisco BQM to store the packet capture to disk for later analysis. This allows the user to examine in full detail both the application traffic that was affected by performance degradation ("the victims") and the traffic that caused the performance degradation ("the culprits"). This can significantly reduce the time spent diagnosing and resolving network performance issues.
BQM is also able to provide detailed bandwidth and quality of service (QoS) policy provisioning recommendations, which the user can directly apply to achieve desired network performance.
BQM Measurements Illustrated.
To understand the difference between some of the more conventional measurement techniques and the visibility provided by BQM, we can look at some comparison graphs. In the first set of graphs (Figure 6 and Figure 7), we see the difference between the latency measured by BQM's Passive Network Quality Monitor (PNQM) and the latency measured by injecting ping packets every 1 second into the traffic stream.
In Figure 6, we see the latency reported by 1-second ICMP ping packets for real network traffic (it is divided by 2 to give an estimate for the one-way delay). It shows the delay comfortably below about 5ms for almost all of the time.
Figure 6 Latency Reported by 1-Second ICMP Ping Packets for Real Network Traffic.
In Figure 7, we see the latency reported by PNQM for the same traffic at the same time. Here we see that by measuring the one-way latency of the actual application packets, we get a radically different picture. Here the latency is seen to be hovering around 20 ms, with occasional bursts far higher. The explanation is that because ping is sending packets only every second, it is completely missing most of the application traffic latency. In fact, ping results typically only indicate round trip propagation delay rather than realistic application latency across the network.
Figure 7 Latency Reported by PNQM for Real Network Traffic.
In the second example (Figure 8), we see the difference in reported link load or saturation levels between a 5-minute average view and a 5 ms microburst view (BQM can report on microbursts down to about 10-100 nanosecond accuracy). The green line shows the average utilization at 5-minute averages to be low, maybe up to 5 Mbits/s. The dark blue plot shows the 5 ms microburst activity reaching between 75 Mbits/s and 100 Mbits/s, effectively the speed of the LAN. BQM shows this level of granularity for all applications and it also gives clear provisioning rules to enable the user to control or neutralize these microbursts.
Figure 8 Difference in Reported Link Load Between a 5-Minute Average View and a 5 ms Microburst View.
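The gap between an averaged view and a microburst view is easy to reproduce. The sketch below (illustrative only, with hypothetical packet arrivals; it is not how BQM computes its reports) bins timestamped packet sizes into 5 ms windows and compares the peak window rate with the long-interval average:
import java.util.HashMap;
import java.util.Map;
// Illustrative sketch: why a long-interval average hides short microbursts.
// Input: hypothetical (timestampMillis, bytes) samples; output: average vs. peak 5 ms rate.
class MicroburstDemo {
    public static void main(String[] args) {
        long[][] samples = {
            {0, 1500}, {1, 1500}, {2, 60000}, {3, 60000}, {4, 1500}, {300000, 1500}
        };
        long windowMs = 5, totalBytes = 0, spanMs = 300000;   // 5 ms bins over a 5-minute span
        Map<Long, Long> bins = new HashMap<>();
        for (long[] s : samples) {
            totalBytes += s[1];
            bins.merge(s[0] / windowMs, s[1], Long::sum);
        }
        double avgMbps = totalBytes * 8.0 / (spanMs / 1000.0) / 1e6;
        double peakMbps = bins.values().stream().mapToLong(Long::longValue).max().orElse(0)
                          * 8.0 / (windowMs / 1000.0) / 1e6;
        System.out.printf("average over 5 min: %.3f Mbit/s, peak 5 ms window: %.1f Mbit/s%n",
                          avgMbps, peakMbps);
    }
}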
BQM Deployment in the Trading Network.
Figure 9 shows a typical BQM deployment in a trading network.
Figure 9 Typical BQM Deployment in a Trading Network.
BQM can then be used to answer these types of questions:
• Are any of my Gigabit LAN core links saturated for more than X milliseconds? Is this causing loss? Which links would most benefit from an upgrade to Etherchannel or 10 Gigabit speeds?
• What application traffic is causing the saturation of my 1 Gigabit links?
• Is any of the market data experiencing end-to-end loss?
• How much additional latency does the failover data center experience? Is this link sized correctly to deal with microbursts?
• Are my traders getting low latency updates from the market data distribution layer? Are they seeing any delays greater than X milliseconds?
Being able to answer these questions simply and effectively saves time and money in running the trading network.
BQM is an essential tool for gaining visibility in market data and trading environments. It provides granular end-to-end latency measurements in complex infrastructures that experience high-volume data movement. Effectively detecting microbursts in sub-millisecond levels and receiving expert analysis on a particular event is invaluable to trading floor architects. Smart bandwidth provisioning recommendations, such as sizing and what-if analysis, provide greater agility to respond to volatile market conditions. As the explosion of algorithmic trading and increasing message rates continues, BQM, combined with its QoS tool, provides the capability of implementing QoS policies that can protect critical trading applications.
Cisco Financial Services Latency Monitoring Solution.
Cisco and Trading Metrics have collaborated on latency monitoring solutions for FIX order flow and market data monitoring. Cisco AON technology is the foundation for a new class of network-embedded products and solutions that help merge intelligent networks with application infrastructure, based on either service-oriented or traditional architectures. Trading Metrics is a leading provider of analytics software for network infrastructure and application latency monitoring purposes (tradingmetrics/).
The Cisco AON Financial Services Latency Monitoring Solution (FSMS) correlates two kinds of events at the point of observation:
• Network events correlated directly with coincident application message handling.
• Trade order flow and matching market update events.
Using time stamps asserted at the point of capture in the network, real-time analysis of these correlated data streams permits precise identification of bottlenecks across the infrastructure while a trade is being executed or market data is being distributed. By monitoring and measuring latency early in the cycle, financial companies can make better decisions about which network service—and which intermediary, market, or counterparty—to select for routing trade orders. Likewise, this knowledge allows more streamlined access to updated market data (stock quotes, economic news, etc.), which is an important basis for initiating, withdrawing from, or pursuing market opportunities.
The components of the solution are:
• AON hardware in three form factors:
– AON Network Module for Cisco 2600/2800/3700/3800 routers.
– AON Blade for the Cisco Catalyst 6500 series.
– AON 8340 Appliance.
• Trading Metrics M&A 2.0 software, which provides the monitoring and alerting application, displays latency graphs on a dashboard, and issues alerts when slowdowns occur (tradingmetrics/TM_brochure.pdf).
Figure 10 AON-Based FIX Latency Monitoring.
Cisco IP SLA.
Cisco IP SLA is an embedded network management tool in Cisco IOS which allows routers and switches to generate synthetic traffic streams which can be measured for latency, jitter, packet loss, and other criteria (cisco/go/ipsla).
Two key concepts are the source of the generated traffic and the target. Both of these run an IP SLA "responder," which has the responsibility to timestamp the control traffic before it is sourced and returned by the target (for a round trip measurement). Various traffic types can be sourced within IP SLA and they are aimed at different metrics and target different services and applications. The UDP jitter operation is used to measure one-way and round-trip delay and report variations. As the traffic is time stamped on both sending and target devices using the responder capability, the round trip delay is characterized as the delta between the two timestamps.
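The way responder timestamps are used can be sketched as follows (a simplified illustration of the general technique with made-up timestamp values, not the IP SLA implementation itself): the time the responder spends holding the probe is subtracted from the round trip observed at the source, leaving only network delay:
// Illustrative sketch of responder-assisted round-trip measurement.
// Timestamp values are hypothetical and expressed in microseconds.
class RoundTripCalc {
    public static void main(String[] args) {
        long t1 = 1_000_000;   // source: probe sent
        long t2 = 1_000_450;   // responder: probe received
        long t3 = 1_000_470;   // responder: reply sent
        long t4 = 1_000_930;   // source: reply received
        long roundTrip     = t4 - t1;             // includes responder processing time
        long responderHold = t3 - t2;             // time spent inside the responder
        long networkRtt    = roundTrip - responderHold;
        System.out.println("network RTT (us): " + networkRtt);   // 910 us with these values
    }
}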
A new feature was introduced in IOS 12.3(14)T, IP SLA Sub Millisecond Reporting, which allows for timestamps to be displayed with a resolution in microseconds, thus providing a level of granularity not previously available. This new feature has now made IP SLA relevant to campus networks where network latency is typically in the range of 300-800 microseconds and the ability to detect trends and spikes (brief trends) based on microsecond granularity counters is a requirement for customers engaged in time-sensitive electronic trading environments.
As a result, IP SLA is now being considered by significant numbers of financial organizations as they are all faced with requirements to:
• Report baseline latency to their users.
• Trend baseline latency over time.
• Respond quickly to traffic bursts that cause changes in the reported latency.
Sub-millisecond reporting is necessary for these customers, since many campus networks and backbones are currently delivering under a second of latency across several switch hops. Electronic trading environments have generally worked to eliminate or minimize all areas of device and network latency to deliver rapid order fulfillment to the business. Reporting that network response times are "just under one millisecond" is no longer sufficient; the granularity of latency measurements reported across a network segment or backbone needs to be closer to 300-800 microseconds with a degree of resolution of 100 microseconds.
IP SLA recently added support for IP multicast test streams, which can measure market data latency.
A typical network topology is shown in Figure 11 with the IP SLA shadow routers, sources, and responders.
Figure 11 IP SLA Deployment.
Computing Services.
Computing services cover a wide range of technologies with the goal of eliminating memory and CPU bottlenecks created by the processing of network packets. Trading applications consume high volumes of market data and the servers have to dedicate resources to processing network traffic instead of application processing.
• Transport processing—At high speeds, network packet processing can consume a significant amount of server CPU cycles and memory. An established rule of thumb states that 1Gbps of network bandwidth requires 1 GHz of processor capacity (source Intel white paper on I/O acceleration intel/technology/ioacceleration/306517.pdf).
• Intermediate buffer copying—In a conventional network stack implementation, data needs to be copied by the CPU between network buffers and application buffers. This overhead is worsened by the fact that memory speeds have not kept up with increases in CPU speeds. For example, processors like the Intel Xeon are approaching 4 GHz, while RAM chips hover around 400MHz (for DDR 3200 memory) (source Intel intel/technology/ioacceleration/306517.pdf).
• Context switching—Every time an individual packet needs to be processed, the CPU performs a context switch from application context to network traffic context. This overhead could be reduced if the switch occurred only when the whole application buffer is complete.
Figure 12 Sources of Overhead in Data Center Servers.
• TCP Offload Engine (TOE)—Offloads transport processor cycles to the NIC. Moves TCP/IP protocol stack buffer copies from system memory to NIC memory.
• Remote Direct Memory Access (RDMA)—Enables a network adapter to transfer data directly from application to application without involving the operating system. Eliminates intermediate and application buffer copies (memory bandwidth consumption).
• Kernel bypass — Direct user-level access to hardware. Dramatically reduces application context switches.
Figure 13 RDMA and Kernel Bypass.
InfiniBand is a point-to-point (switched fabric) bidirectional serial communication link which implements RDMA, among other features. Cisco offers an InfiniBand switch, the Server Fabric Switch (SFS): cisco/application/pdf/en/us/guest/netsol/ns500/c643/cdccont_0900aecd804c35cb.pdf.
Figure 14 Typical SFS Deployment.
Trading applications benefit from the reduction in latency and latency variability, as demonstrated by a test performed by Stac Research with the Cisco SFS and Wombat feed handlers.
Application Virtualization Service.
De-coupling applications from the underlying OS and server hardware enables them to run as network services. One application can be run in parallel on multiple servers, or multiple applications can be run on the same server, as the best resource allocation dictates. This decoupling enables better load balancing and disaster recovery for business continuance strategies. The process of re-allocating computing resources to an application is dynamic. Using an application virtualization system like Data Synapse's GridServer, applications can migrate, using pre-configured policies, to under-utilized servers in a supply-matches-demand process (networkworld/supp/2005/ndc1/022105virtual.html?page=2).
There are many business advantages for financial firms who adopt application virtualization:
• Faster time to market for new products and services.
• Faster integration of firms following merger and acquisition activity.
• Increased application availability.
• Better workload distribution, which creates more "head room" for processing spikes in trading volume.
• Operational efficiency and control.
• Reduction in IT complexity.
Currently, application virtualization is not used in the trading front office. One use case is risk modeling, such as Monte Carlo simulations. As the technology evolves, it is conceivable that some of the trading platforms will adopt it.
Data Virtualization Service.
To effectively share resources across distributed enterprise applications, firms must be able to leverage data across multiple sources in real-time while ensuring data integrity. With solutions from data virtualization software vendors such as Gemstone or Tangosol (now Oracle), financial firms can access heterogeneous sources of data as a single system image that enables connectivity between business processes and unrestrained application access to distributed caching. The net result is that all users have instant access to these data resources across a distributed network (gridtoday/03/0210/101061.html).
This is called a data grid and is the first step in the process of creating what Gartner calls Extreme Transaction Processing (XTP) (gartner/DisplayDocument?ref=g_search&id=500947). Technologies such as data and applications virtualization enable financial firms to perform real-time complex analytics, event-driven applications, and dynamic resource allocation.
One example of data virtualization in action is a global order book application. An order book is the repository of active orders that is published by the exchange or other market makers. A global order book aggregates orders from around the world from markets that operate independently. The biggest challenge for this application is scalability over the WAN, because it has to maintain state. Today's data grids are localized in data centers connected by Metro Area Networks (MAN). This is mainly because the applications themselves have limits—they have been developed without the WAN in mind.
Figure 15 GemStone GemFire Distributed Caching.
Before data virtualization, applications used database clustering for failover and scalability. This solution is limited by the performance of the underlying database. Failover is slower because the data is committed to disc. With data grids, the data which is part of the active state is cached in memory, which reduces drastically the failover time. Scaling the data grid means just adding more distributed resources, providing a more deterministic performance compared to a database cluster.
Multicast Service.
Market data delivery is a perfect example of an application that needs to deliver the same data stream to hundreds and potentially thousands of end users. Market data services have been implemented with TCP or UDP broadcast as the network layer, but those implementations have limited scalability. Using TCP requires a separate socket and sliding window on the server for each recipient. UDP broadcast requires a separate copy of the stream for each destination subnet. Both of these methods exhaust the resources of the servers and the network. The server side must transmit and service each of the streams individually, which requires larger and larger server farms. On the network side, the required bandwidth for the application increases in a linear fashion. For example, to send a 1 Mbps stream to 1,000 recipients using TCP requires 1 Gbps of bandwidth.
IP multicast is the only way to scale market data delivery. To deliver a 1 Mbps stream to 1000 recipients, IP multicast would require 1 Mbps. The stream can be delivered by as few as two servers—one primary and one backup for redundancy.
There are two main phases of market data delivery to the end user. In the first phase, the data stream must be brought from the exchange into the brokerage's network. Typically the feeds are terminated in a data center on the customer premise. The feeds are then processed by a feed handler, which may normalize the data stream into a common format and then republish into the application messaging servers in the data center.
The second phase involves injecting the data stream into the application messaging bus which feeds the core infrastructure of the trading applications. The large brokerage houses have thousands of applications that use the market data streams for various purposes, such as live trades, long term trending, arbitrage, etc. Many of these applications listen to the feeds and then republish their own analytical and derivative information. For example, a brokerage may compare the prices of CSCO to the option prices of CSCO on another exchange and then publish ratings which a different application may monitor to determine how much they are out of synchronization.
Figure 16 Market Data Distribution Players.
The delivery of these data streams is typically over a reliable multicast transport protocol, traditionally Tibco Rendezvous. Tibco RV operates in a publish and subscribe environment. Each financial instrument is given a subject name, such as CSCO.last. Each application server can request the individual instruments of interest by their subject name and receive just that subset of the information. This is called subject-based forwarding or filtering. Subject-based filtering is patented by Tibco.
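Subject-based filtering can be pictured with a small sketch (a generic illustration, not the TIBCO Rendezvous API; the subject names and handlers are hypothetical): each application registers interest in subjects such as CSCO.last, and a published message is delivered only to matching subscribers:
import java.util.*;
import java.util.function.Consumer;
// Generic illustration of subject-based publish/subscribe filtering.
// This is not the TIBCO Rendezvous API; subjects and handlers are hypothetical.
class SubjectBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();
    void subscribe(String subject, Consumer<String> handler) {
        subscribers.computeIfAbsent(subject, s -> new ArrayList<>()).add(handler);
    }
    void publish(String subject, String payload) {
        // Only applications that asked for this subject receive the message.
        subscribers.getOrDefault(subject, List.of())
                   .forEach(handler -> handler.accept(payload));
    }
    public static void main(String[] args) {
        SubjectBus bus = new SubjectBus();
        bus.subscribe("CSCO.last", px -> System.out.println("trading app got " + px));
        bus.publish("CSCO.last", "23.41");   // delivered to the subscriber above
        bus.publish("INTC.last", "34.02");   // filtered out, nobody subscribed
    }
}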
A distinction should be made between the first and second phases of market data delivery. The delivery of market data from the exchange to the brokerage is mostly a one-to-many application. The only exception to the unidirectional nature of market data may be retransmission requests, which are usually sent using unicast. The trading applications, however, are definitely many-to-many applications and may interact with the exchanges to place orders.
Figure 17 Market Data Architecture.
Design Issues.
Number of Groups/Channels to Use.
Many application developers consider using thousands of multicast groups to give them the ability to divide up products or instruments into small buckets. Normally these applications send many small messages as part of their information bus, and several messages are usually packed into each packet received by many users. Sending fewer messages in each packet increases the per-message overhead.
In the extreme case, sending only one message in each packet quickly reaches the point of diminishing returns—there is more overhead sent than actual data. Application developers must find a reasonable compromise between the number of groups and breaking up their products into logical buckets.
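The diminishing-returns argument is simple arithmetic, sketched below with assumed header and message sizes: the fixed per-packet framing is amortized over however many messages share the packet, so one message per packet maximizes the fraction of bytes spent on overhead:
// Simple arithmetic behind batching messages into packets.
// Header and message sizes are hypothetical, chosen only for illustration.
class BatchingOverhead {
    public static void main(String[] args) {
        int headerBytes = 54;    // e.g., Ethernet + IP + UDP framing (approximate)
        int messageBytes = 40;   // hypothetical market data message
        for (int msgsPerPacket : new int[] {1, 5, 20}) {
            int payload = msgsPerPacket * messageBytes;
            double overheadPct = 100.0 * headerBytes / (headerBytes + payload);
            System.out.printf("%2d msgs/packet -> %.1f%% of bytes are overhead%n",
                              msgsPerPacket, overheadPct);
        }
    }
}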
Consider, for example, the Nasdaq Quotation Dissemination Service (NQDS). The instruments are broken up alphabetically:
Another example is the Nasdaq Totalview service, broken up this way:
This approach allows for straightforward network/application management, but does not necessarily allow for optimized bandwidth utilization for most users. A user of NQDS that is interested in technology stocks and would like to subscribe to just CSCO and INTL would have to pull down all the data for the first two groups of NQDS. Understanding the way users pull down the data and then organizing it into appropriate logical groups optimizes the bandwidth for each user.
In many market data applications, optimizing the data organization would be of limited value. Typically customers bring in all data into a few machines and filter the instruments. Using more groups is just more overhead for the stack and does not help the customers conserve bandwidth. Another approach might be to keep the groups down to a minimum level and use UDP port numbers to further differentiate if necessary. The other extreme would be to use just one multicast group for the entire application and then have the end user filter the data. In some situations this may be sufficient.
Intermittent Sources.
A common issue with market data applications is servers that send data to a multicast group and then go silent for more than 3.5 minutes. These intermittent sources may cause thrashing of state on the network and can introduce packet loss during the window of time when soft state and then hardware forwarding shortcuts are being created.
PIM-Bidir or PIM-SSM.
The first and best solution for intermittent sources is to use PIM-Bidir for many-to-many applications and PIM-SSM for one-to-many applications.
Both of these optimizations of the PIM protocol do not have any data-driven events in creating forwarding state. That means that as long as the receivers are subscribed to the streams, the network has the forwarding state created in the hardware switching path.
Intermittent sources are not an issue with PIM-Bidir and PIM-SSM.
Null Packets.
In PIM-SM environments a common method to make sure forwarding state is created is to send a burst of null packets to the multicast group before the actual data stream. The application must efficiently ignore these null data packets so that they do not affect performance. The sources need only send the burst of packets if they have been silent for more than 3 minutes; a good practice is to send the burst whenever the source has been silent for more than a minute. Many financial firms send out an initial burst of traffic in the morning, after which well-behaved sources do not have problems.
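A sketch of the source-side logic is shown below (illustrative only; the threshold, burst size, and send primitive are assumptions rather than any product's behavior): the sender tracks how long it has been silent and, if that exceeds the chosen threshold, precedes the next real message with a short burst of null packets so that forwarding state is rebuilt before the data arrives:
// Illustrative source-side logic for the null-packet burst described above.
// The threshold, burst size, and send() primitive are assumptions.
class NullPacketBurst {
    private static final long SILENCE_THRESHOLD_MS = 60_000;  // "more than a minute"
    private static final int BURST_SIZE = 5;
    private long lastSendMillis = System.currentTimeMillis();
    void publish(byte[] marketDataPacket) {
        long now = System.currentTimeMillis();
        if (now - lastSendMillis > SILENCE_THRESHOLD_MS) {
            for (int i = 0; i < BURST_SIZE; i++) {
                send(new byte[0]);        // null packets: receivers must ignore these efficiently
            }
        }
        send(marketDataPacket);
        lastSendMillis = System.currentTimeMillis();
    }
    private void send(byte[] payload) {
        // In a real feed this would be a UDP write to the multicast group.
    }
}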
Periodic Keepalives or Heartbeats.
An alternative approach for PIM-SM environments is for sources to send periodic heartbeat messages to the multicast groups. This is a similar approach to the null packets, but the packets can be sent on a regular timer so that the forwarding state never expires.
S, G Expiry Timer.
Finally, Cisco has made a modification to the operation of the (S,G) expiry timer in IOS. There is now a CLI knob to allow the state for an (S,G) to stay alive for hours without any traffic being sent. The (S,G) expiry timer is configurable. This approach should be considered a workaround until PIM-Bidir or PIM-SSM is deployed or the application is fixed.
RTCP Feedback.
A common issue with real time voice and video applications that use RTP is the use of RTCP feedback traffic. Unnecessary use of the feedback option can create excessive multicast state in the network. If the RTCP traffic is not required by the application it should be avoided.
Fast Producers and Slow Consumers.
Today many servers providing market data are attached at Gigabit speeds, while the receivers are attached at different speeds, usually 100Mbps. This creates the potential for receivers to drop packets and request re-transmissions, which creates more traffic that the slowest consumers cannot handle, continuing the vicious circle.
The solution needs to be some type of access control in the application that limits the amount of data that one host can request. QoS and other network functions can mitigate the problem, but ultimately the subscriptions need to be managed in the application.
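One way to express this kind of application-level access control is a per-host cap on retransmission requests. The sketch below is an assumption about how such a limit might look, not a feature of any specific middleware; hosts that exceed their per-second budget are refused and must recover through some slower channel:
import java.util.HashMap;
import java.util.Map;
// Illustrative per-host cap on retransmission requests; the budget and the
// window size are assumptions, not taken from any specific product.
class RetransmissionLimiter {
    private static final int MAX_REQUESTS_PER_SECOND = 100;
    private final Map<String, long[]> perHost = new HashMap<>();  // host -> {windowStartMs, count}
    boolean allowRetransmission(String host) {
        long now = System.currentTimeMillis();
        long[] state = perHost.computeIfAbsent(host, h -> new long[] {now, 0});
        if (now - state[0] >= 1000) {          // start a new one-second window
            state[0] = now;
            state[1] = 0;
        }
        if (state[1] >= MAX_REQUESTS_PER_SECOND) {
            return false;                      // slow consumer must fall back to a snapshot/recovery channel
        }
        state[1]++;
        return true;
    }
}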
Tibco Heartbeats.
TibcoRV has had the ability to use IP multicast for the heartbeat between the TICs for many years. However, there are some brokerage houses that are still using very old versions of TibcoRV that use UDP broadcast support for the resiliency. This limitation is often cited as a reason to maintain a Layer 2 infrastructure between TICs located in different data centers. These older versions of TibcoRV should be phased out in favor of the IP multicast supported versions.
Multicast Forwarding Options.
PIM Sparse Mode.
The standard IP multicast forwarding protocol used today for market data delivery is PIM Sparse Mode. It is supported on all Cisco routers and switches and is well understood. PIM-SM can be used in all the network components from the exchange, FSP, and brokerage.
There are, however, some long-standing issues and unnecessary complexity associated with a PIM-SM deployment that could be avoided by using PIM-Bidir and PIM-SSM. These are covered in the next sections.
The main components of the PIM-SM implementation are:
• PIM Sparse Mode v2.
• Shared Tree (spt-threshold infinity)
A design option in the brokerage or in the exchange is the use of Anycast RP. Details of Anycast RP and the classic high availability design for Tibco in the brokerage network are documented in separate Cisco design guides.
Bidirectional PIM.
PIM-Bidir is an optimization of PIM Sparse Mode for many-to-many applications. It has several key advantages over a PIM-SM deployment:
• Better support for intermittent sources.
• No data-triggered events.
One of the weaknesses of PIM-SM is that the network continually needs to react to active data flows. This can cause non-deterministic behavior that may be hard to troubleshoot. PIM-Bidir has the following major protocol differences over PIM-SM:
– No source registration.
Source traffic is automatically sent to the RP and then down to the interested receivers. There is no unicast encapsulation, no PIM join from the RP to the first-hop router, and no register-stop messages.
– No shortest-path tree (SPT) switchover.
All PIM-Bidir traffic is forwarded on a (*,G) forwarding entry. The router does not have to monitor the traffic flow on a (*,G) and then send joins when the traffic passes a threshold.
– No need for an actual RP.
The RP does not have an actual protocol function in PIM-Bidir. The RP acts as a routing vector in which all the traffic converges. The RP can be configured as an address that is not assigned to any particular device. This is called a Phantom RP.
– No need for MSDP.
MSDP provides source information between RPs in a PIM-SM network. PIM-Bidir does not use the active source information for any forwarding decisions and therefore MSDP is not required.
Bidirectional PIM is ideally suited for the brokerage network in the data center of the exchange. In this environment there are many sources sending to a relatively small set of groups in a many-to-many traffic pattern.
Further details about the key components of a PIM-Bidir implementation, including Phantom RP and basic PIM-Bidir design, are documented in separate Cisco design guides.
Source Specific Multicast.
PIM-SSM is an optimization of PIM Sparse Mode for one-to-many applications. In certain environments it can offer several distinct advantages over PIM-SM. Like PIM-Bidir, PIM-SSM does not rely on any data-triggered events. Furthermore, PIM-SSM does not require an RP at all—there is no such concept in PIM-SSM. The forwarding information in the network is completely controlled by the interest of the receivers.
Source Specific Multicast is ideally suited for market data delivery in the financial service provider. The FSP can receive the feeds from the exchanges and then route them to the end of their network.
Many FSPs are also implementing MPLS and Multicast VPNs in their core. PIM-SSM is the preferred method for transporting traffic in VRFs.
When PIM-SSM is deployed all the way to the end user, the receiver indicates its interest in a particular (S,G) with IGMPv3. Even though IGMPv3 was defined by RFC 3376 back in October 2002, it still has not been implemented by all edge devices. This creates a challenge for deploying an end-to-end PIM-SSM service. A transitional solution has been developed by Cisco to enable an edge device that supports only IGMPv2 to participate in a PIM-SSM service. This feature is called SSM Mapping and is documented in a separate Cisco feature guide.
Storage Services.
The service provides storage capabilities to the market data and trading environments. Trading applications access backend storage to connect to different databases and other repositories consisting of portfolios, trade settlements, compliance data, management applications, Enterprise Service Bus (ESB), and other critical applications where reliability and security are critical to the success of the business. The main requirements for the service are:
Storage virtualization is an enabling technology that simplifies management of complex infrastructures, enables non-disruptive operations, and facilitates critical elements of a proactive information lifecycle management (ILM) strategy. EMC Invista running on the Cisco MDS 9000 enables heterogeneous storage pooling and dynamic storage provisioning, allowing allocation of any storage to any application. High availability is increased with seamless data migration. Appropriate class of storage is allocated to point-in-time copies (clones). Storage virtualization is also leveraged through the use of Virtual Storage Area Networks (VSANs), which enable the consolidation of multiple isolated SANs onto a single physical SAN infrastructure, while still partitioning them as completely separate logical entities. VSANs provide all the security and fabric services of traditional SANs, yet give organizations the flexibility to easily move resources from one VSAN to another. This results in increased disk and network utilization while driving down the cost of management. Integrated Inter VSAN Routing (IVR) enables sharing of common resources across VSANs.
Figure 18 High Performance Computing Storage.
Replication of data to a secondary and tertiary data center is crucial for business continuance. Replication offsite over Fibre Channel over IP (FCIP) coupled with write acceleration and tape acceleration provides improved performance over long distances. Continuous Data Replication (CDP) is another mechanism which is gaining popularity in the industry. It refers to backup of computer data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time. Solutions from EMC and Incipient utilize the SANTap protocol on the Storage Services Module (SSM) in the MDS platform to provide CDP functionality. The SSM uses the SANTap service to intercept and redirect a copy of a write between a given initiator and target. The appliance does not reside in the data path—it is completely passive. The CDP solutions typically leverage a history journal that tracks all changes and bookmarks that identify application-specific events. This ensures that data at any point in time is fully self-consistent and is recoverable instantly in the event of a site failure.
Backup procedure reliability and performance are extremely important when storing critical financial data to a SAN. The use of expensive media servers to move data from disk to tape devices can be cumbersome. Network-accelerated serverless backup (NASB) helps you back up increased amounts of data in shorter backup time frames by shifting the data movement from multiple backup servers to Cisco MDS 9000 Series multilayer switches. This technology decreases impact on application servers because the MDS offloads the application and backup servers. It also reduces the number of backup and media servers required, thus reducing CAPEX and OPEX. The flexibility of the backup environment increases because storage and tape drives can reside anywhere on the SAN.
Trading Resilience and Mobility.
The main requirements for this service are to provide the virtual trader with:
• Fully scalable and redundant campus trading environment.
• Resilient server load balancing and high availability in analytic server farms.
• Global site load balancing that provides the capability to continue participating in the market venues of closest proximity.
A highly-available campus environment is capable of sustaining multiple failures (i.e., links, switches, modules, etc.), which provides non-disruptive access to trading systems for traders and market data feeds. Fine-tuned routing protocol timers, in conjunction with mechanisms such as NSF/SSO, provide subsecond recovery from any failure.
The high-speed interconnect between data centers can be DWDM/dark fiber, which provides business continuance in case of a site failure. Each site is 100km-200km apart, allowing synchronous data replication. Usually the distance for synchronous data replication is 100km, but with Read/Write Acceleration it can stretch to 200km. A tertiary data center can be greater than 200km away, which would replicate data in an asynchronous fashion.
Figure 19 Trading Resilience.
A robust server load balancing solution is required for order routing, algorithmic trading, risk analysis, and other services to offer continuous access to clients regardless of a server failure. Multiple servers encompass a "farm," and these hosts can be added or removed without disruption since they reside behind a virtual IP (VIP) address which is announced in the network.
A global site load balancing solution provides remote traders the resiliency to access trading environments which are closer to their location. This minimizes latency for execution times since requests are always routed to the nearest venue.
Figure 20 Virtualization of Trading Environment.
A trading environment can be virtualized to provide segmentation and resiliency in complex architectures. Figure 20 illustrates a high-level topology depicting multiple market data feeds entering the environment, whereby each vendor is assigned its own Virtual Routing and Forwarding (VRF) instance. The market data is transferred to a high-speed InfiniBand low-latency compute fabric where feed handlers, order routing systems, and algorithmic trading systems reside. All storage is accessed via a SAN and is also virtualized with VSANs, allowing further security and segmentation. The normalized data from the compute fabric is transferred to the campus trading environment where the trading desks reside.
Wide Area Application Services.
This service provides application acceleration and optimization capabilities for traders who are located outside of the core trading floor facility/data center and working from a remote office. To consolidate servers in remote offices, file servers, NAS filers, storage arrays, and tape drives are moved to a corporate data center to increase security and regulatory compliance and to facilitate centralized storage and archival management. As the traditional trading floor becomes more virtual, wide area application services technology is being utilized to provide a "LAN-like" experience to remote traders when they access resources at the corporate site. Traders often utilize Microsoft Office applications, especially Excel, in addition to Sharepoint and Exchange. Excel is used heavily for modeling and permutations where sometimes only small portions of the file are changed. The CIFS protocol is notoriously "chatty": several messages normally traverse the WAN for a simple file operation, and this is addressed by Wide Area Application Services (WAAS) technology. Bloomberg and Reuters applications are also very popular financial tools which access a centralized SAN or NAS filer to retrieve critical data, which is fused together before being presented on a trader's screen.
Figure 21 Wide Area Optimization.
A pair of Wide Area Application Engines (WAEs) that reside in the remote office and the data center provide local object caching to increase application performance. The remote office WAEs can be a module in the ISR router or a stand-alone appliance. The data center WAE devices are load balanced behind an Application Control Engine module installed in a pair of Catalyst 6500 series switches at the aggregation layer. The WAE appliance farm is represented by a virtual IP address. The local router in each site utilizes Web Cache Communication Protocol version 2 (WCCP v2) to redirect traffic to the WAE, which intercepts the traffic and determines if there is a cache hit or miss. The content is served locally from the engine if it resides in cache; otherwise the request is sent across the WAN the first time to retrieve the object. This methodology optimizes the trader experience by removing application latency and shielding the individual from any congestion in the WAN.
WAAS uses the following technologies to provide application acceleration:
• Data Redundancy Elimination (DRE) is an advanced form of network compression which allows the WAE to maintain a history of previously-seen TCP message traffic for the purposes of reducing redundancy found in network traffic. This, combined with the Lempel-Ziv (LZ) compression algorithm, reduces the number of redundant packets that traverse the WAN, which improves application transaction performance and conserves bandwidth. A simplified sketch of the chunk-history idea follows this list.
• Transport Flow Optimization (TFO) employs a robust TCP proxy to safely optimize TCP at the WAE device by applying TCP-compliant optimizations to shield the clients and servers from poor TCP behavior because of WAN conditions. By running a TCP proxy between the devices and leveraging an optimized TCP stack between the devices, many of the problems that occur in the WAN are completely blocked from propagating back to trader desktops. The traders experience LAN-like TCP response times and behavior because the WAE is terminating TCP locally. TFO improves reliability and throughput through increases in TCP window scaling and sizing enhancements in addition to superior congestion management.
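As referenced in the DRE item above, the idea behind redundancy elimination can be sketched as follows (a simplified illustration, not the WAAS DRE algorithm; the chunk size, fingerprint, and encoding are assumptions): the sender splits the stream into chunks, replaces chunks it has already sent with short references, and the peer rebuilds the stream from its own chunk history:
import java.util.*;
// Simplified illustration of redundancy elimination by chunk fingerprinting.
// This is not the Cisco WAAS DRE algorithm; chunk size and encoding are assumptions.
class ChunkDedup {
    private static final int CHUNK = 64;                           // bytes per chunk (assumed)
    private final Map<Integer, byte[]> history = new HashMap<>();  // fingerprint -> chunk already sent
    List<Object> encode(byte[] stream) {
        List<Object> out = new ArrayList<>();                      // mix of Integer references and raw byte[] chunks
        for (int off = 0; off < stream.length; off += CHUNK) {
            byte[] chunk = Arrays.copyOfRange(stream, off, Math.min(off + CHUNK, stream.length));
            int fp = Arrays.hashCode(chunk);                       // toy fingerprint; real systems use stronger hashes
            if (history.containsKey(fp)) {
                out.add(fp);                                       // send a short reference instead of the bytes
            } else {
                history.put(fp, chunk);
                out.add(chunk);                                    // first occurrence travels in full
            }
        }
        return out;
    }
}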
Thin Client Service.
This service provides a "thin" advanced trading desktop which delivers significant advantages to demanding trading floor environments requiring continuous growth in compute power. As financial institutions race to provide the best trade executions for their clients, traders are utilizing several simultaneous critical applications that facilitate complex transactions. It is not uncommon to find three or more workstations and monitors at a trader's desk which provide visibility into market liquidity, trading venues, news, analysis of complex portfolio simulations, and other financial tools. In addition, market dynamics continue to evolve with Direct Market Access (DMA), ECNs, alternative trading volumes, and upcoming regulation changes with Regulation National Market System (RegNMS) in the US and Markets in Financial Instruments Directive (MiFID) in Europe. At the same time, business seeks greater control, improved ROI, and additional flexibility, which creates greater demands on trading floor infrastructures.
Traders no longer require multiple workstations at their desk. Thin clients consist of keyboard, mouse, and multi-displays which provide a total trader desktop solution without compromising security. Hewlett Packard, Citrix, Desktone, Wyse, and other vendors provide thin client solutions to capitalize on the virtual desktop paradigm. Thin clients de-couple the user-facing hardware from the processing hardware, thus enabling IT to grow the processing power without changing anything on the end user side. The workstation computing power is stored in the data center on blade workstations, which provide greater scalability, increased data security, improved business continuance across multiple sites, and reduction in OPEX by removing the need to manage individual workstations on the trading floor. One blade workstation can be dedicated to a trader or shared among multiple traders depending on the requirements for computer power.
The "thin client" solution is optimized to work in a campus LAN environment, but can also extend the benefits to traders in remote locations. Latency is always a concern when there is a WAN interconnecting the blade workstation and thin client devices. The network connection needs to be sized accordingly so traffic is not dropped if saturation points exist in the WAN topology. WAN Quality of Service (QoS) should prioritize sensitive traffic. There are some guidelines which should be followed to allow for an optimized user experience. A typical highly-interactive desktop experience requires a client-to-blade round trip latency of <20ms for a 2Kb packet size. There may be a slight lag in display if network latency is between 20ms to 40ms. A typical trader desk with a four multi-display terminal requires 2-3Mbps bandwidth consumption with seamless communication with blade workstation(s) in the data center. Streaming video (800x600 at 24fps/full color) requires 9 Mbps bandwidth usage.
Figure 22 Thin Client Architecture.
Management of a large thin client environment is simplified since a centralized IT staff manages all of the blade workstations dispersed across multiple data centers. A trader is redirected to the most available environment in the enterprise in the event of a particular site failure. High availability is a key concern in critical financial environments and the Blade Workstation design provides rapid provisioning of another blade workstation in the data center. This resiliency provides greater uptime, increases in productivity, and OpEx reduction.
Advanced Encryption Standard.
Advanced Message Queueing Protocol.
Application Oriented Networking.
The Archipelago® Integrated Web book gives investors the unique opportunity to view the entire ArcaEx and ArcaEdge books in addition to books made available by other market participants.
ECN Order Book feed available via NASDAQ.
Chicago Board of Trade.
Class-Based Weighted Fair Queueing.
Continuous Data Replication.
Chicago Mercantile Exchange is engaged in trading of futures contracts and derivatives.
Central Processing Unit.
Distributed Defect Tracking System.
Direct Market Access.
Data Redundancy Elimination.
Dense Wavelength Division Multiplexing.
Electronic Communication Network.
Enterprise Service Bus.
Enterprise Solutions Engineering.
FIX Adapted for Streaming.
Fibre Channel over IP.
Financial Information Exchange.
Financial Services Latency Monitoring Solution.
Financial Service Provider.
Information Lifecycle Management.
Instinet Island Book.
Internetworking Operating System.
Keyboard Video Mouse.
Low Latency Queueing.
Metro Area Network.
Multilayer Director Switch.
Markets in Financial Instruments Directive.
Message Passing Interface is an industry standard specifying a library of functions to enable the passing of messages between nodes within a parallel computing environment.
Network Attached Storage.
Network Accelerated Serverless Backup.
Network Interface Card.
Nasdaq Quotation Dissemination Service.
Order Management System.
Open Systems Interconnection.
Protocol Independent Multicast.
PIM-Source Specific Multicast.
Quality of Service.
Random Access Memory.
Reuters Data Feed.
Reuters Data Feed Direct.
Remote Direct Memory Access.
Regulation National Market System.
Remote Graphics Software.
Reuters Market Data System.
RTP Control Protocol.
Real Time Protocol.
Reuters Wire Format.
Storage Area Network.
Small Computer System Interface.
Sockets Direct Protocol—Given that many modern applications are written using the sockets API, SDP can intercept the sockets at the kernel level and map these socket calls to an InfiniBand transport service that uses RDMA operations to offload data movement from the CPU to the HCA hardware.
Server Fabric Switch.
Secure Financial Transaction Infrastructure network developed to provide firms with excellent communication paths to NYSE Group, AMEX, Chicago Stock Exchange, NASDAQ, and other exchanges. It is often used for order routing.
Evolution and Practice: Low-latency Distributed Applications in Finance.
The finance industry has unique demands for low-latency distributed systems.
Andrew Brook.
Virtually all systems have some requirements for latency, defined here as the time required for a system to respond to input. (Non-halting computations exist, but they have few practical applications.) Latency requirements appear in problem domains as diverse as aircraft flight controls (copter.ardupilot/), voice communications (queue.acm/detail.cfm?id=1028895), multiplayer gaming (queue.acm/detail.cfm?id=971591), online advertising (acuityads/real-time-bidding/), and scientific experiments (home.web.cern.ch/about/accelerators/cern-neutrinos-gran-sasso).
Distributed systems—in which computation occurs on multiple networked computers that communicate and coordinate their actions by passing messages—present special latency considerations. In recent years the automation of financial trading has driven requirements for distributed systems with challenging latency requirements (often measured in microseconds or even nanoseconds; see table 1) and global geographic distribution. Automated trading provides a window into the engineering challenges of ever-shrinking latency requirements, which may be useful to software engineers in other fields.
This article focuses on applications where latency (as opposed to throughput, efficiency, or some other metric) is one of the primary design considerations. Phrased differently, "low-latency systems" are those for which latency is the main measure of success and is usually the toughest constraint to design around. The article presents examples of low-latency systems that illustrate the external factors that drive latency and then discusses some practical engineering approaches to building systems that operate at low latency.
Why is everyone in such a hurry?
To understand the impact of latency on an application, it's important first to understand the external, real-world factors that drive the requirement. The following examples from the finance industry illustrate the impact of some real-world factors.
Request for Quote Trading.
In 2003 I worked at a large bank that had just deployed a new Web-based institutional foreign-currency trading system. The quote and trade engine, a J2EE (Java 2 Platform, Enterprise Edition) application running in a WebLogic server on top of an Oracle database, had response times that were reliably under two seconds—fast enough to ensure good user experience.
Around the same time that the bank's Web site went live, a multibank online trading platform was launched. On this new platform, a client would submit an RFQ (request for quote) that would be forwarded to multiple participating banks. Each bank would respond with a quote, and the client would choose which one to accept.
My bank initiated a project to connect to the new multibank platform. The reasoning was that since a two-second response time was good enough for a user on the Web site, it should be good enough for the new platform, and so the same quote and trade engine could be reused. Within weeks of going live, however, the bank was winning a surprisingly small percentage of RFQs. The root cause was latency. When two banks responded with the same price (which happened quite often), the first response was displayed at the top of the list. Most clients waited to see a few different quotes and then clicked on the one at the top of the list. The result was that the fastest bank often won the client's business—and my bank wasn't the fastest.
The slowest part of the quote-generation process occurred in the database queries loading customer pricing parameters. Adding a cache to the quote engine and optimizing a few other "hot spots" in the code brought quote latency down to the range of roughly 100 milliseconds. With a faster engine, the bank was able to capture significant market share on the competitive quotation platform—but the market continued to evolve.
Streaming Quotes.
By 2006 a new style of currency trading was becoming popular. Instead of a customer sending a specific request and the bank responding with a quote, customers wanted the banks to send a continuous stream of quotes. This streaming-quotes style of trading was especially popular with certain hedge funds that were developing automated trading strategies—applications that would receive streams of quotes from multiple banks and automatically decide when to trade. In many cases, humans were now out of the loop on both sides of the trade.
To understand this new competitive dynamic, it's important to know how banks compute the rates they charge their clients for foreign-exchange transactions. The largest banks trade currencies with each other in the so-called interbank market. The exchange rates set in that market are the most competitive and form the basis for the rates (plus some markup) that are offered to clients. Every time the interbank rate changes, each bank recomputes and republishes the corresponding client rate quotes. If a client accepts a quote (i. e., requests to trade against a quoted exchange rate), the bank can immediately execute an offsetting trade with the interbank market, minimizing risk and locking in a small profit. There are, however, risks to banks that are slow to update their quotes. A simple example can illustrate:
Imagine that the interbank spot market for EUR/USD has rates of 1.3558 / 1.3560. (The term spot means that the agreed-upon currencies are to be exchanged within two business days. Currencies can be traded for delivery at any mutually agreed-upon date in the future, but the spot market is the most active in terms of number of trades.) Two rates are quoted: one for buying (the bid rate), and one for selling (the offered or ask rate). In this case, a participant in the interbank market could sell one euro and receive 1.3558 US dollars in return. Conversely, one could buy one euro for a price of 1.3560 US dollars.
Say that two banks, A and B, are participants in the interbank market and are publishing quotes to the same hedge fund client, C. Both banks add a margin of 0.0001 to the exchange rates they quote to their clients—so both publish quotes of 1.3557 / 1.3561 to client C. Bank A, however, is faster at updating its quotes than bank B, taking about 50 milliseconds while bank B takes about 250 milliseconds. There are approximately 50 milliseconds of network latency between banks A and B and their mutual client C. Both banks A and B take about 10 milliseconds to acknowledge an order, while the hedge fund C takes about 10 milliseconds to evaluate new quotes and submit orders. Table 2 breaks down the sequence of events.
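A back-of-the-envelope timeline, built only from the latencies stated above (the event sequence itself is an illustration, not a reproduction of Table 2), shows why the slower bank is exposed:
// Back-of-the-envelope timeline built from the latencies stated in this example.
// The event ordering is an illustration of the exposure, not the article's Table 2.
class QuoteRaceTimeline {
    public static void main(String[] args) {
        int bankAUpdate = 50, bankBUpdate = 250;   // ms for each bank to recompute and republish client quotes
        int network = 50;                          // ms between each bank and client C
        int clientDecision = 10;                   // ms for C to evaluate quotes and send an order
        int aQuoteAtClient = bankAUpdate + network;            // A's new quote becomes visible at C
        int orderSent      = aQuoteAtClient + clientDecision;  // C reacts to the market move
        int orderAtBankB   = orderSent + network;              // order hits B against its stale quote
        System.out.println("A's updated quote reaches C at t=" + aQuoteAtClient + " ms");
        System.out.println("C's order reaches B at t=" + orderAtBankB + " ms");
        System.out.println("B only updates its own quote at t=" + bankBUpdate + " ms, so it fills at a stale price");
    }
}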
The net effect of this new streaming-quote style of trading was that any bank that was significantly slower than its rivals was likely to suffer losses when market prices changed and its quotes weren't updated quickly enough. At the same time, those banks that could update their quotes fastest made significant profits. Latency was no longer just a factor in operational efficiency or market share—it directly impacted the profit and loss of the trading desk. As the volume and speed of trading increased throughout the mid-2000s, these profits and losses grew to be quite large. (How low can you go? Table 3 shows some examples of approximate latencies of systems and applications across nine orders of magnitude.)
To improve its latency, my bank split its quote and trading engine into distinct applications and rewrote the quote engine in C++. The small delays added by each hop in the network from the interbank market to the bank and onward to its clients were now significant, so the bank upgraded firewalls and procured dedicated telecom circuits. Network upgrades combined with the faster quote engine brought end-to-end quote latency down below 10 milliseconds for clients who were physically located close to our facilities in New York, London, or Hong Kong. Trading performance and profits rose accordingly—but, of course, the market kept evolving.
Engineering systems for low latency.
The latency requirements of a given application can be addressed in many ways, and each problem requires a different solution. There are some common themes, though. First, it is usually necessary to measure latency before it can be improved. Second, optimization often requires looking below abstraction layers and adapting to the reality of the physical infrastructure. Finally, it is sometimes possible to restructure the algorithms (or even the problem definition itself) to achieve low latency.
Lies, damn lies, and statistics.
The first step to solving most optimization problems (not just those that involve software) is to measure the current system's performance. Start from the highest level and measure the end-to-end latency. Then measure the latency of each component or processing stage. If any stage is taking an unusually large portion of the latency, then break it down further and measure the latency of its substages. The goal is to find the parts of the system that contribute the most to the total latency and focus optimization efforts there. This is not always straightforward in practice, however.
For example, imagine an application that responds to customer quote requests received over a network. The client sends 100 quote requests in quick succession (the next request is sent as soon as the prior response is received) and reports total elapsed time of 360 milliseconds—or 3.6 milliseconds on average to service a request. The internals of the application are broken down and measured using the same 100-quote test set:
• Read input message from network and parse - 5 microseconds.
• Look up client profile - 3.2 milliseconds (3,200 microseconds)
• Compute client quote - 15 microseconds.
• Log quote - 20 microseconds.
• Serialize quote to a response message - 5 microseconds.
• Write to network - 5 microseconds.
As clearly shown in this example, significantly reducing latency means addressing the time it takes to look up the client's profile. A quick inspection shows that the client profile is loaded from a database and cached locally. Further testing shows that when the profile is in the local cache (a simple hash table), response time is usually under a microsecond, but when the cache is missed it takes several hundred milliseconds to load the profile. The average of 3.2 milliseconds was almost entirely the result of one very slow response (of about 320 milliseconds) caused by a cache miss. Likewise, the client's reported 3.6-millisecond average response time turns out to be a single very slow response (350 milliseconds) and 99 fast responses that took around 100 microseconds each.
Means and outliers.
Most systems exhibit some variance in latency from one event to the next. In some cases the variance (and especially the highest-latency outliers) drives the design, much more so than the average case. It is important to understand which statistical measure of latency is appropriate to the specific problem. For example, if you are building a trading system that earns small profits when the latency is below some threshold but incurs massive losses when latency exceeds that threshold, then you should be measuring the peak latency (or, alternatively, the percentage of requests that exceed the threshold) rather than the mean. On the other hand, if the value of the system is more or less inversely proportional to the latency, then measuring (and optimizing) the average latency makes more sense even if it means there are some large outliers.
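The difference between the two measures is easy to see in code. The sketch below computes the mean, a high percentile, and the maximum over latencies shaped like the earlier example (99 fast responses plus one slow outlier; the exact values are assumed for illustration):
import java.util.Arrays;
// Mean vs. high percentile on data shaped like the earlier example:
// 99 responses around 100 microseconds and one 350-millisecond outlier (assumed values).
class LatencyStats {
    public static void main(String[] args) {
        long[] latenciesMicros = new long[100];
        Arrays.fill(latenciesMicros, 100);       // fast responses
        latenciesMicros[99] = 350_000;           // single cache-miss outlier
        long[] sorted = latenciesMicros.clone();
        Arrays.sort(sorted);
        double mean = Arrays.stream(latenciesMicros).average().orElse(0);
        long p99 = sorted[(int) Math.ceil(0.99 * sorted.length) - 1];
        System.out.printf("mean = %.0f us, 99th percentile = %d us, max = %d us%n",
                          mean, p99, sorted[sorted.length - 1]);
    }
}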
What are you measuring?
Astute readers may have noticed that the latency measured inside the quote server application doesn't quite add up to the latency reported by the client application. That is most likely because they aren't actually measuring the same thing. Consider the following simplified pseudocode:
(In the client application)
long totalLatency = 0;
for (int i = 0; i < 100; i++) {
    RequestMessage requestMessage = new RequestMessage(quoteRequest);
    long sentTime = getSystemTime();
    sendMessage(requestMessage);                        // send the request (implied in the original)
    ResponseMessage responseMessage = receiveMessage(); // block until the response arrives
    long quoteLatency = getSystemTime() - sentTime;     // round trip as observed by the client
    totalLatency += quoteLatency;                       // reported as the total elapsed time
}
(In the quote server application)
RequestMessage requestMessage = receive();
long receivedTime = getSystemTime();
QuoteRequest quoteRequest = parseRequest(requestMessage);
long parseTime = getSystemTime();
long parseLatency = parseTime - receivedTime;
ClientProfile profile = lookupClientProfile(quoteRequest.client);
long profileTime = getSystemTime();
long profileLatency = profileTime - parseTime;
Quote quote = computeQuote(profile);
long computeTime = getSystemTime();
long computeLatency = computeTime - profileTime;
logQuote(quote);                        // log the quote (implied by the measured "log" stage)
long logTime = getSystemTime();
long logLatency = logTime - computeTime;
QuoteMessage quoteMessage = new QuoteMessage(quote);
long serializeTime = getSystemTime();
long serializationLatency = serializeTime - logTime;
sendMessage(quoteMessage);              // write the response to the network (implied by the measured "send" stage)
long sentTime = getSystemTime();
long sendLatency = sentTime - serializeTime;
logStats(parseLatency, profileLatency, computeLatency,
         logLatency, serializationLatency, sendLatency);
Note that the elapsed time measured by the client application includes the time to transmit the request over the network, as well as the time for the response to be transmitted back. The quote server, on the other hand, measures the time elapsed only from the arrival of the request to the moment the response is sent (or more precisely, when the send method returns). The 350-microsecond discrepancy between the average response time measured by the client and the equivalent measurement by the quote server could be caused by the network, but it might also be the result of delays within the client or server. Moreover, depending on the programming language and operating system, checking the system clock and logging the latency statistics may introduce material delays.
This approach is simplistic, but when combined with code-profiling tools to find the most commonly executed code and resource contention, it is usually good enough to identify the first (and often easiest) targets for latency optimization. It's important to keep this limitation in mind, though.
Measuring distributed systems latency via network traffic capture.
Distributed systems pose some additional challenges to latency measurement—as well as some opportunities. In cases where the system is distributed across multiple servers it can be hard to correlate timestamps of related events. The network itself can be a significant contributor to the latency of the system. Messaging middleware and the networking stacks of operating systems can be complex sources of latency.
At the same time, the decomposition of the overall system into separate processes running on independent servers can make it easier to measure certain interactions accurately between components of the system over the network. Many network devices (such as switches and routers) provide mechanisms for making timestamped copies of the data that traverse the device with minimal impact on the performance of the device. Most operating systems provide similar capabilities in software, albeit with a somewhat higher risk of delaying the actual traffic. Timestamped network-traffic captures (often called packet captures ) can be a useful tool to measure more precisely when a message was exchanged between two parts of the system. These measurements can be obtained without modifying the application itself and generally with very little impact on the performance of the system as a whole. (See wireshark and tcpdump.)
One of the challenges of measuring performance at short time scales across distributed systems is clock synchronization. In general, to measure the time elapsed from when an application on server A transmits a message to when the message reaches a second application on server B, it is necessary to check the time on A's clock when the message is sent and on B's clock when the message arrives, and then subtract those two timestamps to determine the latency. If the clocks on A and B are not in sync, then the computed latency will actually be the real latency plus the clock skew between A and B.
When is this a problem in the real world? Real-world drift rates for the quartz oscillators that are used in most commodity server motherboards are on the order of 10^-5, which means that the oscillator may be expected to drift by 10 microseconds each second. If uncorrected, it may gain or lose as much as a second over the course of a day. For systems operating at time scales of milliseconds or less, clock skew may render the measured latency meaningless. Oscillators with significantly lower drift rates are available, but without some form of synchronization, they will eventually drift apart. Some mechanism is needed to bring each server's local clock into alignment with some common reference time.
Developers of distributed systems should understand NTP (Network Time Protocol) at a minimum and are encouraged to learn about PTP (Precision Time Protocol) and usage of external signals such as GPS to obtain high-accuracy time synchronization in practice. Those who need time accuracy at the sub-microsecond scale will want to become familiar with hardware implementations of PTP (especially at the network interface) as well as tools for extracting time information from each core's local clock. (See tools.ietf.org/html/rfc1305, tools.ietf.org/html/rfc5905, nist.gov/el/isd/ieee/ieee1588.cfm, and queue.acm.org/detail.cfm?id=2354406.)
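To make the clock-skew discussion concrete, the sketch below shows the standard NTP-style offset and round-trip calculation for a single request/response exchange, assuming the server echoes back its receive and transmit timestamps; the variable names are illustrative:

    // t0: client clock when the request is sent
    // t1: server clock when the request arrives
    // t2: server clock when the response is sent
    // t3: client clock when the response arrives
    static long estimatedOffsetNanos(long t0, long t1, long t2, long t3) {
        // Positive result: the server clock is ahead of the client clock.
        return ((t1 - t0) + (t2 - t3)) / 2;
    }

    static long roundTripDelayNanos(long t0, long t1, long t2, long t3) {
        // Network round trip, excluding the time spent inside the server.
        return (t3 - t0) - (t2 - t1);
    }

Note that the offset estimate is exact only if the outbound and return paths have equal delay; asymmetric paths bias the result, which is one reason hardware PTP support at the network interface is attractive.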
Abstraction versus Reality.
Modern software engineering is built upon abstractions that allow programmers to manage the complexity of ever-larger systems. Abstractions do this by simplifying or generalizing some aspect of the underlying system. This doesn't come for free, though—simplification is an inherently lossy process and some of the lost details may be important. Moreover, abstractions are often defined in terms of function rather than performance.
Somewhere deep below an application are electrical currents flowing through semiconductors and pulses of light traveling down fibers. Programmers rarely need to think of their systems in these terms, but if their conceptualized view drifts too far from reality they are likely to experience unpleasant surprises.
Four examples illustrate this point:
• TCP provides a useful abstraction, compared with UDP (User Datagram Protocol), for delivery of a sequence of bytes. TCP ensures that bytes are delivered in the order they were sent even if some of the underlying IP packets are lost. The transmission latency of each byte (the time from when it is written to a TCP socket in the sending application until it is read from the corresponding receiving application's socket) is not guaranteed, however. In certain cases (specifically, when an intervening packet is lost) the data contained in a given packet may be significantly delayed before delivery to the application, while the missing data ahead of it is recovered.
• Cloud hosting provides virtual servers that can be created on demand without precise control over the location of the hardware. An application or administrator can create a new virtual server "on the cloud" in less than a minute—an impossible feat when assembling and installing physical hardware in a data center. Unlike a physical server, however, the cloud server's physical location and its place in the network topology may not be precisely known. If a distributed application depends on the rapid exchange of messages between servers, the physical proximity of those servers may have a significant impact on the overall application performance.
• Threads allow developers to decompose a problem into separate sequences of instructions that can be allowed to run concurrently, subject to certain ordering constraints, and that can operate on shared resources (such as memory). This allows developers to take advantage of multicore processors without needing to deal directly with issues of scheduling and core assignment. In some cases, however, the overhead of context switches and passing data between cores can outweigh the advantages gained by concurrency.
• Hierarchical storage and cache-coherency protocols allow programmers to write applications that use large amounts of virtual memory (on the order of terabytes in modern commodity servers), while experiencing latencies measured in nanoseconds when requests can be serviced by the closest caches. The abstraction hides the fact that the fastest memory is very limited in capacity (e.g., register files on the order of a few kilobytes), while memory that has been swapped out to disk may incur latencies in the tens of milliseconds.
Each of these abstractions is extremely useful but can have unanticipated consequences for low-latency applications. There are some practical steps to take to identify and mitigate latency issues resulting from these abstractions.
Messaging and Network Protocols.
The near ubiquity of IP-based networks means that regardless of which messaging product is in use, under the covers the data is being transmitted over the network as a series of discrete packets. The performance characteristics of the network and the needs of an application can vary dramatically—so one size almost certainly does not fit all when it comes to messaging middleware for latency-sensitive distributed systems.
There's no substitute for getting under the hood here. For example, if an application runs on a private network (you control the hardware), communications follow a publisher/subscriber model, and the application can tolerate a certain rate of data loss, then raw multicast may offer significant performance gains over any middleware based on TCP. If an application is distributed across very long distances and data order is not important, then a UDP-based protocol may offer advantages in terms of not stalling to resend a missed packet. If TCP-based messaging is being used, then it's worth keeping in mind that many of its parameters (especially buffer sizes, slow start, and Nagle's algorithm) are configurable and the "out-of-the-box" settings are usually optimized for throughput rather than latency (queue.acm.org/detail.cfm?id=2539132).
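As one concrete illustration, assuming Java and the standard java.net.Socket API (the host, port, and buffer sizes below are placeholders), a latency-oriented configuration might look like this:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    static Socket openLowLatencySocket(String host, int port) throws IOException {
        Socket socket = new Socket();
        socket.setTcpNoDelay(true);               // disable Nagle's algorithm: send small writes immediately
        socket.setSendBufferSize(64 * 1024);      // buffer sizes are hints; the OS may round or cap them
        socket.setReceiveBufferSize(64 * 1024);
        socket.connect(new InetSocketAddress(host, port), 1_000);  // 1-second connect timeout
        return socket;
    }

Whether larger or smaller buffers help depends on the traffic pattern, which is exactly why measuring with the application's real message sizes matters.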
The physical constraint that information cannot propagate faster than the speed of light is a very real consideration when dealing with short time scales and/or long distances. The two largest stock exchanges, NASDAQ and NYSE, run their matching engines in data centers in Carteret and Mahwah, New Jersey, respectively. A ray of light takes 185 microseconds to travel the 55.4-km distance between these two locations. Light in a glass fiber with a refractive index of 1.6 and following a slightly longer path (roughly 65 km) takes almost 350 microseconds to make the same one-way trip. Given that the computations involved in trading decisions can now be made on time scales of 10 microseconds or less, signal propagation latency cannot be ignored.
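The figures above follow directly from distance divided by propagation speed. A minimal sketch of the arithmetic (the constant and method name are illustrative):

    static final double SPEED_OF_LIGHT_KM_PER_S = 299_792.458;

    // One-way propagation time in microseconds for a path of the given length and refractive index.
    static double propagationMicros(double pathKm, double refractiveIndex) {
        return pathKm * refractiveIndex / SPEED_OF_LIGHT_KM_PER_S * 1_000_000;
    }

    // propagationMicros(55.4, 1.0) ≈ 185 µs  (straight line, in vacuum)
    // propagationMicros(65.0, 1.6) ≈ 347 µs  (glass fiber over a slightly longer path)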
Decomposing a problem into a number of threads that can be executed concurrently can greatly increase performance, especially in multicore systems, but in some cases it may actually be slower than a single-threaded solution.
Specifically, multithreaded code incurs overhead in the following three ways:
• When multiple threads operate on the same data, controls are required to ensure that the data remains consistent. This may include acquisition of locks or implementations of read or write barriers. In multicore systems, these concurrency controls require that thread execution is suspended while messages are passed between cores. If a lock is already held by one thread, then other threads seeking that lock will need to wait until the first one is finished. If several threads are frequently accessing the same data, there may be significant contention for locks.
• Similarly, when multiple threads operate on the same data, the data itself must be passed between cores. If several threads access the same data but each performs only a few computations on it, the time required to move the data between cores may exceed the time spent operating on it.
• Finally, if there are more threads than cores, the operating system must periodically perform a context switch in which the thread running on a given core is halted, its state is saved, and another thread is allowed to run. The cost of a context switch can be significant. If the number of threads far exceeds the number of cores, context switching can be a significant source of delay.
In general, application design should use threads in a way that represents the inherent concurrency of the underlying problem. If the problem contains significant computation that can be performed in isolation, then a larger number of threads is called for. On the other hand, if there is a high degree of interdependency between computations or (worst case) if the problem is inherently serial, then a single-threaded solution may make more sense. In both cases, profiling tools should be used to identify excessive lock contention or context switching. Lock-free data structures (now available for several programming languages) are another alternative to consider (queue.acm.org/detail.cfm?id=2492433).
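As a small illustration of the lock-free alternative, using the standard java.util.concurrent.atomic package (the class names are illustrative), compare a lock-based counter with a lock-free one:

    import java.util.concurrent.atomic.AtomicLong;

    // Lock-based version: every increment may contend on the object's monitor,
    // and a thread holding the lock can be descheduled while others wait.
    class LockedCounter {
        private long value;
        synchronized void increment() { value++; }
        synchronized long get() { return value; }
    }

    // Lock-free version: built on a hardware compare-and-swap; no thread is ever blocked.
    class LockFreeCounter {
        private final AtomicLong value = new AtomicLong();
        void increment() { value.incrementAndGet(); }
        long get() { return value.get(); }
    }

Under heavy contention both versions still pass the cache line holding the counter between cores, so lock freedom removes blocking but not the data-movement cost described above.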
It's also worth noting that the physical arrangement of cores, memory, and I/O may not be uniform. For example, on modern Intel microprocessors certain cores can interact with external I/O (e.g., network interfaces) with much lower latency than others, and exchanging data between certain cores is faster than between others. As a result, it may be advantageous to explicitly pin specific threads to specific cores (queue.acm.org/detail.cfm?id=2513149).
Hierarchical storage and cache misses.
All modern computing systems use hierarchical data storage—a small amount of fast memory combined with multiple levels of larger (but slower) memory. Recently accessed data is cached so that subsequent access is faster. Since most applications exhibit a tendency to access the same memory multiple times in a short period, this can greatly increase performance. To obtain maximum benefit, however, the following three factors should be incorporated into application design:
• Using less memory overall (or at least in the parts of the application that are latency-sensitive) increases the probability that needed data will be available in one of the caches. In particular, for especially latency-sensitive applications, designing the application so that frequently accessed data fits within the CPU's caches can significantly improve performance. Specifications vary but Intel's Haswell microprocessors, for example, provide 32 KB per core for L1 data cache and up to 40 MB of shared L3 cache for the entire CPU.
• Repeated allocation and release of memory should be avoided if reuse is possible. An object or data structure that is allocated once and reused has a much greater chance of being present in a cache than one that is repeatedly allocated anew. This is especially true when developing in environments where memory is managed automatically, as the overhead caused by garbage collection of released memory can be significant.
• The layout of data structures in memory can have a significant impact on performance because of the architecture of caches in modern processors. While the details vary by platform and are outside the scope of this article, it is generally a good idea to prefer arrays as data structures over linked lists and trees and to prefer algorithms that access memory sequentially, since these allow the hardware prefetcher (which attempts to load data preemptively from main memory into cache before it is requested by the application) to operate most efficiently. Note also that data that will be operated on concurrently by different cores should be structured so that it is unlikely to fall in the same cache line (the latest Intel CPUs use 64-byte cache lines) to avoid cache-coherency contention. A short sketch of this layout point follows the list.
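A minimal sketch of that layout point, assuming a simple market-data scan; the field and array names are illustrative:

    public class LayoutDemo {
        public static void main(String[] args) {
            int n = 1_000_000;
            // Structure-of-arrays layout: bids and asks are contiguous primitive arrays,
            // so a sequential scan stays cache-friendly and the hardware prefetcher keeps up.
            double[] bids = new double[n];
            double[] asks = new double[n];
            double maxSpread = 0;
            for (int i = 0; i < n; i++) {
                maxSpread = Math.max(maxSpread, asks[i] - bids[i]);
            }
            System.out.println("max spread = " + maxSpread);
            // Contrast with an array of Quote objects (class Quote { double bid, ask; }):
            // each element access would follow a reference, scattering reads across the heap.
        }
    }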
A note on premature optimization.
The optimizations just presented should be considered part of a broader design process that takes into account other important objectives including functional correctness, maintainability, etc. Keep in mind Knuth's quote about premature optimization being the root of all evil; even in the most performance-sensitive environments, it is rare that a programmer should be concerned with determining the correct number of threads or the optimal data structure until empirical measurements indicate that a specific part of the application is a hot spot. The focus instead should be on ensuring that performance requirements are understood early in the design process and that the system architecture is sufficiently decomposable to allow detailed measurement of latency when and as optimization becomes necessary. Moreover (and as discussed in the next section), the most useful optimizations may not be in the application code at all.
Changes in Design.
The optimizations presented so far have been limited to improving the performance of a system for a given set of functional requirements. There may also be opportunities to change the broader design of the system or even to change the functional requirements of the system in a way that still meets the overall objectives but significantly improves performance. Latency optimization is no exception. In particular, there are often opportunities to trade reduced efficiency for improved latency.
Three real-world examples of design tradeoffs between efficiency and latency are presented here, followed by an example where the requirements themselves present the best opportunity for redesign.
In certain cases trading efficiency for latency may be possible, especially in systems that operate well below their peak capacity. In particular, it may be advantageous to compute possible outputs in advance, especially when the system is idle most of the time but must react quickly when an input arrives.
A real-world example can be found in the systems used by some firms to trade stocks based on news such as earnings announcements. Imagine that the market expects Apple to earn between $9.45 and $12.51 per share. The goal of the trading system, upon receiving Apple's actual earnings, would be to sell some number of shares of Apple stock if the earnings were below $9.45, buy some number of shares if the earnings were above $12.51, and do nothing if the earnings fell within the expected range. The act of buying or selling stocks begins with submitting an order to the exchange. The order consists of (among other things) an indicator of whether the client wishes to buy or sell, the identifier of the stock to buy or sell, the number of shares desired, and the price at which the client wishes to buy or sell. Throughout the afternoon leading up to Apple's announcement, the client would receive a steady stream of market-data messages that indicate the current price at which Apple's stock is trading.
A conventional implementation of this trading system would cache the market-price data and, upon receipt of the earnings data, decide whether to buy or sell (or neither), construct an order, and serialize that order to an array of bytes to be placed into the payload of a message and sent to the exchange.
An alternative implementation performs most of the same steps but does so on every market-data update rather than only upon receipt of the earnings data. Specifically, when each market-data update message is received, the application constructs two new orders (one to buy, one to sell) at the current prices and serializes each order into a message. The messages are cached but not sent. When the next market-data update arrives, the old order messages are discarded and new ones are created. When the earnings data arrives, the application simply decides which (if either) of the order messages to send.
The first implementation is clearly more efficient (it has a lot less wasted computation), but at the moment when latency matters most (i.e., when the earnings data has been received), the second algorithm is able to send out the appropriate order message sooner. Note that this example presents application-level precomputation; there is an analogous process of branch prediction that takes place in pipelined processors, which can also be optimized (via guided profiling) but is outside the scope of this article.
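A minimal sketch of the precomputation approach; all types, method names, and parameters below are illustrative rather than taken from any real trading system:

    // On every market-data update, build and serialize both possible orders in advance;
    // when the earnings number arrives, only a comparison and a send remain on the critical path.
    class PrecomputedOrderSender {
        enum Side { BUY, SELL }

        private byte[] pendingBuyOrder;
        private byte[] pendingSellOrder;

        void onMarketData(double currentPrice) {
            // Executed on every update, off the critical path.
            pendingBuyOrder = serializeOrder("AAPL", Side.BUY, 100, currentPrice);
            pendingSellOrder = serializeOrder("AAPL", Side.SELL, 100, currentPrice);
        }

        void onEarnings(double eps, double lowEstimate, double highEstimate) {
            // The only work performed when latency matters most.
            if (eps > highEstimate) {
                send(pendingBuyOrder);
            } else if (eps < lowEstimate) {
                send(pendingSellOrder);
            }
            // Within the expected range: do nothing.
        }

        private byte[] serializeOrder(String symbol, Side side, int qty, double price) { return new byte[0]; /* placeholder */ }
        private void send(byte[] orderMessage) { /* placeholder */ }
    }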
Keeping the system warm.
In some low-latency systems long delays may occur between inputs. During these idle periods, the system may grow "cold." Critical instructions and data may be evicted from caches (costing hundreds of nanoseconds to reload), threads that would process the latency-sensitive input are context-switched out (costing tens of microseconds to resume), CPUs may switch into power-saving states (costing a few milliseconds to exit), etc. Each of these steps makes sense from an efficiency standpoint (why run a CPU at full power when nothing is happening?), but all of them impose latency penalties when the input data arrives.
In cases where the system may go for hours or days between input events there is a potential operational issue as well: configuration or environmental changes may have "broken" the system in some important way that won't be discovered until the event occurs—when it's too late to fix.
A common solution to both problems is to generate a continuous stream of dummy input data to keep the system "warm." The dummy data needs to be as realistic as possible to ensure that it keeps the right data in the caches and that breaking changes to the environment are detected. The dummy data needs to be reliably distinguishable from legitimate data, though, to prevent downstream systems or clients from being confused.
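A minimal sketch of such a warming loop, assuming an illustrative Event type with a dummy flag; the period, names, and processor interface are placeholders:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    class Warmer {
        interface EventProcessor { void process(Event e); }

        static class Event {
            private boolean dummy;
            static Event realisticSample() { return new Event(); }  // should be as realistic as possible
            void markAsDummy() { dummy = true; }
            boolean isDummy() { return dummy; }
        }

        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        void start(EventProcessor processor) {
            scheduler.scheduleAtFixedRate(() -> {
                Event dummy = Event.realisticSample();
                dummy.markAsDummy();        // downstream consumers must be able to recognize and discard it
                processor.process(dummy);   // exercises the same code path, caches, and connections as real input
            }, 0, 100, TimeUnit.MILLISECONDS);
        }
    }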
It is common in many systems to process the same data through multiple independent instances of the system in parallel, primarily for the improved resiliency that is conferred. If some component fails, the user will still receive the result needed. Low-latency systems gain the same resiliency benefits of parallel, redundant processing but can also use this approach to reduce certain kinds of variable latency.
All real-world computational processes of nontrivial complexity have some variance in latency even when the input data is the same. These variations can be caused by minute differences in thread scheduling, explicitly randomized behaviors such as Ethernet's exponential back-off algorithm, or other unpredictable factors. Some of these variations can be quite large: page faults, garbage collections, network congestion, etc., can all cause occasional delays that are several orders of magnitude larger than the typical processing latency for the same input.
Running multiple, independent instances of the system, combined with a protocol that allows the end recipient to accept the first result produced and discard subsequent redundant copies, both provides the benefit of less-frequent outages and avoids some of the larger delays.
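In Java, one way to express "accept the first result produced and discard the rest" is ExecutorService.invokeAny, which returns the result of the first task to complete successfully and cancels the others; this is a sketch, with the redundant requests supplied by the caller:

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    static <T> T firstResult(List<Callable<T>> redundantRequests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(redundantRequests.size());
        try {
            // Returns as soon as any one of the redundant requests succeeds.
            return pool.invokeAny(redundantRequests);
        } finally {
            pool.shutdownNow();   // cancel whatever is still running
        }
    }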
Stream processing and short circuits.
Consider a news analytics system whose requirements are understood to be "build an application that can extract corporate earnings data from a press release document as quickly as possible." Separately, it was specified that the press releases would be pushed to the system via FTP. The system was thus designed as two applications: one that received the document via FTP, and a second that parsed the document and extracted the earnings data. In the first version of this system, an open-source FTP server was used as the first application, and the second application (the parser) assumed that it would receive a fully formed document as input, so it did not start parsing the document until it had fully arrived.
Measuring the performance of the system showed that while parsing was typically completed in just a few milliseconds, receiving the document via FTP could take tens of milliseconds from the arrival of the first packet to the arrival of the last packet. Moreover, the earnings data was often present in the first paragraph of the document.
In a multistep process it may be possible for subsequent stages to start processing before prior stages have finished, sometimes referred to as stream-oriented or pipelined processing . This can be especially useful if the output can be computed from a partial input. Taking this into account, the developers reconceived their overall objective as "build a system that can deliver earnings data to the client as quickly as possible." This broader objective, combined with the understanding that the press release would arrive via FTP and that it was possible to extract the earnings data from the first part of the document (i. e., before the rest of the document had arrived), led to a redesign of the system.
The FTP server was rewritten to forward portions of the document to the parser as they arrived rather than wait for the entire document. Likewise, the parser was rewritten to operate on a stream of incoming data rather than on a single document. The result was that in many cases the earnings data could be extracted within just a few milliseconds of the start of the arrival of the document. This reduced overall latency (as observed by the client) by several tens of milliseconds without the internal implementation of the parsing algorithm being any faster.
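A minimal sketch of a stream-oriented parser, assuming the document arrives as text chunks and that the earnings figure can be recognized with a simple pattern; the regular expression and callback are illustrative:

    import java.util.function.Consumer;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Fed chunks of the document as they arrive; emits the earnings figure as soon as it can
    // be recognized, without waiting for the rest of the document.
    class StreamingEarningsParser {
        private static final Pattern EARNINGS = Pattern.compile("earnings per share of \\$(\\d+\\.\\d+)");
        private final StringBuilder buffer = new StringBuilder();
        private final Consumer<Double> onEarnings;
        private boolean done;

        StreamingEarningsParser(Consumer<Double> onEarnings) { this.onEarnings = onEarnings; }

        void onChunk(CharSequence chunk) {
            if (done) return;                 // already extracted; ignore the remainder of the document
            buffer.append(chunk);             // buffering handles a match that spans chunk boundaries
            Matcher m = EARNINGS.matcher(buffer);
            if (m.find()) {
                done = true;
                onEarnings.accept(Double.parseDouble(m.group(1)));
            }
        }
    }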
Conclusion.
While latency requirements are common to a wide array of software applications, the financial trading industry and the segment of the news media that supplies it with data have an especially competitive ecosystem that produces challenging demands for low-latency distributed systems.
As with most engineering problems, building effective low-latency distributed systems starts with having a clear understanding of the problem. The next step is measuring actual performance and then, where necessary, making improvements. In this domain, improvements often require some combination of digging below the surface of common software abstractions and trading some degree of efficiency for improved latency.
Andrew Brook is the CTO of Selerity, a provider of realtime news, data, and content analytics. Previously he led development of electronic currency trading systems at two large investment banks and launched a pre-dot-com startup to deliver AI-powered scheduling software to agile manufacturers. His expertise lies in applying distributed, realtime systems technology and data science to real-world business problems. He finds Wireshark to be more interesting than PowerPoint.
© 2015 ACM 1542-7730/14/0300 $10.00.
Elios | Sat, 07 Nov 2015 09:29:52 UTC.
Thanks for the nice post. That's a great sum-up of problems in the design and implementation of distributed low latency systems.
I'm working on a distributed low-latency market-data distribution system. In this system, one of the biggest challenges is how to measure its latency, which is supposed to be a few microseconds.
In our previous system, latency was measured in an end-to-end manner. We took timestamps in milliseconds on both the publisher and subscriber sides and recorded the difference between them. This works, but we are aware that the result is not accurate, because even with server clocks synchronized via NTP, users sometimes complain that negative latencies are observed.
Given that we are reducing the latency to microseconds, end-to-end measurement seems too limited (it would be better with PTP, but we can't force our users to support PTP in their infrastructure), so we are trying to measure round-trip latency instead. However, I can immediately see several drawbacks with this method:
- Extra complexity to configure and implement the system, because we need to ensure two-way communication.
- We can't deduce the end-to-end latency from the round-trip latency, because the loads in the two directions are not the same (we want to send only some probes and get them back).
Do you have any experience with round-trip latency measurement, and if so, could you please share some best practices?
Low-latency programming.
I have been reading a lot about low-latency financial systems (especially since the famous case of corporate espionage), and the idea of low-latency systems has been on my mind ever since. There are a million applications that could use what these guys are doing, so I would like to learn more about the topic. The thing is, I can't find anything valuable on the subject. Can anyone recommend books, sites, or examples of low-latency systems?
12 Answers.
I work for a financial company that produces low-latency software for communicating directly with exchanges (for submitting trades and streaming prices). We currently develop mostly in Java. Although the low-latency side is not an area I work in directly, I have a fair idea of the skill set required, which in my opinion includes the following:
• Detailed knowledge of the Java memory model and of techniques for avoiding unnecessary garbage collection (e.g., object pooling). Some of the techniques used would generally be considered "anti-patterns" in a traditional OO environment.
• Detailed knowledge of TCP/IP and UDP multicast, including utilities for debugging and measuring latency (e.g., DTrace on Solaris).
• Experience profiling applications.
• Knowledge of the java.nio package, experience developing scalable NIO-based server applications, and experience designing wire protocols. Note also that we normally avoid external frameworks and libraries (e.g., Google Protobuf), preferring to write a lot of custom code.
• Knowledge of FIX and of commercial FIX libraries (e.g., Cameron FIX).
Unfortunately, many of these skills can only be developed "on the job," since there is no substitute for the experience gained by implementing a price server or a trading engine from an exchange's or vendor's specification. However, it is also worth mentioning that our company, at least, tends not to look for specific experience in these (or other) niche areas, preferring instead to hire people with good analytical and problem-solving skills.
Low latency is a function of many things, the two most important being:
• Network latency - that is, the time taken on the network to transmit/receive messages.
• Processing latency - that is, the time taken by your application to act on a message/event.
So, if you say you are writing an order-matching system, network latency would represent how soon within your network you were able to receive the order-matching request, and processing latency would represent the time your application took to match the order against existing open orders.
Multicast, UDP, reliable multicast, and kernel bypass (supported by Java 7, Informatica Ultra Messaging, and many others) over InfiniBand networks are some of the common technologies used by companies in this field.
In addition, there are low-latency programming frameworks such as Disruptor (code.google.com/p/disruptor/) that implement design patterns for dealing with low-latency applications. What can kill you is having to write to a database or to log files as part of your main workflow. You will have to find unique solutions that meet the requirements of the problem you are trying to solve.
In languages like Java, implementing your application so that it creates (nearly) zero garbage becomes extremely important for latency. As Adamski says, having a knowledge of the Java memory model is extremely important. Understand the different JVM implementations and their limitations. The typical Java design patterns built around small-object creation are the first things you will throw out the window; you can never tune the Java garbage collector enough to achieve low latency, and the only thing that can really be fixed is the garbage itself.
Well, it is not just "traditional" real-time programming, it is everything. I work for a stock exchange; speed is king. A typical problem is: what is the fastest way to write to a file? The fastest way to serialize an object? And so on.
Anything on real-time programming would fit the bill. It is not exactly what you are looking for, I suspect, but it is an extremely good place to start.
There are many good answers in this post. I would like to add my experience as well.
To achieve low latency in Java you have to take control of the GC. There are many ways to do this: for example, pre-allocate objects (i.e., use the flyweight design pattern); use primitives and primitive-based data structures; and reuse object instances, for example by creating a system-wide dictionary to reduce the creation of new objects, which is a very good option when reading data from a stream/socket/database. A sketch of the object-reuse idea follows.
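This is a minimal sketch of that object-reuse idea, assuming an illustrative Message type with a reset() method; the pool allocates instances up front and recycles them so the steady-state path produces no garbage:

    import java.util.ArrayDeque;

    class MessagePool {
        static class Message {
            long price;
            long quantity;
            void reset() { price = 0; quantity = 0; }   // clear fields so stale data cannot leak into the next use
        }

        private final ArrayDeque<Message> free = new ArrayDeque<>();

        MessagePool(int size) {
            for (int i = 0; i < size; i++) free.push(new Message());   // allocate everything once, up front
        }

        Message acquire() {
            Message m = free.poll();
            return (m != null) ? m : new Message();   // grow only if the pool is exhausted
        }

        void release(Message m) {
            m.reset();
            free.push(m);
        }
    }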
Try to use wait-free algorithms (which is a bit difficult) and lock-free data structures. You can find tons of examples of these.
Use in-memory computing. Memory is cheap; you can hold terabytes of data in memory.
If you can master bitwise algorithms, they give very good performance.
Use mechanical sympathy - see the LMAX Disruptor, an excellent framework.
Read the white papers on that site and you will get an insight into what is needed for low latency.
If you are interested in low-latency Java development, you should know that it can be done without a real-time (RTSJ) JVM as long as you keep the garbage collector under control. I suggest you take a look at this article, which talks about Java development without GC overhead. We also have many other articles on our site about low-latency Java components.
I would like to comment on low-latency programming. I currently have more than five years of experience developing low-latency, high-performance execution engines for financial software.
First, it is necessary to understand what latency is.
Latency is the time your process needs to complete. It does not necessarily depend on the development tools you are using, such as Java, C++, and so on; it depends on your programming skills and on your system.
Suppose you are using Java: a mistake can still delay the process. For example, you develop a trading application in which, on every price update, you call a series of functions, and so on. This can result in extra variables, unnecessary memory use, and unnecessary loops, all of which can delay the process. The same application can perform better if the developer takes care to avoid the mistakes above.
It also depends on your server system; for example, a multiprocessor system can work well if your application is multi-threaded.
If I remember correctly, real-time Java (RTSJ) is used in this area, although I have not been able to find a good article to link to so far.
Normally, working in low-latency environments means having an understanding of call dependencies and how to reduce them to minimize the dependency chain. This includes using data structures and libraries to cache the data you need, as well as refactoring existing resources to reduce interdependencies.