API trading execution risks: India broker checklist
Latency is now a trading input, not a tech nuisance
Indian algo trading chatter has shifted from generic “API is slow” complaints to treating latency as something that directly changes realised P&L. The repeated point is that execution timing can change results even when the signal logic is unchanged. Traders are framing brokerage API lag as a variable inside the strategy, not a background issue. The reasoning in these discussions is simple: a signal can become stale by the time the order reaches the market, and when that happens the same entry rule can turn into a worse fill or no fill at all. Many posts link small delays to slippage and missed fills in fast intraday setups. A frequently repeated community claim is that delays as small as 74 milliseconds can compound into meaningful losses across many trades. The practical conclusion is that “signal quality” has to be evaluated together with “execution quality”.
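To make the compounding claim concrete, here is a back-of-the-envelope sketch in Python. Every number below is an assumption for illustration, not a figure from these threads: the point is only that a small, systematic per-trade cost scales linearly with trade count.

```python
# Hypothetical illustration of latency-driven slippage compounding.
# All inputs are assumed values, not measured or quoted figures.
slippage_per_trade = 0.75   # rupees lost per trade to stale fills (assumed)
trades_per_day = 40         # intraday order count (assumed)
trading_days = 250          # approximate trading days per year

annual_cost = slippage_per_trade * trades_per_day * trading_days
print(f"Assumed annual latency cost: Rs {annual_cost:,.0f}")
```

Even with modest assumptions, the arithmetic shows why traders in these threads treat a per-trade timing cost as a strategy-level input rather than noise.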
Tick-to-trade is a stacked pipeline where delays add up
Posts often describe latency as a pipeline from tick to trade, where multiple steps contribute to end-to-end delay. The chain typically starts with market data ingestion, then strategy logic, then order generation, then the broker API call. Traders are separating what they can control in their own stack from what sits inside broker infrastructure. One bottleneck cited in social threads is the round trip over the PCIe bus between the network interface card and the CPU in software-based systems. People also discuss runtime overheads in languages like Java, Python, or Go as a source of unpredictability: quoted figures for pauses such as garbage collection run from about 100 microseconds to over 50 milliseconds. Even the exchange adds internal time for matching and risk checks, often quoted as 200 to 800 microseconds. The takeaway is that two identical strategies can behave differently when the system is under load.
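The staged pipeline described above can be instrumented directly. A minimal Python sketch, with stand-in stage bodies and an assumed tick/order shape, times each stage separately so the slow one becomes visible instead of being buried in one end-to-end number:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record wall-clock time spent in one pipeline stage, in milliseconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = (time.perf_counter() - start) * 1e3

# Hypothetical stand-ins for each tick-to-trade stage.
with stage("market_data"):
    tick = {"symbol": "NIFTY", "ltp": 24500.0}
with stage("strategy"):
    signal = "BUY" if tick["ltp"] > 24000 else None
with stage("order_generation"):
    order = {"symbol": tick["symbol"], "side": signal, "qty": 50}

for name, ms in timings.items():
    print(f"{name}: {ms:.3f} ms")
```

In a live system the same wrapper would sit around the real data handler, strategy call, and broker API call, turning a vague “the stack is slow” into a per-stage breakdown.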
Why order-to-ack can matter more than ping
A repeated claim is that the broker is part of the execution chain, not a passive pipe. Orders pass through broker OMS and RMS checks before reaching the exchange, and these checks can add meaningful delay. Many posts argue that an overloaded or outdated broker stack can slow acknowledgements even when the client setup is clean. This is why traders talk about “order-to-ack” as a key metric, not just network latency. Order-to-ack is described as capturing OMS-RMS validation plus broker acknowledgement delays. Several discussions say this component can dominate real execution timing compared with network routing alone. Server location is also discussed, especially brokers with servers in NSE or BSE colocation facilities that shorten the path. In practice, this framing pushes traders to measure broker-side behaviour directly instead of relying on low-ping screenshots.
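Order-to-ack can be measured from the client side by timestamping around the blocking API call. The sketch below assumes a hypothetical broker client whose `place_order` returns only once the broker acknowledges the order; `StubBroker` simulates OMS/RMS delay purely for illustration:

```python
import time

def place_order_timed(broker, order):
    """Measure order-to-ack: time from API call to broker acknowledgement.

    `broker.place_order` is a hypothetical client method assumed to block
    until the broker returns its acknowledgement.
    """
    t0 = time.perf_counter()
    ack = broker.place_order(order)
    order_to_ack_ms = (time.perf_counter() - t0) * 1e3
    return ack, order_to_ack_ms

class StubBroker:
    """Stand-in broker that pretends OMS/RMS checks take a few milliseconds."""
    def place_order(self, order):
        time.sleep(0.005)  # simulated OMS/RMS validation delay
        return {"order_id": "ABC123", "status": "ack"}

ack, ms = place_order_timed(StubBroker(), {"symbol": "NIFTY", "qty": 50})
print(f"order-to-ack: {ms:.1f} ms -> {ack['status']}")
```

Logging this number per order, rather than relying on a one-off ping test, is what lets a trader compare broker-side behaviour across sessions and load conditions.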
Average latency is not the risk point, the 99th percentile is
Threads repeatedly warn that average latency hides the worst outcomes. Traders focus on spikes and jitter rather than a single “mean latency” number. The 99th percentile is often described as the real risk point for an algo that runs continuously. Social discussions highlight that spikes show up when strategies are most active, which amplifies impact. The periods most often mentioned are the 9:15 am market open, expiry days, and sharp volatile moves. During those windows, execution timing is tied to queue priority and whether an order is still aligned to the signal. API timeouts are highlighted as a separate failure mode from normal slippage. A timeout can translate into missed exits, which traders treat as higher risk than a slightly worse fill.
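Computing the 99th percentile from collected latency samples takes only the standard library. The latency numbers below are synthetic, chosen to show how a handful of spikes can leave the mean looking healthy while the p99 tells the real story:

```python
import statistics

def p99(samples_ms):
    """99th-percentile latency from a list of samples (needs >= 2 samples)."""
    return statistics.quantiles(samples_ms, n=100)[98]

# Synthetic latencies: mostly fast, with a few spikes (illustrative only).
latencies = [5.0] * 97 + [120.0, 250.0, 400.0]
print(f"mean: {statistics.mean(latencies):.1f} ms")
print(f"p99:  {p99(latencies):.1f} ms")
```

Here the mean sits near 12 ms while the p99 is in the hundreds of milliseconds, which is exactly the gap these threads warn about: the strategy lives at the tail, not the average.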
Rate limits and “429” errors are treated as execution failures
Beyond speed, traders are discussing throughput ceilings as hard execution constraints. A commonly cited constraint is an order rate limit of 10 orders per second via broker APIs. Exceeding the limit is repeatedly linked to “429 Too Many Requests” errors. Community posts describe these errors as arriving at the worst time, such as during a volatility spike or while exiting positions. In these discussions the risk is not theoretical: blocked orders can mean missed exits. Because of this, posters talk about building in cushions rather than firing bursts at the limit. Retry logic is commonly discussed as a control, but it is also treated as something that can add delay of its own. The overall message is that rate limits should be treated like a rule the strategy must respect, not a boundary to test in production.
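One way to build in that cushion is a sliding-window throttle that blocks on the client side before the broker cap is hit. This is a generic sketch, not any broker's implementation; the 8-per-second cushion below the quoted 10-per-second limit is an assumption about how much headroom to leave:

```python
import time
from collections import deque

class OrderThrottle:
    """Sliding-window throttle that keeps order rate under a broker cap.

    Defaults to 8 orders/s as a cushion below an assumed 10 orders/s limit.
    """
    def __init__(self, max_orders=8, window_s=1.0):
        self.max_orders = max_orders
        self.window_s = window_s
        self.sent = deque()  # timestamps of recent orders

    def acquire(self):
        """Block until sending one more order stays within the window."""
        now = time.monotonic()
        while self.sent and now - self.sent[0] > self.window_s:
            self.sent.popleft()  # drop timestamps outside the window
        if len(self.sent) >= self.max_orders:
            # Wait until the oldest order falls out of the window, then retry.
            time.sleep(max(self.window_s - (now - self.sent[0]), 0))
            return self.acquire()
        self.sent.append(time.monotonic())

throttle = OrderThrottle(max_orders=8)
for _ in range(10):
    throttle.acquire()   # the 9th and 10th calls wait instead of risking a 429
    # broker.place_order(...) would go here
```

Pairing a throttle like this with retry logic keeps retries from becoming a second burst that trips the same limit again.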
Operational practices traders are adopting for high-activity sessions
Many retail traders mention moving execution from home broadband to a Mumbai-based VPS to cut distance and jitter. This is shared as a practical step rather than a performance flex, especially for intraday strategies. Posts also recommend measuring market-data latency separately from order acknowledgement latency. The logic is that the fix differs depending on which side is lagging. WebSocket stability is treated as a live-market test item, not a documentation checkbox. Traders also list operational failure points like broker downtime, network instability, and misconfigured webhooks. Because algos keep running, a single malfunction can scale quickly if it is not detected. Some traders propose forcing a minimum 0.5-second wait before executing trades during volatile conditions to reduce malfunction risk. The broader design goal is to plan for timeouts and partial fills, not just best-case fills.
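The proposed minimum wait during volatile conditions can be enforced with a small cooldown gate in front of order placement. The class and method names here are illustrative; the 0.5-second figure matches the one in the posts:

```python
import time

class VolatilityCooldown:
    """Enforce a minimum gap between trades when markets are volatile,
    as some traders propose (0.5 s, matching the figure in the posts)."""
    def __init__(self, min_gap_s=0.5):
        self.min_gap_s = min_gap_s
        self.last_trade = float("-inf")

    def ready(self, volatile: bool) -> bool:
        """Return True if a trade may execute now; record it if so."""
        now = time.monotonic()
        if volatile and now - self.last_trade < self.min_gap_s:
            return False          # too soon after the last trade
        self.last_trade = now
        return True

gate = VolatilityCooldown()
print(gate.ready(volatile=True))   # True: first trade is allowed
print(gate.ready(volatile=True))   # False: inside the 0.5 s window
```

The gate only throttles when the volatility flag is set, so calm-market behaviour is unchanged; how that flag is computed is left to the strategy.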
SEBI’s retail algo timeline and static IP rule change the setup
Latency conversations are also being tied to compliance readiness under SEBI’s retail algo framework. Posts note that SEBI deferred implementation from August 1 to October 1, and then extended the timeline. Discussions suggest brokers ready with required systems may go live from October 1, while others follow a glide path. The shared milestone summary says that by October 31, brokers must submit at least one retail algo product via API and apply to register at least one strategy with exchanges. By November 30, registration of multiple retail algo products and strategies must be completed, as quoted in these threads. By January 3, 2026, brokers must participate in at least one mock session with fully compliant functionality. SEBI also cautioned that brokers missing milestones will be barred from onboarding new retail clients for API-based algo trading from January 5, 2026. Another repeatedly highlighted operational change is that API order execution must originate from a pre-approved static IP, typically one primary and one backup IP per account.
A practical go-live checklist for signal-to-execution failures
The trader-created checklists focus on controlling failure modes that show up in live markets. At the strategy layer, posts emphasise accounting for latency between signal and execution, because that gap can change whether the signal is valid. At the order-handling layer, traders stress correct handling of partial fills and rejections. At the infrastructure layer, they recommend planning for internet, broker, or exchange outages, not just local machine issues. At the API layer, community guidance is to stay within rate limits such as 10 orders per second and to design around “429” errors. At the monitoring layer, real-time alerts for order failures are treated as essential. Many threads also insist on a manual override or kill switch to stop trading instantly. Compliance readiness is now part of the operational checklist, including static IP whitelisting when required by the broker. The consistent theme is that stable execution under stress is valued more than the lowest latency number on a quiet day.
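The manual kill switch on that checklist can be as simple as a shared flag the trading loop checks before every order. The sketch below uses Python's `threading.Event`; the signal source and order function are illustrative stand-ins, not a real broker integration:

```python
import threading

kill_switch = threading.Event()

def trading_loop(get_signal, place_order):
    """Check the kill switch before every order. Setting the event from
    anywhere (a hotkey handler, a monitor, an alert callback) halts
    trading immediately."""
    while not kill_switch.is_set():
        signal = get_signal()
        if signal and not kill_switch.is_set():
            place_order(signal)

# Illustrative stand-ins: a finite signal stream and an order recorder.
orders = []
signals = iter(["BUY", "SELL", "BUY"])

def get_signal():
    s = next(signals, None)
    if s is None:
        kill_switch.set()   # no more signals: stop the loop
    return s

trading_loop(get_signal, orders.append)
print(orders)  # ['BUY', 'SELL', 'BUY']
```

Because the flag is checked both before fetching a signal and again before placing the order, setting it mid-cycle stops the very next order rather than waiting for the loop to come around.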