Algo Trading · System Design

How I Built My Telegram Alert Bot Stack and What I Learned

When I started building algorithmic trading alert bots, I had one goal: get actionable signals to my phone faster than I could find them manually. What I ended up building taught me more about markets, systems thinking, and disciplined engineering than any trading course ever could.

It Started With a Simple Observation

Markets leave fingerprints. When price moves sharply in one direction but certain structural conditions haven’t resolved yet, there’s often a second move waiting. The question wasn’t whether the pattern existed. It was whether I could detect it consistently and get alerted fast enough to act.

The answer was to build a bot.

The Architecture Philosophy

Before writing a single line of code I made one decision that shaped everything: one bot, one job.

Each bot in my stack does exactly one thing. It has its own database, its own Telegram bot, its own deployment. Nothing is shared. If one service goes down the others keep running. If I need to update one I don’t touch the others.

This sounds simple, but most people build monoliths: one giant script doing everything. When something breaks you don't know where. When you want to change one thing you risk breaking five others.

Separate services gave me clarity. Every bot has a clear answer to two questions: what does it do, and what does it not do?

Real Time Data Is Non-Negotiable

The bots run on WebSocket connections. Persistent, live data streams direct from the exchange. Every price tick, every funding rate change, every open interest update arrives in milliseconds.

The alternative, polling REST APIs every few seconds, introduces lag. In fast markets, lag is the difference between a good entry and a bad one. WebSockets eliminated that problem entirely.

Connections do drop, of course, and when they do the bots reconnect on their own. If your bot needs you to restart it, it's not production ready.
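As a rough sketch of what graceful reconnection looks like, here is the pattern in Python with jittered exponential backoff. The function names, retry cap, and delay values are illustrative assumptions, not the production code:

```python
import random
import time
from typing import Callable

def backoff_delay(attempt: int, max_backoff: float = 30.0) -> float:
    """Jittered exponential backoff: 2^attempt seconds, capped,
    plus up to one second of jitter so restarts don't stampede."""
    return min(max_backoff, 2 ** attempt) + random.uniform(0, 1)

def run_stream(connect: Callable[[], None],
               max_retries: int = 5,
               sleep: Callable[[float], None] = time.sleep) -> bool:
    """Keep a streaming connection alive. `connect` blocks while the
    stream is healthy and raises ConnectionError when it drops.
    Returns True on a clean shutdown, False if retries run out."""
    attempt = 0
    while attempt <= max_retries:
        try:
            connect()       # blocks for the lifetime of the connection
            return True     # clean return means shutdown was requested
        except ConnectionError:
            attempt += 1
            sleep(backoff_delay(attempt))
    return False
```

The `sleep` parameter is injected so the loop can be tested without real delays; in production the default `time.sleep` does the waiting.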

Filters Are The Product

The most common mistake I see in signal bots is firing too many alerts. Every alert that doesn’t lead to a good trade erodes trust, yours and your subscribers’.

My bots run every potential signal through a multi-stage filter pipeline before anything reaches Telegram. Volume filters. Price move filters. Structural condition filters. Quality grade filters. Each filter has one job: eliminate signals that don’t meet the bar.

The result is fewer alerts. That’s intentional. A quiet bot in bad market conditions is working correctly. Silence is a signal too.

I never loosen filters to generate more activity. The filters are the product. Protecting them is protecting the edge.
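A minimal version of such a pipeline might look like this in Python. The `Signal` fields, thresholds, and filter names are illustrative assumptions, not the real filter values; the point is the shape: each stage either passes the signal or returns a rejection reason.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Signal:
    symbol: str
    volume_usd: float
    price_move_pct: float
    grade: str  # "A" best … "D" worst

# A filter returns None to pass, or the reason it rejected the signal.
Filter = Callable[[Signal], Optional[str]]

def volume_floor(min_usd: float) -> Filter:
    return lambda s: None if s.volume_usd >= min_usd else "volume too thin"

def move_floor(min_pct: float) -> Filter:
    return lambda s: None if abs(s.price_move_pct) >= min_pct else "move too small"

def grade_floor(worst_allowed: str) -> Filter:
    order = "ABCD"
    return (lambda s: None
            if order.index(s.grade) <= order.index(worst_allowed)
            else "grade below floor")

def run_pipeline(signal: Signal, filters: list[Filter]) -> Optional[str]:
    """Return None if the signal survives every filter, otherwise
    the first rejection reason. Only surviving signals reach Telegram."""
    for f in filters:
        reason = f(signal)
        if reason is not None:
            return reason
    return None
```

Because each stage is a plain function, tightening the bar is a one-line change and every rejection is explainable.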

Live Signal · What It Looks Like

FUSDT: funding/price divergence resolves in an 8.4% dump

The scanner flagged FUSDT with price pushing higher while funding stayed positive and open interest climbed. Classic trapped-long setup: the crowd paying to hold the move up, no sellers needed to break it. The alert fired, the forwarder flipped the direction for subscribers, and price dumped 8.4% within the window.

The filter pipeline is what made that possible. Volume threshold, funding extremity, OI confirmation, grade floor. Each one a chance for the signal to get thrown out. The few that survive all four are the ones worth acting on.

Scanner: DRIFTSCOPE  ·  Direction: fade  ·  Grade: B  ·  Resolution: same session

The BTC Macro Filter Changed Everything

Early on the bots fired signals regardless of what Bitcoin was doing. Some of those signals worked. Many didn’t, because the macro environment was against them.

Adding a Bitcoin trend filter as a gate for certain signal types made an immediate difference. When Bitcoin is in a downtrend, long signals on altcoins face headwinds no amount of local structure can overcome.

The filter runs automatically. When Bitcoin’s trend changes the bots recalibrate on the next refresh cycle. No restart, no manual intervention. The system is self-healing by design.
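As a sketch, the gate itself can be a single function consulted before any directional signal is released. The symmetric rule for shorts in a BTC uptrend is my assumption here for completeness; the actual trend detection and refresh logic is more involved:

```python
from enum import Enum

class Trend(Enum):
    UP = "up"
    DOWN = "down"
    FLAT = "flat"

def btc_gate(signal_direction: str, btc_trend: Trend) -> bool:
    """Block signals that fight Bitcoin's trend: no altcoin longs
    in a BTC downtrend, no shorts in an uptrend (assumed symmetric).
    A flat or ranging BTC lets everything through."""
    if btc_trend is Trend.DOWN and signal_direction == "long":
        return False
    if btc_trend is Trend.UP and signal_direction == "short":
        return False
    return True
```

Because the gate reads the current trend on every check, a trend flip takes effect on the next refresh cycle with no restart.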

State Persistence Saves You From Yourself

Bots get redeployed. Servers restart. Railway, my deployment platform, does rolling updates that cause brief downtime.

Every bot in my stack writes critical state to SQLite on every cycle. Current trend. Last signal timestamp. Active cooldowns. When the bot restarts it reads that state back and continues exactly where it left off. No missed crossovers. No duplicate alerts. No gaps in coverage.

If your bot loses its memory on restart it will eventually fire a duplicate alert at the worst possible moment. Persistence prevents that.
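The pattern needs nothing beyond the standard library's sqlite3 module. A minimal sketch using a key-value snapshot table (the schema and key names are illustrative, not the bots' actual layout):

```python
import sqlite3

def open_state(path: str = "state.db") -> sqlite3.Connection:
    """Open (or create) the state database."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS bot_state (
                       key   TEXT PRIMARY KEY,
                       value TEXT NOT NULL
                   )""")
    return con

def save_state(con: sqlite3.Connection, key: str, value: str) -> None:
    """Upsert so every cycle overwrites the previous snapshot."""
    con.execute("INSERT INTO bot_state (key, value) VALUES (?, ?) "
                "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
                (key, value))
    con.commit()

def load_state(con: sqlite3.Connection, key: str, default: str = "") -> str:
    """Read state back on startup; fall back to a default on first run."""
    row = con.execute("SELECT value FROM bot_state WHERE key = ?",
                      (key,)).fetchone()
    return row[0] if row else default
```

On restart the bot calls `load_state` for each key (trend, last signal timestamp, cooldowns) and resumes exactly where it left off.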

Automatic Data Collection

From the beginning I knew I’d eventually need data to validate whether the signals were actually working. I built data collection into every bot from day one.

Every signal that fires gets a complete record written to the database. Entry conditions, price levels, timestamp, signal quality metrics. As the trade plays out, the record updates automatically. When the signal resolves, the final outcome is written with no manual input required.

After enough signals accumulate the database answers questions the chart can’t. One came out of a KSM signal where the fade trade closed in profit but funding stayed extreme and open interest actually grew afterwards. In the old days I would have moved on. The record made it obvious the imbalance never resolved, which meant the original signal direction was still live. The second trade, taken in the opposite direction of the first, also closed in profit.

That two-phase behaviour isn’t something I would have seen watching a chart. The database surfaced it. Start collecting from day one, not after you wish you had it.
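A stripped-down version of that record lifecycle might look like this; the schema and helper names are illustrative, and the real records carry far more fields than shown here:

```python
import sqlite3
import time

def init_signals(con: sqlite3.Connection) -> None:
    con.execute("""CREATE TABLE IF NOT EXISTS signals (
                       id          INTEGER PRIMARY KEY,
                       symbol      TEXT NOT NULL,
                       direction   TEXT NOT NULL,
                       entry_price REAL NOT NULL,
                       fired_at    REAL NOT NULL,
                       outcome_pct REAL            -- NULL until resolved
                   )""")

def record_signal(con, symbol: str, direction: str, entry_price: float) -> int:
    """Write the complete record the moment the signal fires."""
    cur = con.execute(
        "INSERT INTO signals (symbol, direction, entry_price, fired_at) "
        "VALUES (?, ?, ?, ?)", (symbol, direction, entry_price, time.time()))
    con.commit()
    return cur.lastrowid

def resolve_signal(con, signal_id: int, exit_price: float) -> None:
    """Write the final outcome automatically, no manual input."""
    direction, entry = con.execute(
        "SELECT direction, entry_price FROM signals WHERE id = ?",
        (signal_id,)).fetchone()
    pct = (exit_price - entry) / entry * 100
    if direction == "short":
        pct = -pct  # a price drop is profit on a short
    con.execute("UPDATE signals SET outcome_pct = ? WHERE id = ?",
                (pct, signal_id))
    con.commit()
```

Once enough rows accumulate, questions like the KSM two-phase behaviour become a single SQL query instead of a memory.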

The Human In The Loop

The bots find setups. I trade them. Every signal I take personally before offering it to subscribers.

This isn’t inefficiency. It’s quality control. Real money on real trades surfaces problems that test environments never will. A routing bug discovered on a small trade is a lesson. The same bug discovered after going public is a disaster.

Automated systems don’t improve themselves. The human in the loop is what makes them get better.

Why I Let AI Do The Heavy Lifting

AI can do in seconds what takes a human analyst minutes. Pattern recognition across hundreds of symbols simultaneously, structural condition checks, catalyst research, signal grading. All of it runs faster and more consistently than any manual process. I don’t need to be glued to a screen watching candles form when a system can watch all 537 symbols at once and tell me exactly when something worth acting on appears.

My edge isn’t in staring harder at charts. It’s in building better systems and knowing how to interpret what they find.

The Tech Stack

Every tool in my stack was chosen for a specific reason, not because it was popular.

GitHub

Every bot has its own private repository. Version control means I can roll back any change that breaks production. Pushes to GitHub automatically trigger Railway deployments. No manual uploads, no FTP, no SSH.

Railway

Each bot is its own service with its own persistent volume for the database. Logs in real time. Deployments in under a minute. For a solo developer running multiple services simultaneously it removes the infrastructure overhead entirely.

Telegram

Every alert goes to a Telegram channel. Instant delivery, works on every device, bots are free to run, and the API is reliable. Subscribers are already there. No app to install, no platform to onboard them to.
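Delivery itself needs nothing beyond the Bot API's sendMessage method. A sketch using only the standard library; the formatting helper and its fields are an illustration, not the real alert template:

```python
import json
import urllib.request

def format_alert(symbol: str, direction: str, grade: str,
                 move_pct: float) -> str:
    """Render a compact one-line alert mirroring the scanner metadata."""
    return (f"{symbol} · {direction} · grade {grade} · "
            f"move {move_pct:.1f}%")

def send_alert(token: str, chat_id: str, text: str) -> dict:
    """POST a message to a channel via the Telegram Bot API.
    `token` comes from @BotFather; `chat_id` is the target channel."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

In practice you would wrap `send_alert` with retry handling, since Telegram rate limits bots that post in bursts.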

Python + Node.js

Node.js handles the real time exchange connections. Fast, event-driven, and WebSocket-native. Python handles the more analytical work where its libraries and readability are better suited. The two languages complement each other rather than compete.

Who Writes The Code

It’s a collaboration.

I bring the idea, the trading logic, the market intuition. I write the initial structure, the skeleton of what the bot should do. AI completes it, refines it, debugs it, and improves it.

Roughly 50/50. Domain knowledge from me, implementation speed from the machine. Neither half works without the other. It’s just how modern development works when you’re a trader first and a developer second.

34 Bots Built. 7 Still Running.

34 Bots Built · 7 Active Today · 27 Retired

Over the course of this journey I've built 34 bots. Most are retired. That's not failure. That's iteration. Every retired bot taught me something. Some had logic that didn't hold up in live markets. Some were made redundant when a better approach emerged. Some were experiments that answered a question and got shelved once the answer was clear.

The 7 currently in active service are the survivors. They made it because they proved themselves in live trading conditions, not because they looked good in backtests.

The graveyard of retired bots isn’t a collection of failures. It’s the foundation the active stack is built on.

What I’d Tell Someone Starting Today

01. Start simple. One bot, one signal type, one market. Get that working perfectly before adding complexity.

02. Build for failure. Assume your WebSocket will drop, your API will rate limit, your server will restart. Handle all of it gracefully. A bot that needs babysitting is a liability.

03. Protect your signal logic. Once you find something that works, treat detection logic as load-bearing. Alert formats, routing, and data collection are safe to iterate on freely. Detection only changes when a failure mode is clearly identified. Not when it's been quiet, not because you're bored, not to chase more signals.

04. Let the data speak. Don't adjust thresholds based on one trade. Don't loosen filters because it's been quiet. Collect data, wait for statistical significance, then decide.

05. The silence is part of the system. A bot that only fires when conditions are right is more valuable than one that fires constantly. Train your subscribers to understand this. Train yourself first.

06. Use AI. Don't spend hours doing what a system can do in seconds. Your value is in the decisions only you can make, not in the analysis a machine can run faster and more consistently than you ever could.

07. It's rare that you'll get it right the first time. Keep at it. Every bug is a lesson. Every bad signal is data. Every routing fix is an improvement. The bots that work today only work because of everything that went wrong before them. Expect the process to be messy. Build anyway.


The stack I run today looks nothing like what I built on day one. But every change was driven by real trades, real data, and real outcomes. That’s the only way to build something you can trust.

About the author

x2degen


a degen doubling bags. bitcoin holder since 2011. onchain analysis advocate. sharing my unfiltered thoughts.