Introduction
Welcome to the documentation for Argus, a powerful and flexible open-source monitoring tool for EVM-compatible blockchains.
Argus is designed to provide a sleepless, vigilant eye over on-chain activity, giving you the power to react to events in real-time. It serves as a critical piece of infrastructure for any project relying on or interacting with EVM chains.
What is Argus?
Argus is a self-hosted service that connects to an EVM node, processes new blocks as they are mined, and evaluates transactions and event logs against your custom-defined rules. When a rule is matched, Argus can trigger notifications or other automated workflows.
It is built with a few core principles in mind:
- Reliability at the Core: Built in Rust, Argus is designed for high-performance, concurrent, and safe operation, ensuring it can be a dependable part of your infrastructure.
- Deep Flexibility: At the heart of Argus is the Rhai scripting engine. This allows you to write expressive, powerful, and fine-grained filtering logic that goes far beyond simple "from/to" address matching. If you can express it in a script, you can monitor for it.
- API-First Design: Argus includes a REST API, allowing you to dynamically inspect monitor configurations without downtime.
- Stateful and Resilient: Argus tracks its progress in a local database, allowing it to gracefully handle restarts and resume monitoring exactly where it left off, ensuring no blocks are missed.
This documentation will guide you through installing Argus, configuring your first monitors, and mastering its powerful filtering capabilities.
Quick Start
This guide will walk you through the essential steps to configure and run your first Argus monitor using Docker Compose.
Prerequisites
Ensure you have completed the Docker installation steps, including cloning the repository, creating your `.env` file, and creating the `data` directory.
1. Review Application Configuration (`app.yaml`)
The `configs/app.yaml` file contains the core settings for the application. The most critical settings are the RPC endpoints and the initial starting block.
We set `initial_start_block` to a negative offset. This tells Argus to start processing from a block that is slightly behind the absolute tip of the chain. This is a critical reliability feature to avoid issues with chain reorganizations (reorgs), where the most recent blocks can be altered. Starting from a slightly older, more "finalized" block ensures that the data Argus processes is stable and that no events are missed.
# configs/app.yaml
database_url: "sqlite:argus.db"
rpc_urls:
- "https://eth.llamarpc.com"
- "https://1rpc.io/eth"
network_id: "ethereum"
# Start 1000 blocks behind the chain tip to avoid issues with block reorganizations.
initial_start_block: -1000
# ... other settings
Note: The `database_url` is relative to the container's working directory. The `docker-compose.yml` file mounts the local `./data` directory to `/app`, so the database file will be created at `./data/argus.db` on your host machine.
2. Define a Monitor (`monitors.yaml`)
The repository provides example configurations. Let's copy them to create your local, editable versions.
cp configs/monitors.example.yaml configs/monitors.yaml
cp configs/actions.example.yaml configs/actions.yaml
Now, open `configs/monitors.yaml`. For this example, we'll use the pre-configured "Large ETH Transfers" monitor.
# configs/monitors.yaml
monitors:
- name: "Large ETH Transfers"
network: "ethereum"
filter_script: |
tx.value > ether(10)
actions:
- "my-webhook"
This monitor will trigger for any transaction on `ethereum` where more than 10 ETH is transferred. It will send a notification using the `my-webhook` action.
3. Configure an Action (`actions.yaml`)
Finally, let's configure how we get notified. Open `configs/actions.yaml`.
To receive alerts, you'll need a webhook endpoint. For testing, you can use a service like Webhook.site to get a temporary URL.
Update the `url` in the `my-webhook` action configuration with your actual webhook URL. Remember to use environment variables for secrets!
# configs/actions.yaml
actions:
- name: "my-webhook"
webhook:
url: "${WEBHOOK_URL}" # <-- SET THIS IN YOUR .env FILE
message:
title: "Large ETH Transfer Detected"
body: |
- **Amount**: {{ tx.value | ether }} ETH
- **From**: `{{ tx.from }}`
- **To**: `{{ tx.to }}`
- **Tx Hash**: `{{ transaction_hash }}`
Now, open your `.env` file and add the `WEBHOOK_URL`:
# .env
WEBHOOK_URL=https://webhook.site/your-unique-url
4. Run Argus
With the configuration in place, you are now ready to start the monitoring service.
Run the following command from the root of the project:
docker compose up -d
Argus will start, automatically run database migrations, connect to the RPC endpoint, and begin processing new blocks. When a transaction matches your filter (a transfer of >10 ETH), a notification will be sent to the webhook URL you configured.
You can view the application's logs with:
docker compose logs -f
To stop the service:
docker compose down
Installation
There are two primary ways to install and run Argus: using Docker (recommended for most users) or by building from source (ideal for developers and contributors).
Using Docker (Recommended)
The quickest and most reliable way to get Argus running is with Docker and Docker Compose. This approach isolates dependencies and simplifies the setup process.
Prerequisites
- Docker: Install Docker Engine and Docker Compose.
- Git: To clone the repository.
Setup
- Clone the repository:
  git clone https://github.com/isSerge/argus-rs
  cd argus-rs
- Configure Secrets: Argus uses a `.env` file to manage secrets for actions. Copy the example file to create your own:
  cp .env.example .env
  Now, open the `.env` file and fill in the required tokens and webhook URLs for the actions you plan to use.
- Create Data Directory: The Docker Compose setup is configured to persist the application's database in a local `data` directory:
  mkdir -p data
With these steps complete, you are ready to run the application. See the Quick Start guide for instructions on how to run your first monitor.
Building from Source (for local development)
This method is for users who want to build the project from source code, for example, to contribute to development or to run it without Docker.
Prerequisites
Before you begin, ensure you have the following tools installed on your system:
- Rust: Argus is built in Rust. If you don't have the Rust toolchain installed, you can get it from the official rust-lang.org website. The standard installation via `rustup` is recommended:
  # Example installation command (see website for latest)
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
- Git: You will need Git to clone the repository.
- `sqlx-cli`: Argus uses `sqlx` for database migrations. You'll need to install the companion CLI tool to prepare the database. You can install it using `cargo`:
  cargo install sqlx-cli
Compiling
Once the prerequisites are in place, you can clone the repository and build the project.
- Clone the repository:
  git clone https://github.com/isSerge/argus-rs
  cd argus-rs
- Build the project in release mode. This command compiles an optimized binary for production use:
  cargo build --release
The final binary will be located at `target/release/argus`.
Next Steps
With Argus successfully compiled, the next step is to set up your configuration files and prepare the database. This is covered in the Quick Start guide.
Configuration Overview
Argus is configured primarily through a set of YAML files. By default, the application looks for these files in a `configs/` directory in the current working directory. You can specify a different directory using the `--config-dir <path>` command-line argument.
The configuration is split into three key files:
- `app.yaml`: Contains global application settings, such as RPC endpoints, database connections, and performance tuning parameters. This is the main configuration for the Argus service itself.
- `monitors.yaml`: This is where you define what you want to monitor on the blockchain. Each monitor specifies a network, an optional contract address, and a Rhai filter script that determines if a transaction or log is a match.
- `actions.yaml`: This file defines how match data is delivered when a monitor finds a match. You can configure various notification channels (e.g., webhooks), set policies like throttling, or send data to a queue (Kafka, NATS, etc.).
Select a topic below for a detailed breakdown of each file and its parameters.
- `app.yaml` Configuration (`app_yaml.md`)
- Monitors Configuration (`monitors_yaml.md`)
- Actions Configuration (`actions_yaml.md`)
- ABI Management (`config_abis.md`)
app.yaml Configuration
The `app.yaml` file defines the global settings for the service, including how it connects to the blockchain, how it stores data, etc.
Example app.yaml
# The connection string for the SQLite database.
database_url: "sqlite:data/monitor.db"
# A list of RPC endpoint URLs. Argus will cycle through these if one fails.
rpc_urls:
- "https://eth.llamarpc.com"
- "https://1rpc.io/eth"
- "https://rpc.mevblocker.io"
# A unique identifier for the network being monitored.
network_id: "ethereum"
# The directory where contract ABI JSON files are located.
abi_config_path: abis/
# Controls where Argus starts on a fresh database.
# Can be a block number (e.g., 18000000), 'latest', or a negative offset (e.g., -100).
initial_start_block: -100
# Performance and reliability settings.
block_chunk_size: 5
polling_interval_ms: 10000
confirmation_blocks: 12
# API server configuration
server:
enabled: true
listen_address: "0.0.0.0:8080"
Configuration Parameters
Core Settings
Parameter | Description | Default |
---|---|---|
database_url | The connection string for the SQLite database. This field is required. | (none) |
rpc_urls | A list of RPC endpoint URLs for the EVM network. Argus will use them in a fallback sequence if one fails. At least one URL is required. | (none) |
network_id | A unique identifier for the network being monitored (e.g., "ethereum", "sepolia"). This field is required. | (none) |
abi_config_path | The directory where contract ABI JSON files are located. | abis/ |
initial_start_block | Controls where Argus starts processing blocks on a fresh database. Can be an absolute block number (e.g., 18000000 ), a negative offset from the latest block (e.g., -100 ), or the string 'latest' . | -100 |
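The negative offset form is evaluated against the chain tip when Argus first initializes its database. As an illustrative sketch (the tip value is hypothetical):
# If the latest block is 18,000,100 at first startup,
# this setting begins processing at block 18,000,000.
initial_start_block: -100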
Performance & Reliability
Parameter | Description | Default |
---|---|---|
block_chunk_size | The number of blocks to fetch and process in a single batch. | 5 |
polling_interval_ms | The interval in milliseconds to poll for new blocks. | 10000 |
confirmation_blocks | Number of blocks to wait for before processing to protect against reorgs. A higher number is safer but introduces more latency. | 12 |
notification_channel_capacity | The capacity of the internal channel for sending notifications. | 1024 |
shutdown_timeout | The maximum time in seconds to wait for a graceful shutdown. | 30 |
aggregation_check_interval | The interval in seconds at which to check for aggregated matches for actions with aggregation policies. | 5 |
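These settings trade freshness for safety. A hedged sketch of how they interact (assuming roughly 12-second Ethereum block times, which Argus does not itself configure):
# A block is only processed after `confirmation_blocks` more blocks are mined
# on top of it: 12 blocks * ~12 s = roughly 2.5 minutes behind the chain tip.
confirmation_blocks: 12
# New blocks are discovered at most one polling interval (10 s) after they appear.
polling_interval_ms: 10000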
Nested Configuration Sections
The following configurations are nested under their respective top-level keys in `app.yaml`.
Server Settings (`server`)
These settings control the built-in REST API server.
Default Configuration:
server:
enabled: false
listen_address: "0.0.0.0:8080"
Parameter | Description |
---|---|
enabled | Set to true to enable the API server. Defaults to false for security. |
listen_address | The address and port for the HTTP server to listen on. |
RPC Client Settings (`rpc_retry_config`)
These settings control the behavior of the client used to communicate with the EVM RPC endpoints.
Default Configuration:
rpc_retry_config:
max_retry: 10
backoff_ms: 1000
compute_units_per_second: 100
Parameter | Description |
---|---|
max_retry | The maximum number of retries for a failing RPC request. |
backoff_ms | The initial backoff delay in milliseconds for RPC retries. |
compute_units_per_second | The number of compute units per second to allow (for rate limiting). |
HTTP Client Settings (`http_retry_config`)
These settings control the retry behavior of the internal HTTP client, which is used for sending webhook notifications.
Default Configuration:
http_retry_config:
max_retries: 3
initial_backoff_ms: 250
max_backoff_secs: 10
base_for_backoff: 2
jitter: full
Parameter | Description |
---|---|
max_retries | The maximum number of retries for a failing HTTP request. |
initial_backoff_ms | The initial backoff delay in milliseconds for HTTP retries. |
max_backoff_secs | The maximum backoff delay in seconds for HTTP retries. |
base_for_backoff | The base for the exponential backoff calculation. |
jitter | The jitter to apply to the backoff (none or full ). |
Rhai Script Engine Settings (`rhai`)
These settings provide guardrails for the Rhai scripts to prevent long-running or resource-intensive scripts from impacting the application's performance.
Default Configuration:
rhai:
max_operations: 100000
max_call_levels: 10
max_string_size: 8192
max_array_size: 1000
execution_timeout: 5000
Parameter | Description |
---|---|
max_operations | Maximum number of operations a script can perform. |
max_call_levels | Maximum function call nesting depth in a script. |
max_string_size | Maximum size of strings in characters. |
max_array_size | Maximum number of array elements. |
execution_timeout | Maximum execution time per script in milliseconds. |
Monitor Configuration (monitors.yaml)
Monitors are the core of Argus, defining what events to watch for on the blockchain and what actions to take when those events occur. This document explains how to configure your `monitors.yaml` file.
Basic Structure
A `monitors.yaml` file contains a list of monitor definitions. Each monitor has a `name`, `network`, `filter_script`, and a list of `actions`.
monitors:
- name: "Large ETH Transfers"
network: "ethereum"
filter_script: |
tx.value > ether(10)
actions:
- "Telegram Large ETH Transfers"
Monitor Fields
- `name` (string, required): A unique, human-readable name for the monitor.
- `network` (string, required): The blockchain network this monitor should observe (e.g., "ethereum", "sepolia", "arbitrum"). This must correspond to a network configured in your `app.yaml`.
- `address` (string, optional): The contract address to monitor. If omitted, the monitor will process all transactions on the specified `network` (useful for native token transfers). Set to `"all"` to create a global log monitor that processes all logs on the network (requires an `abi`). See Example 1: Basic ETH Transfer Monitor for an example without an address, and Example 4: All ERC20 Transfers for a Wallet for an example of global log monitoring.
- `abi` (string, optional): The name of the ABI (Application Binary Interface) to use for decoding contract events. This name should correspond to a `.json` file (without the `.json` extension) located in the `abis/` directory (or the directory configured for ABIs in `app.yaml`). Required if `filter_script` accesses `log` data. See ABI Management for more details and Example 2: Large USDC Transfer Monitor for an example.
- `actions` (list of strings, required): A list of names of actions (defined in `actions.yaml`) that should be triggered when this monitor's `filter_script` returns `true`.
Monitor Validation
Argus performs several validation checks on your monitor configurations at startup to ensure they are correctly defined and can operate as expected. If any validation fails, the application will not start and will report a detailed error.
Here are the key validation rules:
- Network Mismatch: The `network` specified in a monitor must exactly match the `network_id` configured in your `app.yaml`.
- Unknown Action: Every action name listed in a monitor's `actions` field must correspond to a `name` defined in your `actions.yaml` file.
- Invalid Address: If an `address` is provided, it must be a valid hexadecimal Ethereum address (e.g., `0x...`) or the special string `"all"` for global log monitoring.
- Script Compilation: The `filter_script` must be valid Rhai code that compiles without errors.
- Script Static Analysis: Argus analyzes your Rhai script to prevent common logical errors before they can cause issues at runtime. This includes:
  - Log Access without ABI: If your script accesses the `log` variable (e.g., `log.name`), the monitor must have an `abi` field defined.
  - ABI Requirement: If an `abi` is specified, the corresponding ABI file must exist in the configured `abi_config_path` and be a valid JSON ABI.
  - Return Type: The script must evaluate to a boolean (`true` or `false`). Argus will reject scripts that return other types.
  - Invalid Field Access: The script is checked for invalid field access (e.g., `tx.foobar`, `log.params.nonexistent`). The validator uses the provided ABI to ensure that `log.params` access is valid.
Using the `dry-run` CLI command is highly recommended to test your monitor configurations and scripts against historical data, which can help catch validation issues early.
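For example, a typical invocation replaying a short historical range looks like this (block numbers illustrative; see the dry-run CLI documentation for full details):
cargo run --release -- dry-run --from 18000000 --to 18000100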
Examples
For more detailed examples of monitor configurations, refer to the Example Gallery.
actions.yaml Configuration
The `actions.yaml` file defines how and where you receive alerts when a monitor's conditions are met. You can configure multiple actions for different services and purposes.
Each action has a unique `name`, a type (e.g., `webhook`, `slack`), and a set of configuration options specific to that type.
IMPORTANT: Managing Secrets
Action configurations, especially for services like Slack, Discord, or Telegram, often require secret URLs or API tokens.
Never commit secrets to version control.
This project is set up to help you. The `configs/actions.yaml` file is already listed in the project's `.gitignore` file. This is a deliberate security measure to prevent you from accidentally committing sensitive information. When you copy `actions.example.yaml` to `actions.yaml`, your file with secrets will be ignored by Git automatically.
Common Action Structure
actions:
- name: "unique-action-name"
# Action type and its specific configuration
webhook:
url: "..."
# ...
# Optional policy to control notification frequency
policy:
# ...
Action Types
Webhook
The `webhook` action sends a generic HTTP POST request to a specified URL. This is the most flexible action and can be used to integrate with a wide variety of services.
- name: "my-generic-webhook"
webhook:
# The URL of your webhook endpoint.
url: "https://my-service.com/webhook-endpoint"
# (Optional) The HTTP method to use. Defaults to "POST".
method: "POST"
# (Optional) A secret key to sign the request payload with HMAC-SHA256.
# Included in the `X-Signature` header.
secret: "your-super-secret-webhook-secret"
# (Optional) Custom headers to include in the request.
headers:
Authorization: "Bearer your-auth-token"
# The message to send. Both `title` and `body` support templating.
message:
title: "New Alert: {{ monitor_name }}"
body: "A match was detected on block {{ block_number }} for tx: {{ transaction_hash }}"
Slack
Sends a message to a Slack channel via an Incoming Webhook.
- name: "slack-notifications"
slack:
# Your Slack Incoming Webhook URL.
slack_url: "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"
message:
title: "Large USDC Transfer Detected"
body: |
A transfer of over 1,000,000 USDC was detected.
<https://etherscan.io/tx/{{ transaction_hash }}|View on Etherscan>
Discord
Sends a message to a Discord channel via a webhook.
- name: "discord-alerts"
discord:
# Your Discord Webhook URL.
discord_url: "https://discord.com/api/webhooks/0000/XXXXXXXX"
message:
title: "WETH Deposit Event"
body: "A new WETH deposit was detected for tx `{{ transaction_hash }}`."
Telegram
Sends a message to a Telegram chat via a bot.
- name: "telegram-updates"
telegram:
# Your Telegram Bot Token.
token: "0000:XXXXXXXX"
# The ID of the chat to send the message to.
chat_id: "-1000000000"
message:
title: "Large Native Token Transfer"
body: |
A transfer of over 10 ETH was detected.
[View on Etherscan](https://etherscan.io/tx/{{ transaction_hash }})
Stdout
Prints the notification to standard output (the console). This is primarily useful for local development, testing, and debugging.
If a `message` template is provided, it will be rendered and printed. If `message` is omitted, the full, raw `MonitorMatch` JSON payload will be printed.
- name: "stdout-for-debugging"
stdout:
# Message is optional for stdout. If omitted, the full event JSON payload is printed.
message:
title: "Debug Event: {{ monitor_name }}"
body: "tx hash: {{ transaction_hash }}"
Kafka
The `kafka` action sends the full `MonitorMatch` JSON payload to a specified Apache Kafka topic.
- name: "kafka-action"
kafka:
# A comma-separated list of Kafka broker addresses.
brokers: "127.0.0.1:9092"
# The Kafka topic to publish messages to.
topic: "argus-alerts"
# (Optional) Security configuration for connecting to Kafka.
security:
protocol: "SASL_SSL"
sasl_mechanism: "PLAIN"
sasl_username: "${KAFKA_USERNAME}"
sasl_password: "${KAFKA_PASSWORD}"
ssl_ca_location: "/path/to/ca.crt"
# (Optional) Producer-specific configuration properties.
producer:
message_timeout_ms: 5000
compression_codec: "snappy"
acks: "all"
Configuration Details:
- `brokers` (string, required): A comma-separated list of Kafka broker addresses (e.g., `"broker1:9092,broker2:9092"`).
- `topic` (string, required): The Kafka topic to publish messages to.
- `security` (object, optional): Configuration for connecting to a secure Kafka cluster.
  - `protocol` (string): The security protocol to use. Common values are `PLAINTEXT`, `SSL`, `SASL_PLAINTEXT`, `SASL_SSL`. Defaults to `PLAINTEXT`.
  - `sasl_mechanism` (string, optional): The SASL mechanism for authentication (e.g., `PLAIN`, `SCRAM-SHA-256`).
  - `sasl_username` (string, optional): The username for SASL authentication. Supports environment variable expansion (e.g., `${KAFKA_USERNAME}`).
  - `sasl_password` (string, optional): The password for SASL authentication. Supports environment variable expansion.
  - `ssl_ca_location` (string, optional): Path to the CA certificate file for verifying the broker's certificate.
- `producer` (object, optional): Advanced configuration for the Kafka producer.
  - `message_timeout_ms` (integer): The maximum time in milliseconds to wait for a message to be sent. Defaults to `5000`.
  - `compression_codec` (string): The compression codec to use. Common values are `none`, `gzip`, `snappy`, `lz4`, `zstd`. Defaults to `none`.
  - `acks` (string): The number of acknowledgments required before a request is considered complete. Can be `0`, `1`, or `all`. Defaults to `all` for maximum durability.
RabbitMQ
The `rabbitmq` action sends the full `MonitorMatch` JSON payload to a RabbitMQ exchange.
- name: "rabbitmq-action"
rabbitmq:
# The RabbitMQ connection URI.
uri: "amqp://guest:guest@127.0.0.1:5672/%2f"
# The name of the exchange to publish messages to.
exchange: "argus-alerts-exchange"
# The type of the exchange (e.g., "topic", "direct", "fanout").
exchange_type: "topic"
# The routing key to use for the message.
routing_key: "large.eth.transfers"
NATS
The `nats` action sends the full `MonitorMatch` JSON payload to a NATS subject.
- name: "nats-action"
nats:
# The NATS connection URL(s), comma-separated.
urls: "nats://127.0.0.1:4222"
# The subject to publish messages to.
subject: "argus.alerts"
# (Optional) Credentials for connecting to NATS.
credentials:
# (Optional) A token for authentication.
token: "${NATS_TOKEN}"
# (Optional) Path to a credentials file (.creds).
file: "/path/to/user.creds"
Notification Policies
Policies allow you to control the rate and structure of your notifications, helping to reduce noise and provide more meaningful alerts.
Throttle Policy
The `throttle` policy limits the number of notifications sent within a specified time window. This is useful for high-frequency events.
- name: "discord-with-throttling"
discord:
# ... discord config
policy:
throttle:
# Max notifications to send within the time window.
max_count: 5
# The duration of the time window in seconds.
time_window_secs: 60 # 5 notifications per minute
Aggregation Policy
The `aggregation` policy collects all matches that occur within a time window and sends a single, consolidated notification. This is ideal for summarizing events. When using an aggregation policy, you can leverage custom filters like `map`, `sum`, and `avg` on the `matches` array to perform calculations.
- name: "slack-with-aggregation"
slack:
# ... slack config
policy:
aggregation:
# The duration of the aggregation window in seconds.
window_secs: 300 # 5 minutes
# The template for the aggregated notification.
# This template has access to a `matches` array, which contains all the
# `MonitorMatch` objects collected during the window.
template:
title: "Event Summary for {{ monitor_name }}"
body: |
Detected {{ matches | length }} events by monitor {{ monitor_name }}.
Total value: {{ matches | map(attribute='log.params.value') | sum | wbtc }} WBTC
Average value: {{ matches | map(attribute='log.params.value') | avg | wbtc }} WBTC
In this example:
- `matches | length` counts the number of aggregated events.
- `matches | map(attribute='log.params.value')` extracts the `log.params.value` (which is a `BigInt` string) from each match in the `matches` array.
- `sum` calculates the total of the extracted values.
- `avg` calculates the average of the extracted values.
- `wbtc` is a critical custom filter that converts the `BigInt` value into a decimal representation (e.g., WBTC units). Without this conversion, the results would be displayed as raw `BigInt` strings, leading to incorrect or unreadable output.
Templating
Action messages support Jinja2 templating, allowing for dynamic content based on the detected blockchain events. For a comprehensive guide on available data, conversion filters, and examples, refer to the Action Templating documentation.
ABI Management
To decode event logs from smart contracts, Argus needs access to the contract's Application Binary Interface (ABI). This section explains how to provide and manage ABIs for your monitors.
What is an ABI?
An ABI (Application Binary Interface) is a JSON file that describes a smart contract's public interface, including its functions and events. Argus uses the ABI to parse the raw log data emitted by a contract into a human-readable format that you can access in your Rhai scripts (e.g., `log.name`, `log.params`).
You only need to provide an ABI if your monitor needs to inspect event logs (i.e., if your `filter_script` accesses the `log` variable).
How Argus Finds ABIs
- ABI Directory: In your `app.yaml`, you specify the path to your ABI directory using the `abi_config_path` parameter (default is `abis/`).
- JSON Files: Argus expects to find ABI files in this directory with a `.json` extension.
- Naming Convention: When you define a monitor in `monitors.yaml` that needs an ABI, you set the `abi` field to the name of the JSON file without the extension.
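Putting these rules together, a typical layout looks like this (file names illustrative):
abis/
├── usdc.json   # referenced from monitors.yaml as abi: "usdc"
└── weth.json   # referenced from monitors.yaml as abi: "weth"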
Example Workflow
Let's say you want to monitor `Transfer` events from the USDC contract.
- Get the ABI: First, you need the ABI for the USDC contract. You can usually get this from the contract's page on a block explorer like Etherscan. Save it as a JSON file.
  For proxy contracts (like USDC), you will need the ABI of the implementation contract, not the proxy itself.
- Save the ABI File: Save the ABI file into your configured ABI directory. For this example, you would save it as `abis/usdc.json`.
- Reference in Monitor: In your `monitors.yaml`, reference the ABI by its filename (without the `.json` extension).
  # monitors.yaml
  monitors:
    - name: "Large USDC Transfers"
      network: "ethereum"
      address: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"
      # This tells Argus to load `abis/usdc.json` to decode logs for this monitor.
      abi: "usdc"
      filter_script: |
        log.name == "Transfer" && log.params.value > usdc(1_000_000)
      actions:
        - "my-webhook"
When Argus starts, it will load all the `.json` files from the `abi_config_path` directory. When the "Large USDC Transfers" monitor runs, it will use the pre-loaded `usdc` ABI to decode any event logs from the specified contract address.
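Because only event definitions are needed for log decoding, a pruned ABI is usually sufficient. A minimal `abis/usdc.json` containing just the `Transfer` event might look like the following sketch (the full ABI from the explorer also works):
[
  {
    "type": "event",
    "name": "Transfer",
    "anonymous": false,
    "inputs": [
      { "name": "from", "type": "address", "indexed": true },
      { "name": "to", "type": "address", "indexed": true },
      { "name": "value", "type": "uint256", "indexed": false }
    ]
  }
]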
Action Templating
Argus allows you to customize the content of your notifications using templating. This enables dynamic messages that include details from the matched blockchain events.
How Templating Works
When a monitor's `filter_script` matches an event, the relevant data (e.g., transaction details, log data) is passed to the configured actions. Actions that support templating can then use this data to render a custom message.
Templates are typically defined within your `actions.yaml` file as part of the action's `message` configuration. The exact syntax and available variables will depend on the specific action type and the templating engine it uses (e.g., Handlebars-like syntax for basic fields, or more advanced templating for `body` fields).
Available Data for Templating
All notification templates have access to the following top-level fields from the `MonitorMatch` object:
- `monitor_id`: The unique ID of the monitor that triggered the match (integer).
- `monitor_name`: The human-readable name of the monitor (string).
- `action_name`: The name of the action handling this match (string).
- `block_number`: The block number where the match occurred (integer).
- `transaction_hash`: The hash of the transaction associated with the match (string).
Additionally, the following structured objects are always available:
- `tx`: An object containing details about the transaction. This will be `null` if the match is log-based.
  - `tx.from`: The sender address of the transaction (string).
  - `tx.to`: The recipient address of the transaction (string, can be null for contract creations).
  - `tx.hash`: The transaction hash (string).
  - `tx.value`: The value transferred in the transaction (string, as a large number).
  - `tx.gas_limit`: The gas limit for the transaction (integer).
  - `tx.nonce`: The transaction nonce (integer).
  - `tx.input`: The transaction input data (string).
  - `tx.block_number`: The block number where the transaction was included (integer).
  - `tx.transaction_index`: The index of the transaction within its block (integer).
  - `tx.gas_price`: (Legacy transactions) The gas price (string, as a large number).
  - `tx.max_fee_per_gas`: (EIP-1559 transactions) The maximum fee per gas (string, as a large number).
  - `tx.max_priority_fee_per_gas`: (EIP-1559 transactions) The maximum priority fee per gas (string, as a large number).
  - `tx.gas_used`: (From receipt) The gas used by the transaction (string, as a large number).
  - `tx.status`: (From receipt) The transaction status (integer, 1 for success, 0 for failure).
  - `tx.effective_gas_price`: (From receipt) The effective gas price (string, as a large number).
- `log`: An object containing details about the log/event. This will be `null` if the match is transaction-based.
  - `log.address`: The address of the contract that emitted the log (string).
  - `log.log_index`: The index of the log within its block (integer).
  - `log.name`: The name of the decoded event (string, e.g., "Transfer").
  - `log.params`: A map of the event's decoded parameters (e.g., `log.params.from`, `log.params.to`, `log.params.value`).
Data Types, Conversions, and Custom Filters in Templates
When displaying blockchain data in Jinja2 templates, especially large numerical values, it's essential to use the provided custom filters for proper formatting and mathematical operations.
Handling Large Numbers (BigInts)
Similar to Rhai scripts, large numerical values from the EVM (e.g., `tx.value`, `log.params.value`) are passed to templates as `BigInt` strings. While these raw string values will be displayed if no conversion is applied, it is highly recommended to use conversion filters for a more user-friendly and accurate representation, especially when performing calculations.
Conversion and Utility Filters
Argus provides several custom Jinja2 filters to help you work with blockchain data:
- `ether`: Converts a `BigInt` string (representing Wei) into its equivalent decimal value in Ether (18 decimal places).
  {{ tx.value | ether }} ETH
  Example: If `tx.value` is `"1500000000000000000"`, the expression `{{ tx.value | ether }}` renders `1.5`, and the full template would output `1.5 ETH`.
- `gwei`: Converts a `BigInt` string (representing Wei) into its equivalent decimal value in Gwei (9 decimal places).
  {{ tx.gas_price | gwei }} Gwei
  Example: If `tx.gas_price` is `"20000000000"`, `{{ tx.gas_price | gwei }}` renders `20.0`, and the full template would output `20.0 Gwei`.
- `usdc`: Converts a `BigInt` string into its equivalent decimal value for USDC (6 decimal places).
  {{ log.params.value | usdc }} USDC
  Example: If `log.params.value` is `"50000000"`, `{{ log.params.value | usdc }}` renders `50.0`, and the full template would output `50.0 USDC`.
- `wbtc`: Converts a `BigInt` string into its equivalent decimal value for WBTC (8 decimal places).
  {{ log.params.value | wbtc }} WBTC
  Example: If `log.params.value` is `"100000000"`, `{{ log.params.value | wbtc }}` renders `1.0`, and the full template would output `1.0 WBTC`.
- `decimals(num_decimals)`: A generic filter to convert a `BigInt` string into a decimal value with a specified number of decimal places.
  {{ log.params.tokenAmount | decimals(18) }}
  Example: If `log.params.tokenAmount` is `"123450000000000000000"`, `{{ log.params.tokenAmount | decimals(18) }}` renders `123.45`.
- `map(attribute)`: Extracts a specific `attribute` from each item in an array. This is particularly useful with aggregation policies to get a list of values for `sum` or `avg`.
  {{ matches | map(attribute='log.params.value') }}
- `sum`: Calculates the sum of a list of numerical values. Often used in conjunction with `map` and a conversion filter.
  {{ matches | map(attribute='log.params.value') | sum | wbtc }} WBTC
- `avg`: Calculates the average of a list of numerical values. Often used in conjunction with `map` and a conversion filter.
  {{ matches | map(attribute='log.params.value') | avg | wbtc }} WBTC
Important: Always apply the appropriate conversion filter (e.g., `ether`, `usdc`, `wbtc`, `decimals`) to `BigInt` strings when performing mathematical operations like `sum` or `avg` to ensure accurate results. For example, `{{ matches | map(attribute='log.params.value') | sum | wbtc }}` correctly sums the values and then formats them as WBTC.
Example: Generic Webhook Action
actions:
- name: "my-generic-webhook"
webhook:
url: "https://my-service.com/webhook-endpoint"
method: "POST"
message:
title: "New Transaction Alert: {{ monitor_name }}"
body: |
A new match was detected for monitor {{ monitor_name }}.
- **Block Number**: {{ block_number }}
- **Transaction Hash**: {{ transaction_hash }}
- **Contract Address**: {{ log.address }}
- **Log Index**: {{ log.log_index }}
- **Log Name**: {{ log.name }}
- **Log Params**: {{ log.params }}
In this example, `{{ monitor_name }}`, `{{ block_number }}`, `{{ transaction_hash }}`, `{{ log.address }}`, `{{ log.log_index }}`, `{{ log.name }}`, and `{{ log.params }}` are placeholders that will be replaced with the actual data at the time of notification.
Example: Slack Notification with Aggregation Policy
When using an `aggregation` policy, the template has access to a `matches` array, which contains all the `MonitorMatch` objects collected during the aggregation window.
actions:
- name: "slack-with-aggregation"
slack:
slack_url: "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
message:
title: "Aggregated Event Summary"
body: "This is a summary of events."
policy:
aggregation:
window_secs: 300 # 5 minutes
template:
title: "Event Summary for {{ monitor_name }}"
body: |
In the last 5 minutes there have been {{ matches | length }} new events
{% for match in matches %}
- Tx: {{ match.transaction_hash }} (Block: {{ match.block_number }})
{% endfor %}
Here, `{{ matches | length }}` is used to get the count of aggregated matches, and a `{% for match in matches %}` loop iterates through each individual match to display its transaction hash and block number.
Refer to the documentation for specific action types to understand their full templating capabilities and the exact syntax to use.
For more practical examples of action templating, including aggregation policies, refer to the Example Gallery.
Writing Rhai Scripts
Argus uses the Rhai scripting language for its filtering logic. Each monitor you define has a `filter_script` that determines whether a given transaction or log should trigger a notification.
The filter_script
The `filter_script` is a short piece of Rhai code that must evaluate to a boolean (`true` or `false`).
- If the script evaluates to `true`, the monitor is considered a "match," and a notification is sent to its configured actions.
- If the script evaluates to `false`, no action is taken.
Example: Simple ETH Transfer
This script triggers if a transaction's value is greater than 10 ETH.
tx.value > ether(10)
Example: Specific ERC20 Transfer Event
This script triggers if a log is a `Transfer` event and the `value` parameter of that event is greater than 1,000,000 USDC.
log.name == "Transfer" && log.params.value > usdc(1_000_000)
Data Types, Conversions, and Custom Filters in Rhai
When working with blockchain data in Rhai scripts, it's crucial to understand how data types are handled, especially for large numerical values, and how to use conversion functions for accurate comparisons and calculations.
Handling Large Numbers (BigInts)
EVM-compatible blockchains often deal with very large numbers (e.g., token amounts in Wei, Gwei, or other base units) that exceed the capacity of standard 64-bit integers. In Argus's Rhai environment, these large numbers are typically represented as `BigInt` strings.
To perform mathematical comparisons or operations against these `BigInt` values, you must use the provided conversion helper functions, which scale a human-readable decimal amount into the equivalent `BigInt` representation so the two sides are directly comparable.
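To illustrate why this matters, compare the two forms below (a sketch using the `ether` helper described next):
// Incorrect: compares the wei-denominated value against the plain integer 10,
// which would match nearly every non-zero transfer.
// tx.value > 10

// Correct: scale the human-readable threshold to wei first.
tx.value > ether(10)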
Conversion Helper Functions
Argus provides several built-in helper functions to facilitate these conversions:
- `ether(amount)`: Converts a decimal `amount` (e.g., `10.5`) into its equivalent `BigInt` string in Wei (18 decimal places).
  tx.value > ether(10) // Checks if tx.value (BigInt string) is greater than 10 ETH
- `gwei(amount)`: Converts a decimal `amount` into its equivalent `BigInt` string in Gwei (9 decimal places).
  tx.gas_price > gwei(50) // Checks if gas_price (BigInt string) is greater than 50 Gwei
- `usdc(amount)`: Converts a decimal `amount` into its equivalent `BigInt` string for USDC (6 decimal places).
  log.params.value > usdc(1_000_000) // Checks if log.params.value (BigInt string) is greater than 1,000,000 USDC
- `wbtc(amount)`: Converts a decimal `amount` into its equivalent `BigInt` string for WBTC (8 decimal places).
  log.params.value > wbtc(0.5) // Checks if log.params.value (BigInt string) is greater than 0.5 WBTC
- `decimals(amount, num_decimals)`: A generic function to convert a decimal `amount` into a `BigInt` string with a specified number of decimal places.
  log.params.tokenAmount > decimals(100, 0) // Checks if tokenAmount (BigInt string) is greater than 100 with 0 decimals
Important: Always use these conversion functions when comparing or performing arithmetic with `tx.value`, `log.params.value`, or other `BigInt` string representations of token amounts to ensure correct logic. If you are unsure when to apply a conversion, use the `dry-run` feature to verify your script works as expected.
The Data Context
Within your script, you have access to a rich set of data about the on-chain event. The data available depends on the type of monitor you have configured.
- For all monitors, you have access to the `tx` object, which contains details about the transaction.
- For monitors with an `address` and `abi`, you also have access to the `log` object, which contains the decoded event log data.
For a detailed breakdown of the available data, see the Rhai Data Context (tx & log) reference.
Scripting Best Practices
- Keep it Simple: Your script is executed for every relevant transaction or log. Keep it as simple and efficient as possible.
- Be Specific: The more specific your filter, the fewer false positives you'll get.
- Use Helpers: Use the provided helper functions (`ether`, `usdc`, `gwei`, `decimals`) to handle token amounts. This makes your scripts more readable and less error-prone.
- Test with `dry-run`: Before deploying a new or modified monitor, use the `dry-run` CLI command to test your script against historical data. This will help you verify that it's working as expected.
For more practical examples of Rhai scripts, refer to the Example Gallery.
Rhai Data Context (tx & log)
When your `filter_script` is executed, it has access to a set of variables that contain the context of the on-chain event. This context is primarily exposed through the `tx` and `log` objects.
The tx Object (Transaction Data)
The `tx` object is available in all monitor scripts. It contains detailed information about the transaction being processed.
Field | Type | Description |
---|---|---|
hash | String | The transaction hash. |
from | String | The sender's address. |
to | String | The recipient's address (can be null for contract creation). |
value | BigInt | The amount of native currency (e.g., ETH) transferred, in wei. |
gas_limit | Integer | The gas limit for the transaction. |
nonce | Integer | The transaction nonce. |
input | String | The transaction input data (calldata). |
block_number | Integer | The block number the transaction was included in. |
transaction_index | Integer | The index of the transaction within the block. |
gas_price | BigInt | (Legacy Transactions) The gas price in wei. |
max_fee_per_gas | BigInt | (EIP-1559) The maximum fee per gas in wei. |
max_priority_fee_per_gas | BigInt | (EIP-1559) The maximum priority fee per gas in wei. |
gas_used | BigInt | (From Receipt) The actual gas used by the transaction. |
status | Integer | (From Receipt) The transaction status (1 for success, 0 for failure). |
effective_gas_price | BigInt | (From Receipt) The effective gas price paid, in wei. |
Example Usage
// Check for a transaction from a specific address with a high value
tx.from == "0x1234..." && tx.value > ether(50)
// Check for a failed transaction
tx.status == 0
The log Object (Decoded Event Log)
The `log` object is only available for monitors that have an `address` and an `abi` defined. It contains the decoded data from a specific event log emitted by that contract.
Field | Type | Description |
---|---|---|
address | String | The address of the contract that emitted the log. |
log_index | Integer | The index of the log within the block. |
name | String | The name of the decoded event (e.g., "Transfer", "Approval"). |
params | Map | A map containing the event's parameters, accessed by name. |
The log.params Map
The `params` field is the most important part of the `log` object. It allows you to access the event's parameters by their names as defined in the ABI.
For an ERC20 `Transfer` event with the signature `Transfer(address indexed from, address indexed to, uint256 value)`, the `log.params` map would contain:
- `log.params.from` (String)
- `log.params.to` (String)
- `log.params.value` (BigInt)
Example Usage
// Check for a specific event
log.name == "Transfer"
// Check for an event with specific parameter values
log.name == "Transfer" && log.params.from == "0xabcd..."
// Check for a large transfer value
log.name == "Transfer" && log.params.value > usdc(1_000_000)
For more practical examples of using `tx` and `log` data in Rhai scripts, refer to the Example Gallery.
The decoded_call Object (Decoded Calldata)
The `decoded_call` object is available for monitors that have an `address` and an `abi` defined and whose `filter_script` accesses the `decoded_call` field. It contains the decoded data from the transaction's input data (calldata).
If calldata cannot be decoded (e.g., the function selector is unknown), `decoded_call` will be `()`, which is Rhai's `null` equivalent.
Field | Type | Description |
---|---|---|
name | String | The name of the decoded function (e.g., "transfer", "approve"). |
params | Map | A map containing the function's parameters, accessed by name. |
The decoded_call.params Map
Similar to `log.params`, this field allows you to access the function's parameters by their names as defined in the ABI.
For a `transfer` function with the signature `transfer(address to, uint256 amount)`, the `decoded_call.params` map would contain:
- `decoded_call.params.to` (String)
- `decoded_call.params.amount` (BigInt)
Example Usage
// Check for a call to a specific function, even if decoded_call might be null.
decoded_call.name == "transfer"
// Safely check a nested parameter.
decoded_call.params.amount > ether(100)
// Check if calldata decoding failed.
decoded_call == ()
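If a script might run on transactions whose calldata cannot be decoded, a defensive pattern is to guard before dereferencing (a sketch; the threshold is illustrative):
// Short-circuit: only touch decoded_call fields once decoding is known to have succeeded.
decoded_call != ()
    && decoded_call.name == "transfer"
    && decoded_call.params.amount > ether(100)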
Rhai Helper Functions
To make writing filter scripts easier and less error-prone, Argus provides a set of built-in helper functions. These functions are primarily designed to simplify the handling of large numbers and different token denominations.
All numeric values from the EVM, such as transaction values and token amounts, are represented as `BigInt` types in Rhai to avoid precision loss. These helpers allow you to work with them in a more natural way.
Denomination Helpers
These functions convert a human-readable number into its wei-equivalent BigInt
based on the token's decimal places.
ether(value)
Converts a number into a `BigInt` with 18 decimal places. Useful for ETH and other 18-decimal tokens (e.g., WETH).
- `value`: An integer or float.
Example: `ether(10)` is equivalent to `bigint("10000000000000000000")`.
// Check if more than 10 ETH was transferred
tx.value > ether(10)
gwei(value)
Converts a number into a `BigInt` with 9 decimal places.
- `value`: An integer or float.
Example: `gwei(20)` is equivalent to `bigint("20000000000")`.
// Check if the gas price is over 20 gwei
tx.gas_price > gwei(20)
usdc(value)
Converts a number into a `BigInt` with 6 decimal places. Specifically for USDC and other 6-decimal stablecoins.
- `value`: An integer or float.
Example: `usdc(1_000_000)` is equivalent to `bigint("1000000000000")`.
// Check for a USDC transfer of over 1,000,000
log.params.value > usdc(1_000_000)
Generic Decimal Helper
decimals(value, places)
This is a generic version of the denomination helpers that allows you to specify the number of decimal places. This is useful for working with any ERC20 token.
- `value`: An integer or float.
- `places`: The number of decimal places for the token (integer).
Example: If you are monitoring a token with 8 decimal places, you can use `decimals` to correctly scale the value.
// For a token with 8 decimal places, check for a transfer of over 5,000
log.params.value > decimals(5000, 8)
BigInt Helper
bigint(value)
Converts a string into a `BigInt`. This is useful for representing very large numbers that might not fit in a standard integer type.
- `value`: A string representing a large integer.
Example:
// A very large number
let threshold = bigint("5000000000000000000000");
tx.value > threshold
For more practical examples of using Rhai helper functions, refer to the Example Gallery.
Example Gallery
This section contains a gallery of complete, working examples that you can use as a starting point for your own monitors. Each example includes all necessary configuration files and a detailed explanation in its `README.md`.
The source for all examples can be found in the `/examples` directory of the repository.
1. Basic ETH Transfer Monitor
Monitors for native ETH transfers greater than a specific value. A great starting point for understanding transaction-based filtering.
Features Demonstrated: `tx.value`, `ether()` helper, basic action.
2. Large USDC Transfer Monitor
Monitors for `Transfer` events from a specific ERC20 contract (USDC) above a certain amount. Introduces event-based filtering.
Features Demonstrated: `log.name`, `log.params`, `address` and `abi` fields, `usdc()` helper.
3. WETH Deposit Monitor
Monitors for `Deposit` events from the WETH contract, combining event and transaction data in the filter.
Features Demonstrated: Combining `log.*` and `tx.*` variables.
4. All ERC20 Transfers for a Wallet
Demonstrates a powerful global log monitor (`address: 'all'`) to catch all `Transfer` events involving a specific wallet, regardless of the token.
Features Demonstrated: Global log monitoring.
5. Action with Throttling Policy
Shows how to configure an action with a `throttle` policy to limit the rate of notifications and prevent alert fatigue.
Features Demonstrated: Action policies.
6. Action with Aggregation Policy
Demonstrates how to use an `aggregation` policy for actions, as well as the `sum` and `avg` filters in templates, to aggregate values from multiple monitor matches.
Features Demonstrated: Aggregation policy; `map`, `sum`, `avg` filters.
7. Address Watchlist Monitor
Shows how to use a Rhai array as a watchlist to get notifications for any transaction involving a specific set of addresses.
Features Demonstrated: Rhai arrays, `let` variables, `in` operator.
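A filter in this style might look like the following sketch (placeholder addresses):
// Hypothetical watchlist; replace with the addresses you care about.
let watchlist = [
    "0x1111111111111111111111111111111111111111",
    "0x2222222222222222222222222222222222222222"
];
tx.from in watchlist || tx.to in watchlist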
8. High Priority Fee
Demonstrates how to monitor for transactions with unusually high priority fees, which can be an indicator of MEV (Maximal Extractable Value) activity, front-running, or other urgent on-chain actions.
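One way to express this as a filter (a sketch; the 100 gwei threshold is illustrative, and `max_priority_fee_per_gas` is only present on EIP-1559 transactions):
tx.max_priority_fee_per_gas > gwei(100)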
9. Admin Function Call Monitor
Demonstrates how to monitor for calls to a specific function on a contract using the `decode_calldata` feature. This is the recommended approach for monitoring critical or administrative functions.
Features Demonstrated: `decode_calldata`.
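A filter in this style might look like the following sketch (the `setOwner` function name is hypothetical; use a function from your contract's ABI):
decoded_call != () && decoded_call.name == "setOwner"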
10. Action with Kafka Publisher
Demonstrates how to configure a `kafka` action to send notifications to a Kafka topic, ideal for integrating with data streaming platforms.
Features Demonstrated: `kafka` action.
11. Action with RabbitMQ Publisher
Shows how to set up a `rabbitmq` action to publish notifications to a RabbitMQ exchange for integration with message queue-based systems.
Features Demonstrated: `rabbitmq` action.
12. Action with NATS Publisher
Demonstrates how to configure a `nats` action to send notifications to a NATS subject for real-time, cloud-native messaging.
Features Demonstrated: `nats` action.
Deployment
This guide covers deploying Argus using Docker, which is the recommended method for production and development environments.
Docker Deployment
The provided `Dockerfile` and `docker-compose.yml` are designed to make deployment straightforward and portable.
Building the Docker Image
The repository includes a multi-stage `Dockerfile` that uses `cargo-chef` to optimize build times by caching dependencies.
To build the image manually, run the following command from the project root:
docker build -t argus-rs .
The GitHub repository is also configured with a GitHub Action to automatically build and publish a multi-platform (`linux/amd64`, `linux/arm64`) image to the GitHub Container Registry (GHCR) on every push to the `main` branch.
Running with Docker Compose (Recommended)
The `docker-compose.yml` file is the easiest way to run the application.
Setup
- Create a `.env` file: Copy the `.env.example` to `.env` and fill in your action secrets (API tokens, webhook URLs, etc.):
  cp .env.example .env
- Create a `data` directory: This directory will be mounted into the container to persist the SQLite database:
  mkdir -p data
- Configure `monitors.yaml`, `actions.yaml`, and `app.yaml`: Edit the files in the `configs/` (or other specified) directory to define your monitors and actions.
Commands
- Start the service:
docker compose up -d
- View logs:
docker compose logs -f
- Stop the service:
docker compose down
Running with docker run (Manual)
If you prefer not to use Docker Compose, you can run the application using a `docker run` command. This is more verbose but offers the same functionality.
docker run --rm -d \
--name argus_app \
--env-file .env \
-v "$(pwd)/configs:/app/configs:ro" \
-v "$(pwd)/abis:/app/abis:ro" \
-v "$(pwd)/data:/app/data" \
ghcr.io/isserge/argus-rs:latest run --config-dir /app/configs
Explanation of flags:
- `--rm`: Automatically remove the container when it exits.
- `-d`: Run in detached mode (in the background).
- `--name argus_app`: Assign a name to the container.
- `--env-file .env`: Load environment variables from the `.env` file.
- `-v "$(pwd)/...:/app/..."`: Mount local directories for configuration and data persistence. The `:ro` flag makes the `configs` and `abis` directories read-only inside the container.
- `ghcr.io/isserge/argus-rs:latest`: The Docker image to use.
- `run --config-dir /app/configs`: The command to execute inside the container.
Command-Line Interface (CLI)
Argus is primarily a long-running service, but it also provides a command-line interface for common operations, such as running the main service, testing monitors, and managing the database.
Main Commands
You can see the available commands by running `cargo run -- --help`.
Usage: argus <COMMAND>
Commands:
run Starts the main monitoring service
dry-run Runs a dry run of the monitors against a range of historical blocks
help Print this message or the help of the given subcommand(s)
run
This is the main command to start the Argus monitoring service.
cargo run --release -- run
This command will:
- Load the configuration from the `configs/` directory.
- Connect to the database and apply any pending migrations.
- Connect to the configured RPC endpoints.
- Start polling for new blocks and processing them against your monitors.
Options:
- `--config-dir <PATH>`: Specifies a custom directory to load configuration files from.
  cargo run --release -- run --config-dir /path/to/my/configs
dry-run
The `dry-run` command is an essential tool for testing and validating your monitor configurations and Rhai filter scripts against historical blockchain data. It allows you to simulate the monitoring process over a specified range of blocks without affecting the live service or making persistent database changes.
How it Works:
- One-Shot Execution: The command initializes all necessary application services (data source, block processor, filtering engine, etc.) in a temporary, one-shot mode.
- In-Memory Database: It uses a temporary, in-memory SQLite database for state management, ensuring that no persistent changes are made to your actual database.
- Block Processing: It fetches and processes blocks in batches (defaulting to 50 blocks per batch) within the specified `--from` and `--to` range.
- Script Evaluation: For each transaction and log in the processed blocks, it evaluates your monitor's `filter_script`.
- Real Notifications (Test Mode): Any matches found will trigger real notifications to your configured actions. During development, it's highly recommended to configure your actions to point to test endpoints (e.g., Webhook.site) to avoid sending unwanted alerts.
- Summary Report: After processing the entire block range, the command prints a human-readable summary report of all detected matches to standard output. This provides a clear overview of the results, including total blocks processed, total matches, and breakdowns by monitor and action.
Usage:
cargo run --release -- dry-run --from <START_BLOCK> --to <END_BLOCK> [--config-dir <PATH>]
Arguments:
- `--from <BLOCK>`: The starting block number for the dry run (inclusive).
- `--to <BLOCK>`: The ending block number for the dry run (inclusive).
Options:
- `--config-dir <PATH>`: (Optional) Specifies a custom directory to load configuration files from. Defaults to `configs/`.
Example:
To test your monitors against blocks 15,000,000 to 15,000,100 on the network defined in your `app.yaml`:
cargo run --release -- dry-run --from 15000000 --to 15000100
This will produce a report similar to the following:
Dry Run Report
==============
Summary
-------
- Blocks Processed: 15000000 to 15000100 (101 blocks)
- Total Matches Found: 27
Matches by Monitor
------------------
- "Large USDC Transfers": 15
- "Admin Function Calls": 12
Notifications Dispatched
------------------------
- "slack-critical": 15
- "stdout-verbose": 12
For a practical example of using `dry-run` to test a monitor, refer to the Basic ETH Transfer Monitor example.
Database Migrations
Database migrations are handled by `sqlx-cli`. This is not a direct subcommand of `argus` but is a critical part of the operational workflow.
Before running the application for the first time, or after any update that includes database changes, you must run the migrations:
sqlx migrate run
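sqlx-cli resolves the target database from the `DATABASE_URL` environment variable (or a `.env` file). A typical invocation, assuming the example database path from `app.yaml`:
export DATABASE_URL=sqlite:data/monitor.db
sqlx database create   # creates the SQLite file if it does not already exist
sqlx migrate run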
REST API
Argus includes a built-in REST API server for system introspection. This API provides a way to observe the state of the running application, such as its health and the configuration of its active monitors.
Enabling the API
For security, the API server is disabled by default. To enable it, you must configure the `server` section in your `app.yaml`.
# in configs/app.yaml
server:
# Set to true to enable the API server.
enabled: true
# (Optional) The address and port for the server to listen on.
listen_address: "0.0.0.0:8080"
Once enabled, the API endpoints will be available at the specified `listen_address`.
API Endpoints
Health Check
- GET /health
  Provides a simple health check of the API server.
  Success Response (200 OK):
  { "status": "ok" }
Example Usage:
curl http://localhost:8080/health
Application Status
- GET /status
  Retrieves the current status and metrics of the application.
  Success Response (200 OK):
  {
    "version": "0.1.0",
    "network_id": "ethereum",
    "uptime_secs": 3600,
    "latest_processed_block": 18345678,
    "latest_processed_block_timestamp_secs": 1698382800
  }
Example Usage:
curl http://localhost:8080/status
List All Monitors
- GET /monitors
  Retrieves a list of all monitors currently loaded and active in the application for the configured network.
  Success Response (200 OK):
  {
    "monitors": [
      {
        "id": 1,
        "name": "Large ETH Transfers",
        "network": "ethereum",
        "address": null,
        "abi": null,
        "filter_script": "tx.value > ether(10)",
        "actions": ["my-webhook"],
        "created_at": "2023-10-27T10:00:00Z",
        "updated_at": "2023-10-27T10:00:00Z"
      },
      {
        "id": 2,
        "name": "Large USDC Transfers",
        "network": "ethereum",
        "address": "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",
        "abi": "usdc",
        "filter_script": "log.name == \"Transfer\" && log.params.value > usdc(1000000)",
        "actions": ["slack-notifications"],
        "created_at": "2023-10-27T10:00:00Z",
        "updated_at": "2023-10-27T10:00:00Z"
      }
    ]
  }
Example Usage:
curl http://localhost:8080/monitors
Get a Specific Monitor
- GET /monitors/{id}
  Retrieves the full configuration of a single monitor by its unique ID.
  URL Parameters:
  - id (integer, required): The unique ID of the monitor.
  Success Response (200 OK):
  {
    "monitor": {
      "id": 1,
      "name": "Large ETH Transfers",
      "network": "ethereum",
      "address": null,
      "abi": null,
      "filter_script": "tx.value > ether(10)",
      "actions": ["my-webhook"],
      "created_at": "2023-10-27T10:00:00Z",
      "updated_at": "2023-10-27T10:00:00Z"
    }
  }
  Error Response (404 Not Found), if no monitor with the specified ID exists:
  { "error": "Monitor not found" }
Example Usage:
curl http://localhost:8080/monitors/1
List All Actions
- GET /actions
  Retrieves a list of all actions currently loaded and active in the application.
  Success Response (200 OK):
  {
    "actions": [
      {
        "id": 1,
        "name": "my-webhook",
        "webhook": {
          "url": "https://webhook.site/your-unique-url",
          "method": "POST",
          "headers": { "Content-Type": "application/json" },
          "message": {
            "title": "Large ETH Transfer Detected",
            "body": "- **Amount**: {{ tx.value | ether }} ETH\n- **From**: `{{ tx.from }}`\n- **To**: `{{ tx.to }}`\n- **Tx Hash**: `{{ transaction_hash }}`"
          }
        }
      },
      {
        "id": 2,
        "name": "slack-notifications",
        "slack": {
          "slack_url": "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX",
          "message": {
            "title": "Large USDC Transfer Detected",
            "body": "A transfer of over 1,000,000 USDC was detected.\n<https://etherscan.io/tx/{{ transaction_hash }}|View on Etherscan>"
          }
        },
        "policy": {
          "throttle": { "max_count": 5, "time_window_secs": 60 }
        }
      }
    ]
  }
Example Usage:
curl http://localhost:8080/actions
Get a Specific Action
- GET /actions/{id}
  Retrieves the full configuration of a single action by its unique ID.
  URL Parameters:
  - id (integer, required): The unique ID of the action.
  Success Response (200 OK):
  {
    "action": {
      "id": 1,
      "name": "my-webhook",
      "webhook": {
        "url": "https://webhook.site/your-unique-url",
        "method": "POST",
        "headers": { "Content-Type": "application/json" },
        "message": {
          "title": "Large ETH Transfer Detected",
          "body": "- **Amount**: {{ tx.value | ether }} ETH\n- **From**: `{{ tx.from }}`\n- **To**: `{{ tx.to }}`\n- **Tx Hash**: `{{ transaction_hash }}`"
        }
      }
    }
  }
  Error Response (404 Not Found), if no action with the specified ID exists:
  { "error": "Action not found" }
Example Usage:
curl http://localhost:8080/actions/1
Architecture
This document provides a high-level overview of the internal architecture of the Argus application.
Core Principles
- Modular: Each component has a distinct and well-defined responsibility.
- Asynchronous: Built on top of Tokio, the application is designed to be highly concurrent and non-blocking.
- Stateful: The application's progress is persisted to a local database, allowing for resilience and crash recovery.
- Decoupled: Components communicate through channels, reducing tight coupling and improving maintainability.
Key Components
The `src` directory is organized into several modules, each representing a key component of the system.
- `supervisor`: The top-level orchestrator. It is responsible for initializing all other components, wiring them together via channels, and managing the graceful shutdown of the application.
- `monitor`: This module, centered around the `MonitorManager`, is responsible for the lifecycle of monitor configurations. It loads, validates, and analyzes the monitors, preparing them for the filtering engine. It handles dynamic updates to the monitor set.
- `providers`: This component is responsible for fetching block data from the external EVM RPC nodes. It handles connection management, retries, and polling for new blocks.
- `engine`: This is the core data processing pipeline. It is divided into two main stages:
  - `BlockProcessor`: Receives raw block data from the providers and correlates transactions with their corresponding logs and receipts into a structured format.
  - `FilteringEngine`: Receives correlated block data from the `BlockProcessor`. It executes the appropriate Rhai filter scripts for each monitor, lazily decoding event logs and transaction calldata as needed during script execution. Upon a match, it creates a `MonitorMatch` object.
- `notification`: This component is divided into two main parts:
  - `AlertManager`: Receives `MonitorMatch`es from the `FilteringEngine`. It is responsible for managing notification policies (throttling, aggregation) before handing off notifications for dispatch.
  - `NotificationService`: Receives notification requests from the `AlertManager`. It manages a collection of specific action clients (e.g., Webhook, Stdout) and is responsible for the final dispatch of the alert to the external service.
- `persistence`: This module provides an abstraction layer over the database (currently SQLite). It handles all state management, such as storing the last processed block number.
- `config` & `loader`: These modules manage the loading, parsing, and validation of the application's configuration from the YAML files.
- `models`: Defines the core data structures used throughout the application (e.g., `BlockData`, `Transaction`, `Log`, `Monitor`, `Action`).
- `http_server`: Provides a REST API for system introspection and dynamic configuration. It is managed by the `supervisor` and shares access to the application's state.
- `http_client`: Provides a robust and reusable HTTP client with built-in retry logic, used by the notification component to send alerts.
- `main.rs`: The application's entry point. It handles command-line argument parsing and kicks off the supervisor.