Luminar Network
mainnet · SN 87

Traffic Detection & Tracking Benchmark

Status: Active

The evaluation framework used by Luminar validators to score miner agent submissions.

Benchmark Overview

Name: Polyglot Python
Task type: Detection / Tracking
Scoring: F1 combined with correct number of predictions
Score range: 0.0 – 1.0 (0% – 100%)
Eval mode: Per submission, by at least 3 active validators; the best score is selected
Ground truth: Private CSV, loaded in the eval container and destroyed upon completion

Submission Format

File: agent.py
Endpoint: POST /v1/submissions/submit
Auth: sr25519 signature of hotkey:nonce
Size limit: Contact subnet owner
Statuses: pending_upload, ready, failed_upload
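The submission request can be sketched as a plain request body. Only the endpoint path and the hotkey:nonce auth message come from the format above; the field names and JSON shape below are assumptions, and the sr25519 signing itself (done by the CLI with your wallet key) is omitted.

```python
import time

def build_submission_request(hotkey: str, nonce: int, signature_hex: str,
                             agent_path: str = "agent.py") -> dict:
    """Assemble a body for POST /v1/submissions/submit (field names assumed)."""
    return {
        "hotkey": hotkey,
        "nonce": nonce,
        # the sr25519 signature covers the string "hotkey:nonce"
        "auth_message": f"{hotkey}:{nonce}",
        "signature": signature_hex,
        "filename": agent_path,
    }

# hypothetical hotkey and signature, for illustration only
req = build_submission_request("5FexampleHotkey", nonce=int(time.time()),
                               signature_hex="0xdeadbeef")
```

A fresh nonce per submission is what makes the signed message non-replayable.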

How Scoring Works

Each submission is an agent.py file that implements a solution agent. Validators download the submission, run it against a private ground_truth.csv, and compute a score between 0 and 1 representing the fraction of test cases solved correctly.
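The benchmark overview describes the score as F1 combined with the correct number of predictions. The exact combination used by validators is not public; the sketch below assumes an even 50/50 blend of F1 over matched detections and a prediction-count accuracy term, just to illustrate how both components can land in the 0.0 – 1.0 range.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1 over true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def combined_score(tp: int, fp: int, fn: int, n_pred: int, n_truth: int) -> float:
    """F1 blended with a prediction-count term (50/50 weighting is an assumption)."""
    f1 = f1_score(tp, fp, fn)
    # 1.0 when the number of predictions matches ground truth exactly
    count_acc = 1.0 - abs(n_pred - n_truth) / max(n_pred, n_truth, 1)
    return 0.5 * f1 + 0.5 * count_acc

# e.g. 8 correct detections out of 10 predictions, 10 ground-truth objects
score = combined_score(tp=8, fp=2, fn=2, n_pred=10, n_truth=10)
```

Both terms are bounded by [0, 1], so any convex combination of them stays in the benchmark's score range.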

Multiple validators may evaluate the same submission. The backend aggregates scores and marks the submission with the highest score as is_best = true. This best submission is used by validators for on-chain weight setting.
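Per the eval mode above, at least three active validators score each submission and the best score is kept. A minimal sketch of that aggregation (the pending-until-quorum behaviour is an assumption):

```python
def final_score(validator_scores: list[float], min_validators: int = 3):
    """Return the submission's recorded score once enough validators report."""
    if len(validator_scores) < min_validators:
        return None  # still pending evaluation (assumed behaviour)
    return max(validator_scores)  # best score across validators is selected

final_score([0.71, 0.74, 0.69])  # 0.74
final_score([0.90])              # None: fewer than 3 reports so far
```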

Miners are ranked by their best-ever submission score. A new submission that scores higher than all previous submissions from that hotkey becomes the new best.
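The best-ever rule can be sketched as a simple in-memory update: a new submission replaces a hotkey's best only if it strictly beats every previous score. The record shape here is hypothetical, not the backend's actual schema.

```python
def update_best(best_by_hotkey: dict, hotkey: str, sub_id: str, score: float) -> dict:
    """Flag a new submission as is_best only if it beats the hotkey's prior best."""
    current = best_by_hotkey.get(hotkey)
    if current is None or score > current["score"]:
        best_by_hotkey[hotkey] = {"submission": sub_id, "score": score, "is_best": True}
    return best_by_hotkey

best = {}
update_best(best, "hk1", "s1", 0.70)
update_best(best, "hk1", "s2", 0.65)  # lower: best stays s1
update_best(best, "hk1", "s3", 0.80)  # higher: s3 becomes the new best
```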

How to Submit

  1. Submit the agent with the CLI:
     python luminar.py submit --agent agent.py

     This uploads the agent file to the backend.

  2. Provide your wallet credentials (wallet name and password) when prompted.

     The credentials are used to sign the submission and complete the upload.

⚠ Blacklisting Policy

Miners who submit plagiarised agents, malware, or otherwise malicious code will be blacklisted by validators via POST /v1/validator/blacklist. Blacklisted hotkeys cannot submit new agents and their scores are excluded from leaderboards. Validators provide a signed reason with each report.
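A blacklist report could be assembled as follows. Only the endpoint path and the signed-reason requirement come from the policy above; the field names are assumptions, and producing the actual sr25519 signature with the validator's key is omitted.

```python
def build_blacklist_report(validator_hotkey: str, offender_hotkey: str,
                           reason: str, signature_hex: str) -> dict:
    """Assemble a body for POST /v1/validator/blacklist (field names assumed)."""
    return {
        "validator_hotkey": validator_hotkey,
        "hotkey": offender_hotkey,       # the hotkey being reported
        "reason": reason,
        "signature": signature_hex,      # signature over the reason (assumed)
    }

report = build_blacklist_report("5FValidator", "5FOffender",
                                "plagiarised agent", "0x00")
```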