Obscura Blitz v0.0.4

Model Description

Obscura Blitz is a specialized fine-tuned language model for cryptocurrency trading signal generation and decision-making. Built on Qwen3-4B-Instruct-2507, it has been trained to analyze cryptocurrency market data and produce structured trading recommendations.

Key Features

  • Cryptocurrency Trading Signals: Generates BUY/SELL/HOLD/MONITOR signals for crypto tokens
  • Risk Assessment: Provides LOW/MEDIUM/HIGH risk classifications
  • Confidence Scoring: Outputs confidence levels (0.0-1.0) for each recommendation
  • Structured JSON Output: Returns well-formatted, parseable trading recommendations (see the schema sketch below)
  • Multi-Token Analysis: Analyzes up to 10 tokens simultaneously with comprehensive reasoning
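
The recommendation format listed above can be captured in a small schema. A minimal sketch, with illustrative type names that are not part of the model's API:

from typing import Literal, TypedDict

class TokenRecommendation(TypedDict):
    signal: Literal["BUY", "SELL", "HOLD", "MONITOR"]
    confidence: float  # 0.0-1.0
    reasoning: str     # brief justification
    risk: Literal["LOW", "MEDIUM", "HIGH"]

class ModelOutput(TypedDict):
    tokens: dict[str, TokenRecommendation]  # keyed by token symbol, e.g. "ETH"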

Model Details

Base Model

  • Architecture: Qwen3-4B-Instruct-2507
  • Parameters: 4B
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Training Framework: LLaMA-Factory

Training Data

  • Dataset Size: 14,079 samples
  • Signal Generation: 12,365 samples (87.8%)
  • Trading Decisions: 1,714 samples (12.2%)
  • Data Source: Real cryptocurrency market data from the Ethereum blockchain (a retrieval sketch follows this list)
  • Time Period: August 2025 (blocks 23125524-23125623)
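
As one illustration of pulling data for that block range, here is a hedged sketch using web3.py; this is not the project's actual pipeline, and the RPC endpoint is a placeholder:

from web3 import Web3

# Placeholder RPC endpoint; substitute your own provider
w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.com"))

# The block range cited in the training data description (inclusive)
for block_number in range(23125524, 23125624):
    block = w3.eth.get_block(block_number)
    print(block_number, block.timestamp, len(block.transactions))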

Performance Metrics

Based on training results:

  • Final Training Loss: 0.049
  • Final Validation Loss: 0.0464
  • Training Convergence: Stable convergence over 3 epochs
  • Validation Performance: Validation loss improved consistently from 0.1044 (step 100) to 0.0464 (step 1300)

Usage

Basic Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
base_model_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "ahmedaali/obscura-blitz-v0.0.4-qwen-3"

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
model.load_adapter(adapter_id)

# Set padding token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
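
Qwen instruct models are normally prompted through their chat template rather than with raw text. A minimal sketch of wrapping a prompt that way, assuming the model and tokenizer loaded above:

# Wrap the user prompt in the Qwen chat template before tokenizing
messages = [{"role": "user", "content": "Analyze these tokens ..."}]
chat_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the assistant turn marker
)
inputs = tokenizer(chat_text, return_tensors="pt").to(model.device)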

Example Input

prompt = """Analyze these 10 cryptocurrency tokens and provide trading signals.

TOKENS: [
  {
    "symbol": "ETH",
    "signal": "MONITOR",
    "price": "$4668.10",
    "change_24h": "6.3%",
    "change_7d": "29.8%"
  },
  {
    "symbol": "SOL",
    "signal": "MONITOR",
    "price": "$201.61",
    "change_24h": "13.0%",
    "change_7d": "22.9%"
  }
]

Return this EXACT JSON format for each token:
{
  "tokens": {
    "SYMBOL": {
      "signal": "BUY/SELL/HOLD/MONITOR",
      "confidence": 0.0-1.0,
      "reasoning": "Brief reason",
      "risk": "LOW/MEDIUM/HIGH"
    }
  }
}

Focus on price trends and momentum. Keep responses brief.

IMPORTANT: Return complete, valid JSON. Do not truncate."""

# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
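
Because outputs[0] includes the prompt tokens, the JSON has to be pulled out of the decoded text. A minimal extraction sketch, assuming the model emits exactly one top-level JSON object after the prompt:

import json

# Decode only the newly generated tokens, skipping the prompt
generated = outputs[0][inputs["input_ids"].shape[1]:]
completion = tokenizer.decode(generated, skip_special_tokens=True)

# Extract the first top-level JSON object by its outermost braces
start = completion.find("{")
end = completion.rfind("}") + 1
recommendations = json.loads(completion[start:end])
print(recommendations["tokens"])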

Expected Output

{
  "tokens": {
    "ETH": {
      "signal": "BUY",
      "confidence": 0.8,
      "reasoning": "Strong upward momentum with significant 24h and 7d gains.",
      "risk": "MEDIUM"
    },
    "SOL": {
      "signal": "BUY",
      "confidence": 0.75,
      "reasoning": "Excellent performance with 13% daily and 22.9% weekly gains.",
      "risk": "MEDIUM"
    }
  }
}
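
A minimal validation sketch for the parsed output; the field names follow the prompt's schema, and the checks themselves are illustrative:

VALID_SIGNALS = {"BUY", "SELL", "HOLD", "MONITOR"}
VALID_RISKS = {"LOW", "MEDIUM", "HIGH"}

def validate(recommendations: dict) -> None:
    """Raise AssertionError if any recommendation violates the schema."""
    for symbol, rec in recommendations["tokens"].items():
        assert rec["signal"] in VALID_SIGNALS, f"{symbol}: bad signal"
        assert rec["risk"] in VALID_RISKS, f"{symbol}: bad risk"
        assert 0.0 <= rec["confidence"] <= 1.0, f"{symbol}: confidence out of range"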

Training Details

Dataset Structure

The model was trained on two main types of data; a sample-format sketch follows the list:

  1. Signal Generation: Analysis of cryptocurrency price data to generate trading signals
  2. Trading Decisions: Portfolio allocation and trading strategy decisions
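
LLaMA-Factory commonly consumes instruction-tuning data in an alpaca-style JSON layout. A hedged sketch of what one signal-generation sample might look like; the exact fields used for this model are not published, so treat this as illustrative:

# Illustrative alpaca-style training sample (field contents are assumed)
sample = {
    "instruction": "Analyze these cryptocurrency tokens and provide trading signals.",
    "input": '[{"symbol": "ETH", "price": "$4668.10", "change_24h": "6.3%", "change_7d": "29.8%"}]',
    "output": '{"tokens": {"ETH": {"signal": "BUY", "confidence": 0.8, '
              '"reasoning": "Strong upward momentum.", "risk": "MEDIUM"}}}',
}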

Training Configuration

  • Learning Rate: 1e-4 (0.0001)
  • Train Batch Size: 1
  • Eval Batch Size: 1
  • Total Train Batch Size: 4 (with gradient accumulation)
  • Gradient Accumulation Steps: 4
  • Epochs: 3.0
  • Warmup Ratio: 0.1
  • Optimizer: AdamW with betas=(0.9,0.999), epsilon=1e-08
  • Learning Rate Scheduler: Cosine
  • Seed: 42
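
The hyperparameters above map onto standard Hugging Face training arguments. A minimal sketch using transformers and peft; the LoRA rank, alpha, and target modules are not stated on this card, so those values are assumptions:

from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings: r, lora_alpha, and target_modules are assumed, not documented here
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Values below mirror the configuration listed on this card
training_args = TrainingArguments(
    output_dir="obscura-blitz",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # effective train batch size of 4
    num_train_epochs=3.0,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)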

Training Results

The model was trained for 3 epochs with the following progression:

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.1194        | 0.2223 | 100  | 0.1044          |
| 0.1023        | 0.4447 | 200  | 0.0874          |
| 0.0873        | 0.6670 | 300  | 0.0740          |
| 0.1051        | 0.8894 | 400  | 0.0672          |
| 0.0636        | 1.1112 | 500  | 0.0664          |
| 0.0720        | 1.3335 | 600  | 0.0655          |
| 0.0807        | 1.5559 | 700  | 0.0573          |
| 0.0536        | 1.7782 | 800  | 0.0580          |
| 0.0453        | 2.0000 | 900  | 0.0510          |
| 0.0469        | 2.2223 | 1000 | 0.0489          |
| 0.0318        | 2.4447 | 1100 | 0.0482          |
| 0.0527        | 2.6670 | 1200 | 0.0468          |
| 0.0490        | 2.8894 | 1300 | 0.0464          |

Final Results:

  • Final Training Loss: 0.049
  • Final Validation Loss: 0.0464
  • Total Training Steps: 1,300

Framework Versions

  • PEFT: 0.15.2
  • Transformers: 4.55.0
  • PyTorch: 2.5.1+cu121
  • Datasets: 3.2.0
  • Tokenizers: 0.21.1

Limitations and Considerations

Known Limitations

  • Market Context: The model was trained on data from August 2025 and may not reflect current market conditions
  • Token Coverage: Limited to the specific tokens present in the training dataset
  • Risk Assessment: Risk classifications are based on historical patterns and may not account for unforeseen market events
  • Confidence Scoring: Confidence levels are relative to the training data and should be interpreted carefully

Ethical Considerations

  • Not Financial Advice: This model is for educational and research purposes only
  • Market Volatility: Cryptocurrency markets are highly volatile and unpredictable
  • Risk Management: Always conduct thorough research and consider professional financial advice
  • Regulatory Compliance: Ensure compliance with local financial regulations

Bias and Fairness

  • Data Bias: Training data may reflect historical market biases
  • Token Selection: Limited to specific cryptocurrencies in the training dataset
  • Market Conditions: Performance may vary under different market conditions

Citation

If you use this model in your research, please cite:

@misc{obscura_blitz_2025,
  title={Obscura Blitz: A Fine-tuned Language Model for Cryptocurrency Trading Signals},
  author={Ahmed Aali},
  year={2025},
  url={https://huggingface.co/ahmedaali/obscura-blitz-v0.0.4-qwen-3}
}

License

This model is released under the MIT License. See the LICENSE file for details.

Acknowledgments

  • Base Model: Qwen3-4B-Instruct-2507 by Alibaba Cloud
  • Training Framework: LLaMA-Factory
  • Dataset: Real cryptocurrency market data from the Ethereum blockchain
  • Evaluation: Automated assessment using OpenAI GPT-4

Contact

For questions, issues, or contributions:

  • Author: Ahmed Aali
  • Organization: Droq AI
  • GitHub: DroqAI

Disclaimer: This model is provided for educational and research purposes only. It is not intended to provide financial advice. Cryptocurrency trading involves substantial risk and may result in the loss of your invested capital. Always conduct thorough research and consider consulting with a qualified financial advisor before making investment decisions.
