πŸ“ˆ Time Series Transformer Classifier for Algorithmic Trading

This repository provides a Transformer-based time series classifier trained on Gold Futures (GC=F, 1-hour timeframe) to predict short-term price direction. The model classifies each input window as Up, Flat, or Down, and these classes can be used to generate trading signals.


πŸ”§ Model Architecture

  • Base: Transformer Encoder with Positional Encoding (see the class sketch after this list)
  • Head: Classification Layer
  • Framework: PyTorch
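
The model class itself is not reproduced in this card, although the usage snippet below assumes it. A minimal sketch consistent with the architecture above and the hyperparameters used in the usage section might look as follows; the input projection, sinusoidal encoding, mean-pooling head, and max_len are assumptions, and load_state_dict will only succeed against the exact definition from the training code:

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding added to the projected inputs."""
    def __init__(self, d_model, max_len=512):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # [1, max_len, d_model]

    def forward(self, x):  # x: [batch, seq_len, d_model]
        return x + self.pe[:, :x.size(1)]

class TimeSeriesTransformerCLS(nn.Module):
    """Transformer encoder over a feature window with a classification head."""
    def __init__(self, n_features, n_classes, d_model=128, n_heads=4,
                 n_layers=4, d_ff=256, dropout=0.1):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        self.pos_enc = PositionalEncoding(d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=d_ff,
            dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: [batch, seq_len, n_features]
        z = self.encoder(self.pos_enc(self.input_proj(x)))
        return self.head(z.mean(dim=1))  # mean-pool over time -> [batch, n_classes] logits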

πŸ“Š Training Setup

  • Dataset: GC=F (Gold Futures), 1-hour interval, 2 years of history from Yahoo Finance.
  • Features: EMA, RSI, ATR, MACD, Bollinger Bands, lagged returns, realized volatility, and cyclical time features (hour/day).
  • Target: 3 classes (Up, Flat, Down), horizon = 6 hours, threshold = 0.0005 (see the labeling sketch after this list).
  • Split: walk-forward (80% train, 20% validation).
  • Loss: CrossEntropyLoss (class-weighted).
  • Optimizer: AdamW.
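
The data pipeline itself is not included in the card. A hedged sketch of how the dataset and target could be reproduced is shown below; using yfinance as the Yahoo Finance client is an assumption, and the class indices follow the 0=Down, 1=Flat, 2=Up convention from the usage example further down:

import numpy as np
import yfinance as yf

# Roughly two years of hourly Gold Futures bars (Yahoo Finance caps 1h history at ~730 days)
df = yf.download("GC=F", period="2y", interval="1h")
close = df["Close"].squeeze()  # tolerate both flat and MultiIndex column layouts

# 3-class target: forward return over a 6-bar (6h) horizon vs. a +/-0.0005 threshold
HORIZON, THRESHOLD = 6, 0.0005
fwd_return = close.shift(-HORIZON) / close - 1.0
target = np.select(
    [fwd_return > THRESHOLD, fwd_return < -THRESHOLD],
    [2, 0],      # 2 = Up, 0 = Down
    default=1,   # 1 = Flat
)[:-HORIZON]     # drop bars whose forward return is undefined

# Walk-forward split: first 80% train, last 20% validation (chronological, no shuffle)
split = int(len(target) * 0.8)
train_y, valid_y = target[:split], target[split:]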

πŸ“‰ Backtest Results

The model was evaluated via a walk-forward backtest with continuous position sizing (an assumed sizing rule is sketched below).
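
The card does not specify the exact sizing rule. One common continuous scheme, shown purely as an illustrative assumption, scales the position by the model's directional conviction:

import numpy as np

def continuous_position(probs):
    """probs: [N, 3] softmax outputs with columns (Down, Flat, Up).
    Returns a position in [-1, 1]: net long when Up dominates, net short when Down does."""
    return np.clip(probs[:, 2] - probs[:, 0], -1.0, 1.0)

# Per-bar strategy return, with the position lagged one bar to avoid look-ahead:
# strategy_returns = continuous_position(probs)[:-1] * bar_returns[1:]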

Raw Signal Performance

  • Final Equity: 206.24
  • Sharpe Ratio: 0.032
  • Profit Factor: 1.099
  • Winrate: 52.2%
  • Max Drawdown: -10.6%

Confidence-Filtered Signal (CONF_THR = 0.45)

  • Final Equity: 100.0
  • Sharpe Ratio: 0.0
  • Winrate: 0% (the filter passed no trades at this threshold; a filter sketch follows)
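
The filtering step itself is not shown in the card. A minimal sketch of how such a confidence gate is typically applied, with the fall-back-to-Flat behavior as an assumption:

import numpy as np

CONF_THR = 0.45

def confidence_filtered_signal(probs, conf_thr=CONF_THR):
    """probs: [N, 3] softmax outputs (Down, Flat, Up). Keeps the argmax class
    only when its probability clears conf_thr; otherwise falls back to Flat."""
    signal = probs.argmax(axis=1)
    signal[probs.max(axis=1) < conf_thr] = 1  # 1 = Flat (no position)
    return signal

In this run, no bar cleared the 0.45 threshold, which is consistent with the 0% winrate and a final equity equal to the apparent starting value of 100.0.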

πŸ“Š Backtest Equity Curve


πŸš€ Usage

from huggingface_hub import hf_hub_download
import torch
import torch.nn.functional as F
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Download model weights
repo_id = "JonusNattapong/transformer-classifier-gc1h"
filename = "transformer_cls_gc1h.pt"
state_dict_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Define the model (must match the training config; see the TimeSeriesTransformerCLS sketch above)
model_inf = TimeSeriesTransformerCLS(
    n_features=n_features,  # number of feature columns produced by the training pipeline
    n_classes=3,
    d_model=128,
    n_heads=4,
    n_layers=4,
    d_ff=256,
    dropout=0.1
).to(device)

# Load weights
model_inf.load_state_dict(torch.load(state_dict_path, map_location=device))
model_inf.eval()

# Example inference on the most recent WINDOW bars.
# `scaled` is the scaled feature DataFrame and WINDOW the training sequence
# length; both come from the feature pipeline, not from this snippet.
example_input = scaled.iloc[-WINDOW:].values.astype(np.float32)  # shape [T, F]
example_input_tensor = torch.tensor(example_input).unsqueeze(0).to(device)  # [1, T, F]

with torch.no_grad():
    logits = model_inf(example_input_tensor)
    probabilities = F.softmax(logits, dim=1).squeeze(0).cpu().numpy()
    predicted_class = int(torch.argmax(logits, dim=1).item())

print("Probabilities:", probabilities)
print("Predicted Class:", predicted_class)  # 0=Down, 1=Flat, 2=Up