Outcome of the first run of Dueling_GPTs_Training_to_win_TicTacToe_ready_for_Hyperparamter_Tuner_V0.0.py: "O has learned to effectively counter X's opening moves"

GPT Tic-Tac-Toe Dueling Agents: A Study in Competitive Learning

This dataset contains game logs and analysis from an experiment where two AI agents, "Agent X" and "Agent O," learned to play Tic-Tac-Toe by playing exclusively against each other. The goal was to observe emergent strategies and learning dynamics in a competitive self-play environment.

Dataset Overview

The dataset primarily consists of:

  • Game Logs (game_logs/): Detailed records of each game played between Agent X and Agent O. Each log (.jsonl format) typically contains the fields below; a minimal loading sketch follows this list:
    • Game ID
    • Sequence of moves, including the player, board state at the time of the move, and the chosen action.
    • The final outcome of the game (X wins, O wins, or Draw).
  • Analysis Reports & Visualizations (Games_Analysed_Reports_V0.1/):
    • A summary text report (console_analysis_report_V0.1.txt) detailing statistics like win rates, performance based on starting player, average game length, and frequency of early wins.
    • PNG images of plots visualizing these trends, such as:
      • Overall outcome distributions.
      • Outcome ratios over game batches (illustrating performance changes over the training period).
      • Frequency of early wins over time.
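
The exact field names inside the .jsonl files are not specified here, so the snippet below treats game_id, moves, and outcome as assumed keys. It is a minimal Python sketch for loading one log file and inspecting its first record:

```python
import json
from pathlib import Path

# Hypothetical file name and field names; adjust to match the actual files in game_logs/.
log_path = Path("game_logs") / "example_run.jsonl"

games = []
with log_path.open() as f:
    for line in f:
        line = line.strip()
        if line:                      # skip blank lines
            games.append(json.loads(line))

# Each record is assumed to carry a game ID, a move sequence, and a final outcome.
first = games[0]
print(first.get("game_id"))
print(first.get("moves"))    # e.g. a list of {player, board_state, action} entries
print(first.get("outcome"))  # e.g. "X", "O", or "Draw"
```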

Experimental Setup (Conceptual)

Two distinct AI agents were initialized with the objective of winning at Tic-Tac-Toe.

  • Learning Mechanism: The agents learned solely through direct competition. After each game, feedback based on the outcome (win, loss, or draw) was used to update the respective agent's strategy: winning moves were reinforced and, by implication, strategies leading to losses were discouraged (an illustrative update sketch follows this list).
  • Self-Play: Agents X and O played a large number of games against each other. The data generated from these games served as the primary training material.
  • No External Data: The agents did not learn from human games or pre-existing Tic-Tac-Toe datasets. Their knowledge was developed de novo from their interactions.
  • Seeding (Minimal): A very small number of predefined "good" and "bad" starting moves were provided to one agent initially to bootstrap the learning process.
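
The actual update rule used by the training script is not disclosed in this description. As a purely illustrative sketch of the outcome-based feedback described above, the following tabular update reinforces a player's moves after a win and discourages them after a loss; the table, function, and learning rate are all assumptions, not the script's implementation:

```python
from collections import defaultdict

# Hypothetical move-value table mapping (board_state, action) -> preference score.
move_values = defaultdict(float)
LEARNING_RATE = 0.1  # assumed value, not taken from the experiment

def update_from_game(moves, outcome, player):
    """Reinforce or discourage `player`'s moves based on the game's final outcome.

    moves   : list of (board_state, action, mover) tuples for one game
    outcome : "X", "O", or "Draw"
    player  : "X" or "O" -- the agent whose table is being updated
    """
    if outcome == "Draw":
        reward = 0.0
    elif outcome == player:
        reward = 1.0    # winning moves are reinforced
    else:
        reward = -1.0   # moves on the losing side are discouraged

    for board_state, action, mover in moves:
        if mover == player:
            move_values[(board_state, action)] += LEARNING_RATE * reward
```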

Observed Learning Dynamics & Key Results

The analysis of the game logs (specifically from the run_20250603_222220 experiment detailed in the reports) revealed several notable dynamics (a sketch for recomputing the headline statistics from the logs follows this list):

  1. Emergence of a Dominant Agent:

    • Over the course of approximately 970 self-play games, Agent O demonstrated superior learning and consistently outperformed Agent X.
    • Overall, Agent O won ~68.5% of the games, Agent X won ~29.6%, and ~2.0% were draws.
  2. Starting Player Anomaly & Its Impact:

    • A significant observation was that Agent X was inadvertently configured to always make the first move in this particular experimental run.
    • Despite having the theoretical first-move advantage in every game, Agent X was still outperformed by Agent O. This highlights Agent O's ability to develop strong counter-strategies.
    • The plots tracking outcome ratios over game batches (when X started) show that while X had a more competitive win rate initially, O's performance against X's openings improved significantly as more games were played.
  3. Efficiency of Winning:

    • Agent O not only won more frequently but also achieved a substantially higher number of "early wins" (winning in 6 moves or fewer) compared to Agent X (493 for O vs. 80 for X). This suggests Agent O became adept at capitalizing on Agent X's mistakes to secure victories quickly.
    • The average game length was approximately 6.6 moves, indicating that many games did not reach a full board.
  4. Learning Progression:

    • Both agents showed signs of learning, as indicated by changes in their loss metrics during internal update phases (these metrics are not part of this dataset but are inferred from the training process that generated the logs).
    • However, Agent O's adaptation and strategy refinement outpaced Agent X's within this competitive environment.
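
The statistics above are taken from the bundled report; comparable numbers (outcome ratios, early-win counts, average game length) can be recomputed directly from the logs. The sketch below uses the same assumed field names as the earlier loading example:

```python
import json
from collections import Counter
from pathlib import Path

outcomes = Counter()     # games won by "X", "O", or ending in "Draw"
early_wins = Counter()   # wins in 6 moves or fewer, keyed by winner
total_moves = 0
n_games = 0

# Hypothetical layout: one .jsonl file per run inside game_logs/.
for path in Path("game_logs").glob("*.jsonl"):
    with path.open() as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            game = json.loads(line)
            outcome = game["outcome"]      # assumed key: "X", "O", or "Draw"
            n_moves = len(game["moves"])   # assumed key
            outcomes[outcome] += 1
            total_moves += n_moves
            n_games += 1
            if outcome in ("X", "O") and n_moves <= 6:
                early_wins[outcome] += 1

if n_games:
    print({k: round(v / n_games, 3) for k, v in outcomes.items()})  # outcome ratios
    print(dict(early_wins))                                         # early-win counts
    print(round(total_moves / n_games, 1))                          # average game length
```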

Significance & Potential Use

This dataset provides a snapshot of a competitive learning process. It can be used to:

  • Study emergent strategies in a simple game like Tic-Tac-Toe.
  • Analyze how different learning agents adapt to each other over time.
  • Investigate the impact of starting conditions (like first-player advantage or imbalances in initial skill) on the learning trajectory.
  • Serve as a case study for developing more robust and adaptive AI agents that learn through self-play.

Future Work (Implied by the Experiment)

  • Rectifying the starting-player assignment to allow for alternating or random starts is crucial for a more balanced evaluation and training environment (see the sketch after this list).
  • Further hyperparameter tuning and potentially architectural adjustments would be needed to improve the competitiveness of the underperforming agent (Agent X in this run).
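
For the first point, a simple corrective for a future run is to alternate (or randomize) which agent opens each game. The helper below is a hypothetical sketch, not part of the existing script:

```python
import random

def choose_starter(game_index, randomize=False):
    """Return which agent ("X" or "O") opens the given game.

    Alternating (or randomizing) the starter removes the systematic
    first-move advantage that Agent X held throughout run_20250603_222220.
    """
    if randomize:
        return random.choice(["X", "O"])
    return "X" if game_index % 2 == 0 else "O"
```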

Disclaimer

The internal architecture, specific algorithms, and hyperparameter details of Agent X and Agent O are not disclosed as part of this dataset description. The focus is on the observable outcomes and dynamics of their competitive interaction.