[2024-10-14 08:33:15] Experiment directory created at results/011-GPT-B
[2024-10-14 08:33:15] Experiment directory created in cloud at ./output/2024-10-14-08-33-15/011-GPT-B/checkpoints
[2024-10-14 08:33:15] Namespace(data_path='/data0/data/imagenet', cloud_save_path='./output', no_local_save=False, gpt_model='GPT-B', gpt_ckpt=None, gpt_type='c2i', vocab_size=16384, ema=False, cls_token_num=1, dropout_p=0.1, token_dropout_p=0.1, drop_path_rate=0.0, no_compile=True, results_dir='results', dataset='imagenet_code', image_size=256, downsample_size=16, num_classes=1000, epochs=300, lr=0.0001, weight_decay=0.05, beta1=0.9, beta2=0.95, max_grad_norm=1.0, global_batch_size=64, global_seed=0, num_workers=24, log_every=100, ckpt_every=5000, gradient_accumulation_steps=1, mixed_precision='bf16', vq_model='VQ-16', vq_ckpt='./vq_ds16_c2i.pt', codebook_size=16384, codebook_embed_dim=8, max_steps=100000, rank=0, world_size=8, gpu=0, dist_url='env://', distributed=True, dist_backend='nccl')
[2024-10-14 08:33:15] Starting rank=0, seed=0, world_size=8.
[2024-10-14 08:33:18] GPT Parameters: 110,888,448
[2024-10-14 08:33:18] num decayed parameter tensors: 63, with 110,869,248 parameters
[2024-10-14 08:33:18] num non-decayed parameter tensors: 25, with 19,200 parameters
[2024-10-14 08:33:18] using fused AdamW: True
[2024-10-14 08:33:21] Dataset contains 1,331,167 images (/data0/data/imagenet)
[2024-10-14 08:33:21] Total # samples to consume: 6,400,000 (4.81 epochs)
[2024-10-14 08:33:21] Training for 300 epochs...
[2024-10-14 08:33:21] Beginning epoch 0...
[2024-10-14 08:33:30] Reducer buckets have been rebuilt in this iteration.
[2024-10-14 08:34:32] (step=0000100) Train Loss: 9.6467, Train Steps/Sec: 1.41
[2024-10-14 08:35:35] (step=0000200) Train Loss: 9.5238, Train Steps/Sec: 1.58
