Below are all the intermediate tensors created by llama.cpp during a single forward pass through Qwen3-0.6B.
ref: https://github.com/ddh0/llama.cpp/tree/save-tensor-npy-feature
./build/bin/llama-eval-callback -m ~/gguf/Qwen3-0.6B-f32.gguf -p "The quick brown fox" -n 1 -v
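Once the dump exists, the saved tensors can be inspected with NumPy. A minimal sketch, assuming the branch writes one `.npy` file per tensor; the filename below is a placeholder, not the branch's actual naming scheme:

```python
import numpy as np

# Placeholder: stand in for a real dumped tensor file, since the
# actual names produced by the save-tensor-npy branch are not listed here.
np.save("demo_tensor.npy", np.zeros((1024, 5), dtype=np.float32))

# Loading and inspecting a dumped tensor works the same way for any .npy file.
t = np.load("demo_tensor.npy")
print(t.shape, t.dtype)  # shape and dtype of the intermediate tensor
```

Each `.npy` file is self-describing (shape and dtype are stored in the header), so no extra metadata is needed to reload a tensor.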