
b7787

Jan 21, 2026
Meta/llama.cpp CLI vb7787

gguf: display strerrno when a model can't be loaded (#18884)

I've had issues loading models with llama-server; all I got was

[44039] E gguf_init_from_file: failed to open GGUF file 'mistral-7b-v0.1.Q8_0.gguf'

and I was sure the server could access the file. It turns out --models-dir and --models-presets don't interact the way I thought they would, but from the troubleshooting I salvaged this snippet, which makes the error self-explanatory:

[44039] E gguf_init_from_file: failed to open GGUF file 'mistral-7b-v0.1.Q8_0.gguf' (errno No such file or directory)
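The change itself is small: when opening the GGUF file fails, the existing error line gains the human-readable errno string. Below is a minimal sketch of the idea in C, assuming the failure path goes through fopen; the function name open_gguf_or_report and the exact log prefix are illustrative, not the actual llama.cpp code:

```c
// Sketch: report strerror(errno) when a GGUF file cannot be opened,
// so "file not found" is distinguishable from "permission denied", etc.
#include <errno.h>
#include <stdio.h>
#include <string.h>

static FILE * open_gguf_or_report(const char * fname) {
    FILE * f = fopen(fname, "rb");
    if (f == NULL) {
        // Append the errno description to the original error message.
        fprintf(stderr,
                "gguf_init_from_file: failed to open GGUF file '%s' (errno %s)\n",
                fname, strerror(errno));
        return NULL;
    }
    return f;
}

int main(int argc, char ** argv) {
    const char * fname = argc > 1 ? argv[1] : "mistral-7b-v0.1.Q8_0.gguf";
    FILE * f = open_gguf_or_report(fname);
    if (f) {
        fclose(f);
    }
    return f ? 0 : 1;
}
```

With a missing file this prints the same shape of message as the example above, e.g. "(errno No such file or directory)".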
