Trying gemma3:1b on my laptop with llama.cpp. I've heard it's faster than Ollama?
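For reference, this is roughly what I'm running. A minimal sketch, assuming llama.cpp is already built and a quantized GGUF of the model has been downloaded locally; the model filename here is hypothetical and depends on which quantization you grabbed:

```shell
# Run a one-off prompt with llama.cpp's CLI.
# -m: path to the local GGUF file (placeholder name, adjust to your download)
# -p: the prompt
# -n: max tokens to generate
./llama-cli -m ./gemma-3-1b-it-Q4_K_M.gguf -p "Hello, who are you?" -n 64
```

Worth noting that Ollama uses llama.cpp under the hood, so any speed difference likely comes from defaults (context size, quantization, GPU offload settings) rather than the inference engine itself.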