Gemma 4 was released recently, so I tried running a local LLM for the first time. I chose Ollama to manage it because it seems to integrate well with my coding tools.
I used the DMG file to install Ollama on my Mac, as that was the recommended method:
"The preferred method of installation is to mount the ollama.dmg and drag-and-drop the Ollama application to the system-wide Applications folder."
After installing Ollama, check the version in the terminal:
myuser@my-Mac-mini ~ % ollama -v
ollama version is 0.20.3
Gemma4:26B is a workstation-class mixture-of-experts model with 4B active parameters. I thought my 32GB of memory might be too small, but 26B appears to be the smallest workstation variant of Gemma4 available.
myuser@my-Mac-mini ~ % ollama run gemma4:26b
pulling manifest
pulling 7121486771cb: 9% ▕█ ▏ 1.6 GB/ 17 GB 63 MB/s 4m16s
The download failed over Wi-Fi with a 'read operation timed out' error; it succeeded after I switched to a wired connection.
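Once the model is pulled, it can be queried through Ollama's local REST API as well as the CLI (the server listens on port 11434 by default). Here is a minimal sketch of building a request body for the `/api/generate` endpoint; the model tag matches the one pulled above, and the prompt is just an example:

```python
import json

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> str:
    # Request body shape for Ollama's /api/generate endpoint:
    # "model" is the local model tag, "prompt" is the user input,
    # and "stream": False asks for a single JSON response instead
    # of a stream of partial chunks.
    payload = {"model": model, "prompt": prompt, "stream": stream}
    return json.dumps(payload)

body = build_generate_payload("gemma4:26b", "Why is the sky blue?")
print(body)
```

With the Ollama app (or `ollama serve`) running, you can POST this body with `curl -d "$body" http://localhost:11434/api/generate` and get the completion back as JSON.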