Gemma 4 Guides
Does Unsloth Support Gemma 4? Local Run and Fine-Tuning Status

If you are searching for Unsloth Gemma 4 support, the short answer is yes.
As of April 7, 2026, Unsloth's official Gemma 4 docs say you can:
- run Gemma 4 locally in Unsloth Studio
- load GGUF and safetensors models
- fine-tune Gemma 4 with LoRA / QLoRA
- work with vision, audio, and RL workflows
So the question is not whether Unsloth supports Gemma 4, but which Gemma 4 models and which training paths make sense for your hardware.
Does Unsloth support Gemma 4? Short answer
Yes.
Unsloth's official Gemma 4 pages explicitly state that you can:
- run Gemma 4 locally on Mac, Linux, WSL, and Windows
- use Unsloth Studio for local model use
- fine-tune Gemma 4 with dedicated training docs
- export trained results to GGUF for local runtimes
That is real support, not an accidental workaround.
Which Gemma 4 models are supported in Unsloth?
Unsloth's Gemma 4 docs cover the full family:
- E2B
- E4B
- 26B A4B
- 31B
The docs also separate the family in a helpful way:
- E2B and E4B are the small multimodal models with audio support
- 26B A4B and 31B are the bigger local and training targets
So Unsloth Gemma 4 support is not just a single-model story.
What support means in practice
For practical users, Unsloth currently gives Gemma 4 support in three ways:
1. Local run support
Unsloth Studio can run GGUF and safetensors models locally.
2. Fine-tuning support
Unsloth has official Gemma 4 training docs and notebooks for LoRA / QLoRA workflows.
3. Export support
You can export adapters or outputs for downstream local runtimes such as llama.cpp, Ollama, and LM Studio.
That makes Unsloth especially useful if you are not just trying to chat with a model, but trying to build a real tuning workflow.
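To make the fine-tuning path above concrete, here is a minimal sketch of the knobs a Gemma 4 LoRA/QLoRA run would involve. The model name and the exact keyword names are illustrative assumptions, not confirmed Unsloth API; the structure mirrors the common PEFT-style LoRA setup (rank, alpha, target projection modules) that Unsloth training notebooks generally follow.

```python
# Sketch of load-time and LoRA settings for a hypothetical Gemma 4 run.
# Model name and kwarg names are assumptions, not confirmed Unsloth API.

def build_tuning_config(model_name: str, qlora: bool = True,
                        max_seq_length: int = 2048) -> dict:
    """Collect settings for a hypothetical LoRA/QLoRA run.

    qlora=True loads the frozen base weights in 4-bit (QLoRA);
    qlora=False keeps them in 16-bit for a plain BF16 LoRA run.
    """
    return {
        "load": {
            "model_name": model_name,          # assumed Gemma 4 checkpoint name
            "max_seq_length": max_seq_length,  # start short, scale up later
            "load_in_4bit": qlora,             # QLoRA quantizes the base weights
        },
        "lora": {
            "r": 16,            # adapter rank: higher = more capacity, more memory
            "lora_alpha": 16,   # scaling factor, commonly set equal to r
            "lora_dropout": 0.0,
            # attention + MLP projections are the usual LoRA targets
            "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj",
                               "gate_proj", "up_proj", "down_proj"],
        },
    }

cfg = build_tuning_config("gemma-4-e4b", qlora=True)
```

The point of the sketch is the split: load-time choices (quantization, context length) are fixed per run, while the LoRA block is what you iterate on between runs.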
Important caveats for larger Gemma 4 models
This is where support becomes more nuanced.
Unsloth's official Gemma 4 training docs note that:
- 26B A4B and 31B Colab notebooks need A100-class hardware
- for the 26B A4B MoE, LoRA is supported, but Unsloth recommends 16-bit / BF16 LoRA if memory allows
- for the 26B A4B, it is smart to start with shorter context lengths before scaling up
That means Unsloth supports Gemma 4, but not every Gemma 4 workflow is equally cheap or equally easy.
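The memory caveat can be made tangible with a back-of-the-envelope calculation. This is a rough rule of thumb, not an official requirement: it counts only the frozen base weights (0.5 bytes per parameter at 4-bit, 2 bytes at BF16) and ignores activations, optimizer state, and the KV cache, so real usage is always higher.

```python
# Back-of-the-envelope weight memory for BF16 LoRA vs 4-bit QLoRA.
# Rough estimate only: excludes activations, optimizer state, KV cache.

def weight_memory_gb(params_billions: float, four_bit: bool) -> float:
    bytes_per_param = 0.5 if four_bit else 2.0  # 4-bit vs BF16
    return params_billions * bytes_per_param    # 1e9 params * bytes ~= GB

# Illustrative sizes only, assuming a 26B-parameter model:
print(weight_memory_gb(26, four_bit=False))  # 52.0 GB of weights alone
print(weight_memory_gb(26, four_bit=True))   # 13.0 GB of weights alone
```

Even before training overhead, 52 GB of BF16 weights is why a 26B-class notebook lands in A100 territory, while the 4-bit path fits on much smaller cards.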
Which Gemma 4 model should you use with Unsloth?
For most people:
- choose E4B if you are validating workflow fit or want a smaller multimodal fine-tuning target
- choose E2B only when the hardware budget is extremely tight
- choose 26B A4B if you want the strongest practical local training target
- choose 31B only if you know you need the best quality and can afford the memory
That makes E4B and 26B A4B the most common starting points for Unsloth users.
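The decision rules above can be sketched as a small chooser. The VRAM thresholds are illustrative assumptions for this example, not official Unsloth or Gemma 4 requirements; the point is the ordering of the advice, not the exact numbers.

```python
# Map a rough VRAM budget to the guide's starting-point advice.
# Thresholds are illustrative assumptions, not official requirements.

def pick_gemma4_model(vram_gb: float) -> str:
    if vram_gb < 8:
        return "E2B"               # extremely tight hardware budget
    if vram_gb < 12:
        return "E4B"               # workflow validation / small multimodal target
    if vram_gb < 40:
        return "26B A4B (QLoRA)"   # strongest practical local training target
    return "31B"                   # best quality if you can afford the memory

print(pick_gemma4_model(6))    # E2B
print(pick_gemma4_model(10))   # E4B
print(pick_gemma4_model(24))   # 26B A4B (QLoRA)
print(pick_gemma4_model(80))   # 31B
```

In practice most users land in the middle two branches, which matches the guide's claim that E4B and 26B A4B are the common starting points.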
When Unsloth is the right path
Choose Unsloth if you want:
- fine-tuning, not just inference
- local workflow experimentation
- GGUF export after training
- faster iteration than a heavier research stack
Choose LM Studio instead if you just want a GUI for inference.
Choose llama.cpp instead if you just want a lean inference server or command-line runtime.
FAQ
Does Unsloth support Gemma 4 today?
Yes. Unsloth's official April 2026 docs cover local runs and fine-tuning for the Gemma 4 family.
Which Gemma 4 models work with Unsloth?
E2B, E4B, 26B A4B, and 31B are all covered.
Can Unsloth fine-tune Gemma 4 26B and 31B?
Yes, but the official docs make clear that larger-model notebooks move into A100-class territory.
Which Gemma 4 model should I start with in Unsloth?
Usually E4B for lighter workflows, or 26B A4B for the stronger practical path.
Related guides
Continue through the Gemma 4 cluster with the next guide that matches your current decision.

Does llama.cpp Support Gemma 4? GGUF Status, Fixes, and What Works
A practical answer to whether llama.cpp supports Gemma 4, with the official GGUF links, current support status, and what 'supported' really means.

Does LM Studio Support Gemma 4? Compatibility, Model List, and Requirements
A clear answer to whether LM Studio supports Gemma 4, with the supported model list, minimum memory, and practical setup expectations.

How to Fine-Tune Gemma 4 with Unsloth: Step-by-Step Guide
Use this step-by-step guide to fine-tune Gemma 4 with Unsloth, choose the right model for your hardware, and export the result for Ollama, llama.cpp, or LM Studio.
Still deciding what to read next?
Go back to the guide hub to browse model comparisons, setup walkthroughs, and hardware planning pages.
