Google launches FunctionGemma 270M edge AI model
PANews, December 22 - According to Google’s official blog, Google has released FunctionGemma, a version of Gemma 3 270M fine-tuned specifically for function calling and aimed at local/offline agent scenarios. Its features include unified chat and tool execution, support for custom fine-tuning (accuracy on Mobile Actions improved from 58% to 85%), a small footprint suited to edge devices (such as the NVIDIA Jetson Nano and mobile phones), and optimization for JSON and multilingual input. The official release provides download links on Hugging Face and Kaggle, along with guides for fine-tuning with Transformers, Unsloth, Keras, and NeMo, and for deployment with LiteRT-LM, vLLM, MLX, Llama.cpp, Ollama, Vertex AI, and LM Studio. In addition, multiple demos have been launched on Edge Gallery, together with datasets and Colab notebooks.
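For readers who want to try the model via the Hugging Face Transformers path mentioned above, here is a minimal sketch. The model identifier and the free-form tool-call prompt are assumptions for illustration only; the exact checkpoint name and the function-calling prompt format FunctionGemma expects are documented on the official Hugging Face and Kaggle model pages.

```python
# Minimal sketch: loading the model with Hugging Face Transformers.
# NOTE: "google/functiongemma-270m" is an assumed identifier; verify the exact
# model ID and the expected function-calling/chat format on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/functiongemma-270m"  # assumed, check Hugging Face/Kaggle
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt: describe an available tool and ask for a JSON-style tool call.
# The real schema is defined in the official model card; this is only a sketch.
prompt = (
    "You can call the tool set_alarm(time: str). "
    "User: wake me up at 7 am tomorrow.\n"
    "Tool call:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```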