The past few days have brought interesting developments in small language models that could expand applications in mobile computing and low-resource environments.
Here’s what caught my attention:
• Microsoft’s Phi-4 is now fully open source (MIT license), with further improvements from Unsloth AI. 🚀🔓 Blog: https://unsloth.ai/blog/phi4
• Kyutai Labs, based in Paris 🇫🇷, introduced Helium-1 Preview, a 2B-parameter multilingual base LLM designed for edge and mobile devices.
Model: https://huggingface.co/kyutai/helium-1-preview-2b
Blog: https://kyutai.org/2025/01/13/helium.html
• OpenBMB from China 🇨🇳 released MiniCPM-o 2.6, an 8B-parameter multimodal model that matches the capabilities of several larger models. Model: https://huggingface.co/openbmb/MiniCPM-o-2_6
• Moondream2 added gaze 👀 detection, with interesting applications in human-computer interaction and market research.
Blog: https://moondream.ai/blog/announcing-gaze-detection
• OuteTTS, a series of small text-to-speech models, expanded to support 6 languages and punctuation for more natural-sounding speech synthesis. 🗣️
Model: https://huggingface.co/OuteAI/OuteTTS-0.3-1B
These developments suggest continued progress in making language models more efficient and accessible, and we’re likely to see more of this in 2025.
Note: Views in this post are my own.