The past few days have brought interesting developments in small language models that could expand applications in mobile computing and low-resource environments.
Here’s what caught my attention:
• Microsoft’s Phi-4 was made fully open source (MIT license) and has since been improved by Unsloth AI. Blog: https://unsloth.ai/blog/phi4
• Kyutai Labs, based in Paris 🇫🇷, introduced Helium-1 Preview, a 2B-parameter multilingual base LLM designed for edge and mobile devices.
Model: https://huggingface.co/kyutai/helium-1-preview-2b
Blog: https://kyutai.org/2025/01/13/helium.html
• OpenBMB from China 🇨🇳 released MiniCPM-o 2.6, an 8B-parameter multimodal model that matches the capabilities of several larger models. Model: https://huggingface.co/openbmb/MiniCPM-o-2_6
• Moondream2 added gaze detection, with interesting applications in human-computer interaction and market research.
Blog: https://moondream.ai/blog/announcing-gaze-detection
• OuteTTS, a series of small text-to-speech models, expanded to support six languages and punctuation handling for more natural-sounding speech synthesis.
Model: https://huggingface.co/OuteAI/OuteTTS-0.3-1B
These developments suggest continued progress in making language models more efficient and accessible, and we’re likely to see more of this in 2025.
Note: Views on this post are my own opinion.