AI & ML interests
A new generation of foundation models from first principles.

Collections
- Instruct, Base, and Japanese LFM2.5-1.2B models.
- LFM2 is a new generation of hybrid models, designed for on-device deployment.
- End-to-end audio foundation model, designed for low latency and real-time conversations.
LFM2-VL is our first series of vision-language models, designed for on-device deployment. LFM2.5-VL-1.6B is available in several formats:
- LiquidAI/LFM2.5-VL-1.6B • Image-Text-to-Text • 1.6B params • 5.1k downloads • 163 likes (usage sketch below)
- LFM2.5-VL-1.6B WebGPU • in-browser vision-language inference with LFM2.5-VL-1.6B • 44 likes
- LiquidAI/LFM2.5-VL-1.6B-GGUF • Image-Text-to-Text • GGUF build • 8.46k downloads • 42 likes
- LiquidAI/LFM2.5-VL-1.6B-ONNX • Image-Text-to-Text • ONNX build • 1.04k downloads • 21 likes

Library of task-specific models (Liquid Nanos): https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices
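
LiquidAI/LFM2.5-VL-1.6B is published in standard Hugging Face format, so the quickest way to try it locally is through transformers. The snippet below is a minimal sketch, assuming the repo exposes the usual image-text-to-text interface (AutoProcessor plus AutoModelForImageTextToText) and a chat template that accepts interleaved image/text content; the image path and prompt are placeholders, and the model card is the authority on the exact input format.

```python
# Minimal sketch (not taken from the model card): querying LiquidAI/LFM2.5-VL-1.6B
# with Hugging Face transformers, assuming the standard image-text-to-text interface.
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "LiquidAI/LFM2.5-VL-1.6B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder: any local image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# Render the chat template to a prompt string, then tokenize text and image together.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

For on-device targets, the same model is also published as GGUF and ONNX exports (listed above) for llama.cpp-style and ONNX Runtime workflows, and the WebGPU demo runs it entirely in the browser.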