Liquid AI made headlines with the release of its Liquid Edge-AI Platform (LEAP) v0, which lets developers deploy AI directly on local devices such as smartphones, laptops, and cars without relying on cloud infrastructure. Alongside LEAP, the company introduced Apollo, an iOS-native app that provides private, secure, low-latency AI interactions directly on the device. The platform pairs a library of small language models (SLMs) with platform-agnostic tooling, and the company says integrating a model takes as little as 10 lines of code.
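To give a sense of what an integration of that size could look like, here is a minimal Swift sketch. The `EdgeSLM` protocol and its methods are hypothetical placeholders standing in for an on-device SLM interface, not the actual LEAP API.

```swift
import Foundation

// Hypothetical sketch: the general shape of a small on-device SLM integration.
// `EdgeSLM` and its members are illustrative placeholders, not the LEAP SDK.
protocol EdgeSLM {
    // Load a bundled small language model by identifier (e.g., an LFM2 variant).
    init(modelNamed name: String) throws

    // Run inference entirely on-device; no network round trip is involved.
    func generate(prompt: String, maxTokens: Int) async throws -> String
}

// App-side usage: the kind of "few lines of code" call the article describes.
func summarize(_ text: String, using model: some EdgeSLM) async throws -> String {
    try await model.generate(prompt: "Summarize briefly: \(text)", maxTokens: 128)
}
```

The point of the sketch is the shape of the workflow: load a packaged model once, then call it locally from app code, with privacy and latency handled by keeping inference on the device.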
A few weeks earlier, Liquid AI introduced its Liquid Foundation Models (LFM2), a series of generative AI models that it says are faster, more energy-efficient, and better optimized for resource-constrained scenarios than traditional transformer-based models. According to Ramin Hasani, co-founder and CEO of Liquid AI, developers have been frustrated by the complexity, feasibility limits, and privacy trade-offs of current edge AI solutions. LEAP aims to address these challenges with a deployment platform that is powerful, efficient, and private, designed to make edge AI accessible and easy to use. Apollo, the iOS-native app, lets users experience the new models firsthand.
Apollo and LEAP both integrate LFM2, enabling developers to build high-performance, edge-native AI applications. The platform is designed for AI novices and experienced developers alike, emphasizing ease of use and efficiency. LEAP is available now, and Apollo can be downloaded from the iOS App Store, with an Android version in the works.