Imagine a world where your devices not only see but truly understand what they’re looking at—whether it’s reading a document, tracking where someone’s gaze lands, or answering questions about a video.
DeepSeek-VL2 is a sophisticated vision-language model designed to address complex multimodal tasks with remarkable efficiency and precision. Built on a new mixture-of-experts (MoE) architecture, this ...
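For readers unfamiliar with the term, a mixture-of-experts layer replaces one large feed-forward block with several smaller "expert" blocks plus a router that sends each token to only a few of them, so most parameters stay idle for any given token. The sketch below is a generic top-k routed MoE layer in PyTorch; it is not DeepSeek-VL2's actual implementation, and the class name, dimensions, and expert count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    """Illustrative top-k routed mixture-of-experts feed-forward layer."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is an ordinary two-layer feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); flatten so routing happens per token.
        tokens = x.reshape(-1, x.shape[-1])
        gate_logits = self.router(tokens)                     # (n_tokens, n_experts)
        weights, indices = gate_logits.topk(self.k, dim=-1)   # keep the k best experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                  # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(x.shape)


# Route a dummy batch of token embeddings through the layer.
layer = TopKMoELayer(d_model=64, d_hidden=256)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

Because each token activates only k of the n experts, the total parameter count can grow without a proportional increase in per-token compute, which is the efficiency argument usually made for MoE-based models.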
Sarvam Vision has reported an accuracy score of 84.3% on olmOCR-Bench, an open-source OCR evaluation benchmark. Its creators ...
Hugging Face Inc. today open-sourced SmolVLM-256M, a new vision-language model with the lowest parameter count in its category. The model's small footprint allows it to run on devices such as ...
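As a rough illustration of how low the barrier to entry is for a model this size, the following sketch loads a SmolVLM-class checkpoint with the Hugging Face transformers library and runs a single image-description prompt. The repository id, image file name, and generation settings are assumptions for illustration, not details confirmed by the announcement above.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

# Assumed Hub repository id for the 256M instruct checkpoint.
model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

# One image plus a short text instruction, formatted with the chat template.
image = Image.open("photo.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image briefly."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

On a CPU-only machine this runs in plain float32; the small parameter count is what makes that practical compared with multi-billion-parameter VLMs.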
Foundation models have made great advances in robotics, enabling the creation of vision-language-action (VLA) models that generalize to objects, scenes, and tasks beyond their training data. However, ...
DeepSeek, the fast-growing Chinese AI company, is shaking up global technology yet again. Just as the rapid rise of the company's frontier AI models triggered a selloff of U.S. artificial intelligence ...
Sarvam AI's new document intelligence model surpasses the accuracy of Gemini 3 Pro, GPT 5.2, and other AI models when it comes ...
A research team has developed a chest X-ray vision-language foundation model, MaCo, reducing the dependency on annotations while improving both clinical efficiency and diagnostic accuracy. The study ...
As I highlighted in my last article, two decades after the DARPA Grand Challenge, the autonomous vehicle (AV) industry is still waiting for breakthroughs—particularly in addressing the “long tail ...
Indian start-up Sarvam AI's sovereign large language model (LLM) challenges larger global models in multilingual tasks, particularly in Indic languages.