- Granite Docling is a multimodal model for efficient document conversion. (7m · 10K+ · 2)
- Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging. (2m · 10K+ · 1)
- Safety reasoning models for policy-based text classification and foundational safety tasks. (6m · 10K+ · 2)
- EmbeddingGemma is a state-of-the-art text embedding model from Google DeepMind. (8m · 10K+ · 3)
- GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. (3m · 10K+ · 1)
- Nomic Embed Text v1 is an open-source, fully auditable text embedding model. (9m · 10K+ · 4)
- Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. (5m · 10K+ · 1)
- Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. (3m · 10K+ · 4)
- Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. (5m · 10K+ · 3)
- OpenAI’s open-weight models, designed for powerful reasoning and agentic tasks. (6m · 10K+ · 1)
- Multimodal AI model with 35B MoE architecture for coding agents, reasoning, and vision tasks. (2d · 10K+)
- 397B-parameter MoE multimodal LLM with 17B active params, 262K context, and 201 languages. (26d · 10K+ · 1)
- 24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use, and reduced repetition. (7m · 10K+ · 1)
- Designed for reasoning, agentic, and general capabilities, with versatile developer-friendly features. (8m · 10K+ · 2)
- Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. (5m · 10K+)
- SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for devices. (7m · 10K+ · 3)
- IBM's Granite 3.0 large language model (LLM), optimized for local deployment. (1y · 10K+ · 1)
- mxbai-embed-large-v1 is a top English embedding model by Mixedbread AI, well suited to RAG and more. (1y · 10K+ · 3)
- SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for devices. (6m · 9.7K)
- 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8). (2m · 9.5K · 3)
- Agentic coding LLM (24B) fine-tuned from Mistral-Small-3.1 with a 128K context window. (7m · 9.5K · 4)
- Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. (6m · 9.4K)
- FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. (4m · 8.3K · 2)
- 7B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization. (7m · 7.4K · 3)
- A fast, capable open-source visual language model that interprets images via text prompts. (7m · 6.9K · 2)
- Granite Embedding Multilingual is a 278-million-parameter, encoder-only, XLM-RoBERTa-style embedding model. (9m · 6.3K · 2)
- 32B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization. (7m · 5.9K · 1)