31 – 60 of 12,766,729 available results.

| Type  | Description | Updated | Pulls | Tags |
|-------|-------------|---------|-------|------|
| model | Granite Docling is a multimodal model for efficient document conversion. | 7m | 10K+ | 2 |
| model | Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging. | 2m | 10K+ | 1 |
| model | Safety reasoning models for policy-based text classification and foundational safety tasks. | 6m | 10K+ | 2 |
| model | Embedding Gemma is a state-of-the-art text embedding model from Google DeepMind. | 8m | 10K+ | 3 |
| model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 3m | 10K+ | 1 |
| model | Nomic Embed Text v1 is an open-source, fully auditable text embedding model. | 9m | 10K+ | 4 |
| model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval and clustering. | 5m | 10K+ | 1 |
| model | Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 3m | 10K+ | 4 |
| model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 5m | 10K+ | 3 |
| model | OpenAI's open-weight models designed for powerful reasoning and agentic tasks. | 6m | 10K+ | 1 |
| model | Mistral fine-tuned via NVIDIA NeMo for smoother enterprise use. | 1y | 10K+ | 7 |
| model | Multimodal AI model with 35B MoE architecture for coding agents, reasoning, and vision tasks. | 2d | 10K+ | — |
| model | 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages. | 26d | 10K+ | 1 |
| model | 24B multimodal instruction model by Mistral AI, tuned for accuracy, tool use, and fewer repeats. | 7m | 10K+ | 1 |
| model | Designed for reasoning, agentic, and general capabilities, with versatile developer-friendly features. | 8m | 10K+ | 2 |
| model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 5m | 10K+ | — |
| model | SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for devices. | 7m | 10K+ | 3 |
| image | IBM's Granite 3.0 large language model (LLM), optimized for local large language model operations. | 1y | 10K+ | 1 |
| model | mxbai-embed-large-v1 is a top English embedding model by Mixedbread AI, great for RAG and more. | 1y | 10K+ | 3 |
| model | SmolVLM: lightweight multimodal model for video, image, and text analysis, optimized for devices. | 6m | 9.7K | — |
| model | 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8). | 2m | 9.5K | 3 |
| model | Agentic coding LLM (24B) fine-tuned from Mistral-Small-3.1 with a 128K context window. | 7m | 9.5K | 4 |
| model | Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 6m | 9.4K | — |
| model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 4m | 8.3K | 2 |
| model | 7B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization. | 7m | 7.4K | 3 |
| model | Experimental Qwen variant: lean, fast, and a bit mysterious. | 12m | 7.0K | 3 |
| model | An open-source visual language model that interprets images via text prompts, fast and powerful. | 7m | 6.9K | 2 |
| image | 4-bit quantized version of the Granite-7b-lab model. | 1y | 6.4K | 5 |
| model | Granite Embedding Multilingual is a 278-million-parameter, encoder-only XLM-RoBERTa-style model. | 9m | 6.3K | 2 |
| model | 32B long-context instruct model with RL alignment, instruction following, tool use, and enterprise optimization. | 7m | 5.9K | 1 |