A large selection of online large models, compatible with the OpenAI API

All models

219 models · Updated 2025-01-10
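Every model below is served through the standard OpenAI chat-completions interface; only the model ID and the service's base URL change. The following is a minimal sketch using the official openai Python SDK, assuming a placeholder base URL and API key (substitute this service's actual endpoint and your own key):

```python
from openai import OpenAI

# Assumed placeholders: replace with this service's actual base URL and your API key.
client = OpenAI(
    base_url="https://example-gateway/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

# Any model ID from the catalog below can be passed as `model`,
# e.g. mistralai/mistral-7b-instruct.
response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",
    messages=[
        {"role": "user", "content": "Summarize Mistral 7B Instruct in one sentence."}
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```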
mistralai/mistral-7b-instruct-v0.2
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct, with the following changes: a 32k context window (vs. 8k in v0.1), a rope-theta of 1e6, and no sliding-window attention.
2023-12-28 32,768 text->text Mistral
mistralai/mistral-7b-instruct-v0.1
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
2023-09-28 4,096 text->text Mistral
mistralai/mistral-7b-instruct:nitro
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
2024-05-27 32,768 text->text Mistral
mistralai/mistral-7b-instruct:free
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
2024-05-27 8,192 text->text Mistral
Mistral: Mistral 7B Instruct
$0.0001/1k
$0.0002/1k
mistralai/mistral-7b-instruct
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
2024-05-27 32,768 text->text Mistral
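The two prices listed for each model appear to be per 1k prompt (input) tokens and per 1k completion (output) tokens, respectively; that interpretation is assumed in the rough cost estimate sketched below using the Mistral 7B Instruct rates above.

```python
# Rough per-request cost estimate, assuming the listed prices are USD
# per 1k prompt tokens and per 1k completion tokens.
PROMPT_PRICE_PER_1K = 0.0001      # Mistral 7B Instruct, input price
COMPLETION_PRICE_PER_1K = 0.0002  # Mistral 7B Instruct, output price

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single request in USD."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K + \
           (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K

# Example: 2,000 prompt tokens + 500 completion tokens ≈ $0.0003
print(f"${estimate_cost(2000, 500):.6f}")
```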
Mistral: Ministral 8B
$0.0004/1k
$0.0004/1k
mistralai/ministral-8b
Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k context length and excels in knowledge and reasoning tasks. It outperforms peers in the sub-10B category, making it perfect for low-latency, privacy-first applications.
2024-10-17 128,000 text->text Mistral
Mistral: Ministral 3B
$0.0002/1k
$0.0002/1k
mistralai/ministral-3b
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.
2024-10-17 128,000 text->text Mistral
Mistral: Codestral Mamba
$0.0010/1k
$0.0010/1k
mistralai/codestral-mamba
A 7.3B parameter Mamba-based model designed for code and reasoning tasks. It offers linear-time inference, allowing for theoretically infinite sequence lengths; a 256k token context window; quick responses, especially beneficial for code productivity; performance comparable to state-of-the-art transformer models on code and reasoning tasks; and availability under the Apache 2.0 license for free use, modification, and distribution.
2024-07-19 256,000 text->text Mistral
Mistral Tiny
$0.0010/1k
$0.0010/1k
mistralai/mistral-tiny
This model is currently powered by Mistral-7B-v0.2, and incorporates a “better” fine-tuning than Mistral 7B, inspired by community work. It’s best used for large batch processing tasks where cost is a significant factor but reasoning capabilities are not crucial.
2024-01-10 32,000 text->text Mistral
Mistral Small
$0.0008/1k
$0.0024/1k
mistralai/mistral-small
With 22 billion parameters, Mistral Small v24.09 offers a convenient mid-point between [Mistral NeMo 12B](/mistralai/mistral-nemo) and [Mistral Large 2](/mistralai/mistral-large), providing a cost-effective solution that can be deployed across various platforms and environments. It has better reasoning, exhibits more capabilities, can produce and reason about code, and is multilingual, supporting English, French, German, Italian, and Spanish.
2024-01-10 32,000 text->text Mistral
Mistral Nemo 12B Celeste
$0.0032/1k
$0.0048/1k
nothingiisreal/mn-celeste-12b
A specialized story writing and roleplaying model based on Mistral’s NeMo 12B Instruct. Fine-tuned on curated datasets including Reddit Writing Prompts and Opus Instruct 25K. This model excels at creative writing, offering improved NSFW capabilities, with smarter and more active narration. It demonstrates remarkable versatility in both SFW and NSFW scenarios, with strong Out of Character (OOC) steering capabilities, allowing fine-tuned control over narrative direction and character behavior. Check out the model’s HuggingFace page for details on what parameters and prompts work best!
2024-08-02 16,384 text->text Mistral
Mistral Medium
$0.011/1k
$0.032/1k
mistralai/mistral-medium
This is Mistral AI’s closed-source, medium-sized model. It’s powered by a closed-source prototype and excels at reasoning, code, JSON, chat, and more. In benchmarks, it is competitive with many flagship models from other companies.
2024-01-10 32,000 text->text Mistral