A massive catalog of online large language models, compatible with the OpenAI API

All models

219 models · Updated 2025-01-10
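Every model below is served through the same OpenAI-compatible chat completions endpoint, so one client covers all of them; only the model slug changes. A minimal sketch in Python, assuming a placeholder base URL and an API key stored in an environment variable (both are assumptions, not values from this catalog):

```python
# Minimal OpenAI-compatible call; base_url and the env var name are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # assumed gateway URL for this service
    api_key=os.environ["API_KEY"],      # assumed environment variable name
)

resp = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",  # any slug from the list below
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

To target a different entry, replace the model string with the slug shown under that entry (e.g. openchat/openchat-7b or mistralai/mistral-nemo).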
OpenHermes 2.5 Mistral 7B
Input: $0.0007/1k
Output: $0.0007/1k
teknium/openhermes-2.5-mistral-7b
A continuation of the OpenHermes 2 model, trained on additional code datasets. Perhaps the most interesting finding from training on a good ratio of code instruction data (estimated at around 7-14% of the total dataset) was that it boosted several non-code benchmarks, including TruthfulQA, AGIEval, and the GPT4All suite. It did, however, reduce the BigBench benchmark score, but the net gain overall is significant.
2023-11-20 4,096 text->text Mistral
openchat/openchat-7b:free
OpenChat 7B is a library of open-source language models, fine-tuned with “C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)” - a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels. For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B. For OpenChat fine-tuned on Llama 3 8B, check out OpenChat 8B. #open-source
2023-11-28 8,192 text->text Mistral
OpenChat 3.5 7B
Input: $0.0002/1k
Output: $0.0002/1k
openchat/openchat-7b
OpenChat 7B is a library of open-source language models, fine-tuned with “C-RLFT (Conditioned Reinforcement Learning Fine-Tuning)” - a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels. For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B. For OpenChat fine-tuned on Llama 3 8B, check out OpenChat 8B. #open-source
2023-11-28 8,192 text->text Mistral
Nous: Hermes 2 Mixtral 8x7B DPO
Input: $0.0022/1k
Output: $0.0022/1k
nousresearch/nous-hermes-2-mixtral-8x7b-dpo
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the Mixtral 8x7B MoE LLM. The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks. #moe
2024-01-16 32,768 text->text Mistral
Mistral: Pixtral Large 2411
Input: $0.0080/1k
Output: $0.024/1k
mistralai/pixtral-large-2411
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of Mistral Large 2. The model is able to understand documents, charts and natural images. The model is available under the Mistral Research License (MRL) for research and educational use, and the Mistral Commercial License for experimentation, testing, and production for commercial purposes.
2024-11-19 128,000 text+image->text Mistral
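Since Pixtral Large accepts text+image input, images go through the standard OpenAI-style image_url content part. A minimal sketch; the base URL, API key, and image URL are placeholders, not values from this catalog:

```python
# Text + image request to Pixtral Large via the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="sk-...")  # placeholders

resp = client.chat.completions.create(
    model="mistralai/pixtral-large-2411",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this chart."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```

The same request shape applies to the other text+image->text entry below (mistralai/pixtral-12b).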
Mistral: Pixtral 12B
Input: $0.0004/1k
Output: $0.0004/1k
mistralai/pixtral-12b
The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836.
2024-09-10 4,096 text+image->text Mistral
mistralai/mixtral-8x7b-instruct:nitro
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters. Instruct model fine-tuned by Mistral. #moe
2023-12-10 32,768 text->text Mistral
Mistral: Mixtral 8x7B Instruct
Input: $0.0010/1k
Output: $0.0010/1k
mistralai/mixtral-8x7b-instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters. Instruct model fine-tuned by Mistral. #moe
2023-12-10 32,768 text->text Mistral
Mistral: Mixtral 8x7B (base)
Input: $0.0022/1k
Output: $0.0022/1k
mistralai/mixtral-8x7b
Mixtral 8x7B is a pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructions) - see Mixtral 8x7B Instruct for an instruct-tuned model. #moe
2023-12-10 32,768 text->text Mistral
Mistral: Mixtral 8x22B Instruct
Input: $0.0036/1k
Output: $0.0036/1k
mistralai/mixtral-8x22b-instruct
Mistral’s official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:
- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish
See benchmarks on the launch announcement here. #moe
2024-04-17 65,536 text->text Mistral
Mistral: Mistral Nemo
Input: $0.0001/1k
Output: $0.0003/1k
mistralai/mistral-nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
2024-07-19 131,072 text->text Mistral
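Mistral Nemo lists function calling, which maps to the tools parameter of the OpenAI-style API. A minimal sketch with a hypothetical get_weather tool; the base URL and key are placeholders, and whether tool calls actually work depends on the provider serving the model:

```python
# Function calling with Mistral Nemo; the tool definition is illustrative only.
import json
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="sk-...")  # placeholders

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not part of this service
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistralai/mistral-nemo",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call the tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(msg.content)
```

The same pattern applies to mistralai/mistral-7b-instruct-v0.3 further down, with the same provider-dependent caveat.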
mistralai/mistral-7b-instruct-v0.3
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct v0.2, with the following changes:
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
NOTE: Support for function calling depends on the provider.
2024-05-27 32,768 text->text Mistral