A large catalog of hosted LLMs, compatible with the OpenAI API

Nous: Hermes 2 Mixtral 8x7B DPO

Prompt: $0.0022/1K tokens
Completion: $0.0022/1K tokens
nousresearch/nous-hermes-2-mixtral-8x7b-dpo
Context length: 32,768 · Modality: text->text · Tokenizer: Mistral · Updated: 2024-01-16
Nous Hermes 2 Mixtral 8x7B DPO is the flagship Nous Research model trained on top of the Mixtral 8x7B MoE LLM. It was trained on over 1,000,000 entries of primarily GPT-4-generated data, along with other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.

Tags: moe
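Because the service exposes an OpenAI-compatible API, the model can be called with the standard `openai` Python client. A minimal sketch, assuming a placeholder gateway URL and API key; only the model ID comes from this page:

```python
from openai import OpenAI

# The base URL and API key are placeholders for your gateway,
# not values taken from this page.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="nousresearch/nous-hermes-2-mixtral-8x7b-dpo",  # model ID from this card
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain mixture-of-experts in one sentence."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```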

Model Parameters

Architecture

Modality: text->text
Tokenizer: Mistral
Instruct type: chatml (see the prompt sketch below)
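The ChatML instruct type means raw prompts are framed as `<|im_start|>role ... <|im_end|>` turns (chat-completions endpoints apply this template for you). A minimal sketch of the layout, with illustrative message contents, built as a Python string:

```python
# ChatML turn layout used by this model; the contents are illustrative.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "What is a mixture-of-experts model?<|im_end|>\n"
    "<|im_start|>assistant\n"  # generation continues from here
)
```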

Limits

Context length: 32,768 tokens
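The 32,768-token window is shared between the prompt and the completion, so it can help to count prompt tokens before sending a request. A minimal sketch using a Hugging Face tokenizer; the repo name and the `fits_in_context` helper are illustrative assumptions, not part of this page:

```python
from transformers import AutoTokenizer

CONTEXT_LENGTH = 32_768  # from this card; shared by prompt and completion

# Assumed upstream weights repo; substitute your own copy if needed.
tokenizer = AutoTokenizer.from_pretrained(
    "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO"
)

def fits_in_context(prompt: str, max_new_tokens: int = 256) -> bool:
    """Return True if the prompt plus the reply budget stays within the window."""
    n_prompt = len(tokenizer.encode(prompt))
    return n_prompt + max_new_tokens <= CONTEXT_LENGTH

print(fits_in_context("Hello, Hermes!"))
```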