A huge range of online large models, compatible with the OpenAI API

OpenChat 3.5 7B

Prompt: $0.0002/1k
Completion: $0.0002/1k
openchat/openchat-7b
Context length: 8,192 · text->text · Mistral · Updated 2023-11-28
OpenChat is a library of open-source language models fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. The models are trained on mixed-quality data without preference labels. For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B. For OpenChat fine-tuned on Llama 8B, check out OpenChat 8B.

Model parameters

Architecture

Modality: text->text
Tokenizer: Mistral
Instruct type: openchat

Limits

Context length: 8,192
Max response length: 4,096
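Since the service advertises OpenAI API compatibility, a request to this model can be built as a standard chat-completions payload. A minimal sketch follows; the base URL is a placeholder (the real endpoint depends on your provider), and the clamping of `max_tokens` to the 4,096 response limit above is an illustrative convention, not part of the API itself. No network call is made here.

```python
import json

# Placeholder endpoint -- substitute your provider's actual base URL (assumption).
BASE_URL = "https://api.example.com/v1"
MODEL_ID = "openchat/openchat-7b"
MAX_RESPONSE_TOKENS = 4096  # model limit documented above

def build_chat_request(messages, max_tokens=512):
    """Build an OpenAI-compatible /chat/completions payload for this model."""
    # Clamp to the model's documented maximum response length.
    max_tokens = min(max_tokens, MAX_RESPONSE_TOKENS)
    return {
        "model": MODEL_ID,
        "messages": messages,
        "max_tokens": max_tokens,
    }

# Example: a request asking for more tokens than the model allows gets clamped.
payload = build_chat_request(
    [{"role": "user", "content": "Hello"}],
    max_tokens=8000,
)
print(json.dumps(payload, ensure_ascii=False))
```

The resulting JSON can be POSTed to `{BASE_URL}/chat/completions` with an `Authorization: Bearer <key>` header, exactly as with the OpenAI API.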