A large catalog of hosted models, compatible with the OpenAI API

All models

219 models · Updated 2025-01-10
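Since every model listed here sits behind an OpenAI-compatible endpoint, a request is just a standard chat-completions call carrying the catalog's model ID. A minimal sketch of assembling such a request (the base URL and API key below are placeholders, not taken from this page):

```python
import json

# Placeholder endpoint -- substitute the service's real base URL.
BASE_URL = "https://api.example.com/v1"

def build_chat_request(model: str, prompt: str, api_key: str):
    """Assemble an OpenAI-style chat-completions request for any
    model ID from the catalog (e.g. "cohere/command")."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_chat_request("cohere/command", "Hello!", "sk-...")
print(url)
print(body)
```

The same request shape works for every entry below; only the `model` string changes.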
Cohere: Command
$0.0038/1k
$0.0076/1k
cohere/command
Command is an instruction-following conversational model that performs language tasks with high quality, more reliably and with a longer context than our base generative models. Use of this model is subject to Cohere’s Acceptable Use Policy.
2024-03-14 4,096 text->text Cohere
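The two prices on each entry are per 1,000 tokens, quoted separately for input (prompt) and output (completion). Estimating a request's cost from token counts is then simple arithmetic; the rates below are the Cohere: Command figures from this listing:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_rate_per_1k: float, output_rate_per_1k: float) -> float:
    """Dollar cost of one request, given per-1k-token rates."""
    cost = (prompt_tokens / 1000) * input_rate_per_1k \
         + (completion_tokens / 1000) * output_rate_per_1k
    return round(cost, 6)

# Cohere: Command -- $0.0038/1k input, $0.0076/1k output (from the row above)
print(estimate_cost(1200, 400, 0.0038, 0.0076))
```

In practice the token counts come from the `usage` field of the API response rather than being guessed up front.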
openrouter/auto
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit Activity, or read the model attribute of the response. Your response will be priced at the same rate as the routed model. The meta-model is powered by Not Diamond. Learn more in our docs. Requests will be routed to the following models:
- openai/gpt-4o-2024-08-06
- openai/gpt-4o-2024-05-13
- openai/gpt-4o-mini-2024-07-18
- openai/chatgpt-4o-latest
- openai/o1-preview-2024-09-12
- openai/o1-mini-2024-09-12
- anthropic/claude-3.5-sonnet
- anthropic/claude-3.5-haiku
- anthropic/claude-3-opus
- anthropic/claude-2.1
- google/gemini-pro-1.5
- google/gemini-flash-1.5
- mistralai/mistral-large-2407
- mistralai/mistral-nemo
- meta-llama/llama-3.1-70b-instruct
- meta-llama/llama-3.1-405b-instruct
- mistralai/mixtral-8x22b-instruct
- cohere/command-r-plus
- cohere/command-r
2023-11-08 2,000,000 text->text Router
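As the description notes, the router reports which candidate actually served the request in the response's `model` attribute, and bills at that model's rate. A sketch of reading it from a parsed chat-completions response (the JSON here is a hand-written stand-in, not real API output):

```python
import json

# Hand-written stand-in for a chat-completions response body;
# the "model" field reports which model the router actually used.
raw = json.dumps({
    "id": "gen-123",
    "model": "anthropic/claude-3.5-sonnet",
    "choices": [{"message": {"role": "assistant", "content": "Hi!"}}],
})

response = json.loads(raw)
routed_model = response["model"]
print(routed_model)  # which of the candidate models handled the request
```

Logging this field per request is an easy way to audit both routing behavior and expected billing.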
microsoft/phi-3.5-mini-128k-instruct
Phi-3.5 models are lightweight, state-of-the-art open models. They were trained on the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data, with a focus on high-quality, reasoning-dense properties. Phi-3.5 Mini uses 3.8B parameters and is a dense decoder-only transformer model using the same tokenizer as Phi-3 Mini. The models underwent a rigorous enhancement process incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3.5 models showed robust, state-of-the-art performance among models with fewer than 13 billion parameters.
2024-08-21 128,000 text->text Other
microsoft/phi-3-mini-128k-instruct:free
Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing. At time of release, Phi-3 Mini demonstrated state-of-the-art performance among lightweight models. This model is static, trained on an offline dataset with an October 2023 cutoff date.
2024-05-26 8,192 text->text Other
microsoft/phi-3-mini-128k-instruct
Phi-3 Mini is a powerful 3.8B parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing. At time of release, Phi-3 Mini demonstrated state-of-the-art performance among lightweight models. This model is static, trained on an offline dataset with an October 2023 cutoff date.
2024-05-26 128,000 text->text Other
microsoft/phi-3-medium-128k-instruct:free
Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing. At time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. In the MMLU-Pro eval, the model even comes close to a Llama3 70B level of performance. For 4k context length, try Phi-3 Medium 4K.
2024-05-24 8,192 text->text Other
microsoft/phi-3-medium-128k-instruct
Phi-3 128K Medium is a powerful 14-billion parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels in tasks involving common sense, mathematics, logical reasoning, and code processing. At time of release, Phi-3 Medium demonstrated state-of-the-art performance among lightweight models. In the MMLU-Pro eval, the model even comes close to a Llama3 70B level of performance. For 4k context length, try Phi-3 Medium 4K.
2024-05-24 128,000 text->text Other
Microsoft: Phi 4
$0.0003/1k
$0.0006/1k
microsoft/phi-4
Microsoft Research Phi-4 is designed to perform well in complex reasoning tasks and can operate efficiently in situations with limited memory or where quick responses are needed. At 14 billion parameters, it was trained on a mix of high-quality synthetic datasets, data from curated websites, and academic materials. It has undergone careful improvement to follow instructions accurately and maintain strong safety standards. It works best with English language inputs. For more information, please see the Phi-4 Technical Report.
2025-01-10 16,384 text->text Other
Liquid: LFM 40B MoE
$0.0006/1k
$0.0006/1k
liquid/lfm-40b
Liquid’s 40.3B Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems. LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals. See the launch announcement for benchmarks and more info.
2024-09-30 66,000 text->text Other
inflection/inflection-3-productivity
Inflection 3 Productivity is optimized for following instructions. It is better suited to tasks requiring JSON output or precise adherence to provided guidelines. It has access to recent news. For emotional intelligence similar to Pi, see Inflection 3 Pi. See Inflection's announcement for more details.
2024-10-11 8,000 text->text Other
Inflection: Inflection 3 Pi
$0.010/1k
$0.040/1k
inflection/inflection-3-pi
Inflection 3 Pi powers Inflection's Pi chatbot, including its backstory, emotional intelligence, productivity, and safety. It has access to recent news and excels in scenarios like customer support and roleplay. Pi has been trained to mirror your tone and style: if you use more emojis, so will Pi! Try experimenting with various prompts and conversation styles.
2024-10-11 8,000 text->text Other
Databricks: DBRX 132B Instruct
$0.0043/1k
$0.0043/1k
databricks/dbrx-instruct
DBRX is a new open-source large language model developed by Databricks. At 132B parameters, it outperforms existing open-source LLMs like Llama 2 70B and Mixtral-8x7B on standard industry benchmarks for language understanding, programming, math, and logic. It uses a fine-grained mixture-of-experts (MoE) architecture; 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. See the launch announcement and benchmark results here.
2024-03-29 32,768 text->text Other