FreeChat
Open main menu
ChatGPT知识
搜索
搜索
关闭
Previous
Previous
Next
Next
OpenAI-ChatGPT最新官方接口《聊天交互多轮对话》全网最详细中英文实用指南和教程,助你零基础快速轻松掌握全新技术(二)(附源码)
sockstack
/
158
/
2023-11-07 12:54:49
<p><span style="color: red; font-size: 18px">ChatGPT 可用网址,仅供交流学习使用,如对您有所帮助,请收藏并推荐给需要的朋友。</span><br><a href="https://ckai.xyz/?sockstack§ion=detail" target="__blank">https://ckai.xyz</a><br><br></p> <article class="baidu_pl"><div id="article_content" class="article_content clearfix"> <link rel="stylesheet" href="https://csdnimg.cn/release/blogv2/dist/mdeditor/css/editerView/kdoc_html_views-1a98987dfd.css"> <link rel="stylesheet" href="https://csdnimg.cn/release/blogv2/dist/mdeditor/css/editerView/ck_htmledit_views-25cebea3f9.css"> <div id="content_views" class="markdown_views prism-atom-one-dark"> <svg xmlns="http://www.w3.org/2000/svg" style="display: none;"><path stroke-linecap="round" d="M5,0 0,2.5 5,5z" id="raphael-marker-block" style="-webkit-tap-highlight-color: rgba(0, 0, 0, 0);"></path></svg><p></p> <div class="toc"> <h3>目录</h3> <ul> <li>Chat completions <sup>Beta</sup> 聊天交互</li> <li>前言</li> <li>Introduction 导言</li> <li><ul> <li>Response format 提示格式</li> <li>Managing tokens</li> <li><ul><li>Counting tokens for chat API calls 为聊天API调用标记计数</li></ul></li> </ul></li> <li>Instructing chat models 指导聊天模型</li> <li>Chat vs Completions 聊天与完成</li> <li>FAQ 问与答</li> <li>其它资料下载</li> </ul> </div> <p></p> <p><img referrerpolicy="no-referrer" src="https://img-blog.csdnimg.cn/d1b04b136a2c432ab78a578e861b4fa2.jpeg#pic_center" alt="在这里插入图片描述"></p> <h1> <a id="Chat_completions_supBetasup___6"></a>Chat completions <sup>Beta</sup> 聊天交互</h1> <p>Using the OpenAI Chat API, you can build your own applications with <code>gpt-3.5-turbo</code> and <code>gpt-4</code> to do things like:<br> 使用OpenAI Chat API,您可以使用 <code>gpt-3.5-turbo</code> 和<code>gpt-4</code> 构建自己的应用程序,以执行以下操作:</p> <ul> <li>Draft an email or other piece of writing<br> 起草一封电子邮件或其他书面材料</li> <li>Write Python code 编写Python代码</li> <li>Answer questions about a set of documents<br> 回答有关一组文档的问题</li> <li>Create conversational agents 创建会话代理</li> <li>Give your software a natural language interface<br> 给予你的软件一个自然语言界面</li> <li>Tutor in a range of subjects<br> 一系列科目的导师</li> <li>Translate languages 翻译语言</li> <li>Simulate characters for video games and much more<br> 模拟视频游戏的角色等等</li> </ul> <p>This guide explains how to make an API call for chat-based language models and shares tips for getting good results. You can also experiment with the new chat format in the OpenAI Playground.<br> 本指南解释了如何为基于聊天的语言模型进行API调用,并分享了获得良好结果的提示。您还可以在OpenAI Playground中尝试新的聊天格式。</p> <h1> <a id="_26"></a>前言</h1> <p>ChatGPT的<strong>聊天交互</strong>是与用户进行交流并为他们提供创新体验的最强大工具。一句话可以概况:“将聊天交互发挥到极致,定制专属用户体验,以此解决用户痛点,是最佳创新之道。”它可以帮助用户快速定位问题并获得正确的答案,提升用户体验的同时,也能提高工作效率、减少耗时。此外,ChatGPT还可以根据用户特征和需求,提供个性化的服务,让使用者在交互中感受到独一无二的体验。</p> <h1> <a id="Introduction__32"></a>Introduction 导言</h1> <p>Chat models take a series of messages as input, and return a model-generated message as output.<br> 聊天模型将一系列消息作为输入,并返回模型生成的消息作为输出。</p> <p>Although the chat format is designed to make multi-turn conversations easy, it’s just as useful for single-turn tasks without any conversations (such as those previously served by instruction following models like <code>text-davinci-003</code>).<br> 虽然聊天格式旨在使多轮对话变得容易,但它对于没有任何对话的单轮任务同样有用(例如以前由指令跟随模型(如 <code>text-davinci-003</code> )提供的任务)。</p> <p>An example API call looks as follows:<br> 示例API调用如下所示:</p> <pre><code class="prism language-python"><span class="token comment"># Note: you need to be using OpenAI Python v0.27.0 for the code below to work </span> <span class="token comment"># 注意:下面的代码需要使用OpenAI Python v0.27.0才能工作</span> <span class="token keyword">import</span> openaiopenai<span class="token punctuation">.</span>ChatCompletion<span class="token punctuation">.</span>create<span class="token punctuation">(</span>model<span class="token operator">=</span><span class="token string">"gpt-3.5-turbo"</span><span class="token punctuation">,</span>messages<span class="token operator">=</span><span class="token punctuation">[</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"You are a helpful assistant."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"user"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"Who won the world series in 2020?"</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"assistant"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"The Los Angeles Dodgers won the World Series in 2020."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"user"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"Where was it played?"</span><span class="token punctuation">}</span><span class="token punctuation">]</span> <span class="token punctuation">)</span> </code></pre> <p>The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either “system”, “user”, or “assistant”) and content (the content of the message). Conversations can be as short as 1 message or fill many pages.<br> 主输入是messages参数。消息必须是消息对象的数组,其中每个对象都有一个角色(“系统”、“用户”或“助理”)和内容(消息的内容)。对话可以短至1条消息或填写许多页面。</p> <p>Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.<br> 通常,首先使用系统消息格式化对话,然后交替使用用户和助理消息。</p> <p>The system message helps set the behavior of the assistant. In the example above, the assistant was instructed with “You are a helpful assistant.”<br> 系统消息有助于设置助手的行为。在上面的示例中,助理被指示“你是一个有用的助理。”</p> <blockquote> <p>gpt-3.5-turbo-0301 does not always pay strong attention to system messages.Future models will be trained to pay stronger attention to system messages.<br> GPT-3.5-turbo-0301并不总是特别关注系统消息。未来的模型将被训练为更加关注系统消息。</p> </blockquote> <p>The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.<br> 用户消息帮助指示助理。它们可以由应用程序的最终用户生成,也可以由开发人员设置为指令。</p> <p>The assistant messages help store prior responses. They can also be written by a developer to help give examples of desired behavior.<br> 助理消息帮助存储先前的响应。它们也可以由开发人员编写,以帮助给予所需行为的示例。</p> <p>Including the conversation history helps when user instructions refer to prior messages. In the example above, the user’s final question of “Where was it played?” only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied via the conversation. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way.<br> 包括对话历史有助于用户指示引用先前的消息。在上面的示例中,用户的最终问题“在哪里播放的?“只有在之前关于2020年世界大赛的信息中才有意义。因为模型没有过去请求的记忆,所有相关信息必须通过会话提供。如果一个对话不能满足模型的标记限制,它将需要以某种方式缩短。</p> <h2> <a id="Response_format__78"></a>Response format 提示格式</h2> <p>An example API response looks as follows:<br> API响应示例如下所示:</p> <pre><code class="prism language-yaml"><span class="token punctuation">{<!-- --></span><span class="token key atrule">'id'</span><span class="token punctuation">:</span> <span class="token string">'chatcmpl-6p9XYPYSTTRi0xEviKjjilqrWU2Ve'</span><span class="token punctuation">,</span><span class="token key atrule">'object'</span><span class="token punctuation">:</span> <span class="token string">'chat.completion'</span><span class="token punctuation">,</span><span class="token key atrule">'created'</span><span class="token punctuation">:</span> <span class="token number">1677649420</span><span class="token punctuation">,</span><span class="token key atrule">'model'</span><span class="token punctuation">:</span> <span class="token string">'gpt-3.5-turbo'</span><span class="token punctuation">,</span><span class="token key atrule">'usage'</span><span class="token punctuation">:</span> <span class="token punctuation">{<!-- --></span><span class="token key atrule">'prompt_tokens'</span><span class="token punctuation">:</span> <span class="token number">56</span><span class="token punctuation">,</span> <span class="token key atrule">'completion_tokens'</span><span class="token punctuation">:</span> <span class="token number">31</span><span class="token punctuation">,</span> <span class="token key atrule">'total_tokens'</span><span class="token punctuation">:</span> <span class="token number">87</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token key atrule">'choices'</span><span class="token punctuation">:</span> <span class="token punctuation">[</span><span class="token punctuation">{<!-- --></span><span class="token key atrule">'message'</span><span class="token punctuation">:</span> <span class="token punctuation">{<!-- --></span><span class="token key atrule">'role'</span><span class="token punctuation">:</span> <span class="token string">'assistant'</span><span class="token punctuation">,</span><span class="token key atrule">'content'</span><span class="token punctuation">:</span> <span class="token string">'The 2020 World Series was played in Arlington, Texas at the Globe Life Field, which was the new home stadium for the Texas Rangers.'</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token key atrule">'finish_reason'</span><span class="token punctuation">:</span> <span class="token string">'stop'</span><span class="token punctuation">,</span><span class="token key atrule">'index'</span><span class="token punctuation">:</span> <span class="token number">0</span><span class="token punctuation">}</span><span class="token punctuation">]</span> <span class="token punctuation">}</span> </code></pre> <p>In Python, the assistant’s reply can be extracted with <code>response['choices'][0]['message']['content']</code>.<br> 在Python中,助手的回复可以用 <code>response['choices'][0]['message']['content']</code> 提取。</p> <p>Every response will include a <code>finish_reason</code>. The possible values for <code>finish_reason</code> are:<br> 每个响应都将包含一个 <code>finish_reason</code> 。 <code>finish_reason</code> 的可能值为:</p> <ul> <li> <code>stop</code>: API returned complete model output<br> <code>stop</code> :API返回完整模型输出</li> <li> <code>length</code>: Incomplete model output due to max_tokens parameter or token limit<br> <code>length</code> :由于 max_tokens 参数或标记限制,模型输出不完整</li> <li> <code>content_filter</code>: Omitted content due to a flag from our content filters<br> <code>content_filter</code> :由于我们的内容过滤器中的标记而忽略了内容</li> <li> <code>null</code>: API response still in progress or incomplete<br> <code>null</code> :API响应仍在进行中或未完成</li> </ul> <h2> <a id="Managing_tokens_116"></a>Managing tokens</h2> <p>Language models read text in chunks called tokens. In English, a token can be as short as one character or as long as one word (e.g., <code>a</code> or<code>apple</code>), and in some languages tokens can be even shorter than one character or even longer than one word.<br> 语言模型以称为标记的块读取文本。在英语中,标记可以短至一个字符或长至一个单词(例如, <code>a</code> 或 <code>apple</code> ),并且在一些语言中,标记甚至可以比一个字符更短或者甚至比一个单词更长。</p> <p>For example, the string <code>"ChatGPT is great!"</code> is encoded into six tokens: <code>["Chat", "G", "PT", " is", " great", "!"]</code>.<br> 例如,字符串 <code>"ChatGPT is great!"</code> 被编码为六个标记: <code>["Chat", "G", "PT", " is", " great", "!"]</code>。</p> <p>The total number of tokens in an API call affects:<br> API调用中的标记总数影响:</p> <ul> <li>How much your API call costs, as you pay per token<br> 您的API调用成本是多少,按每个标记标记支付</li> <li>How long your API call takes, as writing more tokens takes more time<br> 您的API调用需要多长时间,因为编写更多标记需要更多时间</li> <li>Whether your API call works at all, as total tokens must be below the model’s maximum limit (4096 tokens for <code>gpt-3.5-turbo-0301</code>)<br> 您的API调用是否有效,会受到标记总数必须低于模型的最大限制( <code>gpt-3.5-turbo-0301</code> 为4096个标记)</li> </ul> <p>Both input and output tokens count toward these quantities. For example, if your API call used 10 tokens in the message input and you received 20 tokens in the message output, you would be billed for 30 tokens.<br> 输入和输出标记都计入这些数量。例如,如果您的API调用在消息输入中使用了10个标记,而您在消息输出中收到了20个标记,则您将收取30个标记的费用。</p> <p>To see how many tokens are used by an API call, check the <code>usage</code> field in the API response (e.g., <code>response['usage']['total_tokens']</code>).<br> 要查看API调用使用了多少标记,请检查API响应中的 <code>usage</code> 字段(例如,<code>response['usage']['total_tokens']</code>)。</p> <p>Chat models like <code>gpt-3.5-turbo</code> and <code>gpt-4</code> use tokens in the same way as other models, but because of their message-based formatting, it’s more difficult to count how many tokens will be used by a conversation.<br> 像 <code>gpt-3.5-turbo</code> 和 <code>gpt-4</code> 这样的聊天模型使用标记的方式与其他模型相同,但由于它们基于消息的格式,因此更难以计算会话将使用多少标记。</p> <h3> <a id="Counting_tokens_for_chat_API_calls_API_142"></a>Counting tokens for chat API calls 为聊天API调用标记计数</h3> <p>Below is an example function for counting tokens for messages passed to gpt-3.5-turbo-0301.<br> 下面是一个示例函数,用于对传递到gpt-3.5-turbo-0301的消息的标记进行计数。</p> <p>The exact way that messages are converted into tokens may change from model to model. So when future model versions are released, the answers returned by this function may be only approximate. The ChatML documentation explains how messages are converted into tokens by the OpenAI API, and may be useful for writing your own function.<br> 将消息转换为标记的确切方式可能因模型而异。因此,当未来的模型版本发布时,此函数返回的答案可能只是近似值。ChatML文档解释了OpenAI API如何将消息转换为标记,并且可能对编写您自己的函数很有用。</p> <pre><code class="prism language-python"><span class="token keyword">def</span> <span class="token function">num_tokens_from_messages</span><span class="token punctuation">(</span>messages<span class="token punctuation">,</span> model<span class="token operator">=</span><span class="token string">"gpt-3.5-turbo-0301"</span><span class="token punctuation">)</span><span class="token punctuation">:</span><span class="token triple-quoted-string string">"""Returns the number of tokens used by a list of messages."""</span><span class="token keyword">try</span><span class="token punctuation">:</span>encoding <span class="token operator">=</span> tiktoken<span class="token punctuation">.</span>encoding_for_model<span class="token punctuation">(</span>model<span class="token punctuation">)</span><span class="token keyword">except</span> KeyError<span class="token punctuation">:</span>encoding <span class="token operator">=</span> tiktoken<span class="token punctuation">.</span>get_encoding<span class="token punctuation">(</span><span class="token string">"cl100k_base"</span><span class="token punctuation">)</span><span class="token keyword">if</span> model <span class="token operator">==</span> <span class="token string">"gpt-3.5-turbo-0301"</span><span class="token punctuation">:</span> <span class="token comment"># note: future models may deviate from this</span>num_tokens <span class="token operator">=</span> <span class="token number">0</span><span class="token keyword">for</span> message <span class="token keyword">in</span> messages<span class="token punctuation">:</span>num_tokens <span class="token operator">+=</span> <span class="token number">4</span> <span class="token comment"># every message follows <im_start>{role/name}\n{content}<im_end>\n</span><span class="token keyword">for</span> key<span class="token punctuation">,</span> value <span class="token keyword">in</span> message<span class="token punctuation">.</span>items<span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">:</span>num_tokens <span class="token operator">+=</span> <span class="token builtin">len</span><span class="token punctuation">(</span>encoding<span class="token punctuation">.</span>encode<span class="token punctuation">(</span>value<span class="token punctuation">)</span><span class="token punctuation">)</span><span class="token keyword">if</span> key <span class="token operator">==</span> <span class="token string">"name"</span><span class="token punctuation">:</span> <span class="token comment"># if there's a name, the role is omitted</span>num_tokens <span class="token operator">+=</span> <span class="token operator">-</span><span class="token number">1</span> <span class="token comment"># role is always required and always 1 token</span>num_tokens <span class="token operator">+=</span> <span class="token number">2</span> <span class="token comment"># every reply is primed with <im_start>assistant</span><span class="token keyword">return</span> num_tokens<span class="token keyword">else</span><span class="token punctuation">:</span><span class="token keyword">raise</span> NotImplementedError<span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f"""num_tokens_from_messages() is not presently implemented for model </span><span class="token interpolation"><span class="token punctuation">{<!-- --></span>model<span class="token punctuation">}</span></span><span class="token string">.See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""</span></span><span class="token punctuation">)</span> </code></pre> <p>Next, create a message and pass it to the function defined above to see the token count, this should match the value returned by the API usage parameter:<br> 接下来,创建一条消息并将其传递给上面定义的函数以查看标记计数,这应该与API使用参数返回的值匹配:</p> <pre><code class="prism language-python">messages <span class="token operator">=</span> <span class="token punctuation">[</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"You are a helpful, pattern-following assistant that translates corporate jargon into plain English."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token string">"name"</span><span class="token punctuation">:</span><span class="token string">"example_user"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"New synergies will help drive top-line growth."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token string">"name"</span><span class="token punctuation">:</span> <span class="token string">"example_assistant"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"Things working well together will increase revenue."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token string">"name"</span><span class="token punctuation">:</span><span class="token string">"example_user"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token string">"name"</span><span class="token punctuation">:</span> <span class="token string">"example_assistant"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"Let's talk later when we're less busy about how to do better."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token string">"role"</span><span class="token punctuation">:</span> <span class="token string">"user"</span><span class="token punctuation">,</span> <span class="token string">"content"</span><span class="token punctuation">:</span> <span class="token string">"This late pivot means we don't have time to boil the ocean for the client deliverable."</span><span class="token punctuation">}</span><span class="token punctuation">,</span> <span class="token punctuation">]</span>model <span class="token operator">=</span> <span class="token string">"gpt-3.5-turbo-0301"</span><span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f"</span><span class="token interpolation"><span class="token punctuation">{<!-- --></span>num_tokens_from_messages<span class="token punctuation">(</span>messages<span class="token punctuation">,</span> model<span class="token punctuation">)</span><span class="token punctuation">}</span></span><span class="token string"> prompt tokens counted."</span></span><span class="token punctuation">)</span> <span class="token comment"># Should show ~126 total_tokens</span> </code></pre> <p>To confirm the number generated by our function above is the same as what the API returns, create a new Chat Completion:<br> 要确认我们上面的函数生成的数字与API返回的数字相同,请创建一个新的聊天完成:</p> <pre><code class="prism language-python"><span class="token comment"># example token count from the OpenAI API</span> <span class="token keyword">import</span> openairesponse <span class="token operator">=</span> openai<span class="token punctuation">.</span>ChatCompletion<span class="token punctuation">.</span>create<span class="token punctuation">(</span>model<span class="token operator">=</span>model<span class="token punctuation">,</span>messages<span class="token operator">=</span>messages<span class="token punctuation">,</span>temperature<span class="token operator">=</span><span class="token number">0</span><span class="token punctuation">,</span> <span class="token punctuation">)</span><span class="token keyword">print</span><span class="token punctuation">(</span><span class="token string-interpolation"><span class="token string">f'</span><span class="token interpolation"><span class="token punctuation">{<!-- --></span>response<span class="token punctuation">[</span><span class="token string">"usage"</span><span class="token punctuation">]</span><span class="token punctuation">[</span><span class="token string">"prompt_tokens"</span><span class="token punctuation">]</span><span class="token punctuation">}</span></span><span class="token string"> prompt tokens used.'</span></span><span class="token punctuation">)</span> </code></pre> <p>To see how many tokens are in a text string without making an API call, use OpenAI’s tiktoken Python library. Example code can be found in the OpenAI Cookbook’s guide on how to count tokens with tiktoken.<br> 要在不进行API调用的情况下查看文本字符串中有多少标记,请使用OpenAI的tiktoken Python库。示例代码可以在OpenAI Cookbook关于如何使用tiktoken进行标记计数的指南中找到。</p> <p>Each message passed to the API consumes the number of tokens in the content, role, and other fields, plus a few extra for behind-the-scenes formatting. This may change slightly in the future.<br> 传递给API的每条消息都会消耗内容、角色和其他字段中的标记数量,外加一些用于幕后格式化的额外标记。这在未来可能会略有变化。</p> <p>If a conversation has too many tokens to fit within a model’s maximum limit (e.g., more than 4096 tokens for <code>gpt-3.5-turbo</code>), you will have to truncate, omit, or otherwise shrink your text until it fits. Beware that if a message is removed from the messages input, the model will lose all knowledge of it.<br> 如果对话具有太多标记而不能适应模型的最大限制(例如,超过4096个标记(对于 <code>gpt-3.5-turbo</code> ),您将不得不截断、省略或以其他方式缩小文本,直到它适合为止。请注意,如果从消息输入中删除了一条消息,则模型将丢失所有关于它的知识。</p> <p>Note too that very long conversations are more likely to receive incomplete replies. For example, a <code>gpt-3.5-turbo</code>conversation that is 4090 tokens long will have its reply cut off after just 6 tokens.<br> 也要注意,很长的对话更有可能收到不完整的回复。例如,长度为4090个标记的 <code>gpt-3.5-turbo</code>会话将在仅6个标记之后切断其回复。</p> <h1> <a id="Instructing_chat_models__217"></a>Instructing chat models 指导聊天模型</h1> <p>Best practices for instructing models may change from model version to version. The advice that follows applies to <code>gpt-3.5-turbo-0301</code> and may not apply to future models.<br> 指导模型的最佳实践可能因模型版本而异。以下建议适用于<code>gpt-3. 5-turbo-0301</code>,可能不适用于未来的模型。</p> <p>Many conversations begin with a system message to gently instruct the assistant. For example, here is one of the system messages used for ChatGPT:<br> 许多对话以系统消息开始,以温和地指示助理。例如,以下是用于ChatGPT的系统消息之一:</p> <blockquote> <p>You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. Knowledge cutoff: {knowledge_cutoff} Current date: {current_date}<br> 你是ChatGPT,一个由OpenAI训练的大型语言模型。尽可能简明扼要地回答。知识截止:{knowledge_cutoff} 当前日期: {current_date}</p> </blockquote> <p>In general, <code>gpt-3.5-turbo-0301</code> does not pay strong attention to the system message, and therefore important instructions are often better placed in a user message.<br> 一般来说, <code>gpt-3.5-turbo-0301</code> 不太关注系统消息,因此重要的指令通常最好放在用户消息中。</p> <p>If the model isn’t generating the output you want, feel free to iterate and experiment with potential improvements. You can try approaches like:<br> 如果模型没有生成您想要的输出,请随意迭代并尝试潜在的改进。您可以尝试以下方法:</p> <ul> <li>Make your instruction more explicit<br> 让你的指示更明确</li> <li>Specify the format you want the answer in<br> 指定您希望答案采用的格式</li> <li>Ask the model to think step by step or debate pros and cons before settling on an answer<br> 让模型一步一步地思考,或者在确定答案之前讨论利弊</li> </ul> <p>For more prompt engineering ideas, read the OpenAI Cookbook guide on techniques to improve reliability.<br> 有关更多快速工程想法,请阅读OpenAI Cookbook关于提高可靠性的技术指南。</p> <p>Beyond the system message, the <code>temperature</code> and <code>max tokens</code> are two of many options developers have to influence the output of the chat models. For <code>temperature</code>, higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. In the case of <code>max tokens</code>, if you want to limit a response to a certain length, max tokens can be set to an arbitrary number. This may cause issues for example if you set the max tokens value to 5 since the output will be cut-off and the result will not make sense to users.<br> 除了系统消息之外, <code>temperature</code> 和<code>max tokens</code>是开发人员必须影响聊天模型输出的许多选项中的两个。对于<code>temperature</code>,较高的值(如0.8)将使输出更加随机,而较低的值(如0.2)将使其更加集中和确定。在<code>max tokens</code>的情况下,如果您希望将响应限制为特定长度,则可以将<code>max tokens</code>设置为任意数字。这可能会导致问题,例如,如果您将最大标记值设置为5,因为输出将被切断,结果对用户没有意义。</p> <h1> <a id="Chat_vs_Completions__246"></a>Chat vs Completions 聊天与完成</h1> <p>Because <code>gpt-3.5-turbo</code> performs at a similar capability to text-davinci-003 but at 10% the price per token, we recommend <code>gpt-3.5-turbo</code> for most use cases.<br> 由于 <code>gpt-3.5-turbo</code> 的性能与 <code>text-davinci-003</code> 相似,但每个标记的价格为10%,因此我们建议在大多数用例中使用 <code>gpt-3.5-turbo</code> 。</p> <p>For many developers, the transition is as simple as rewriting and retesting a prompt.<br> 对于许多开发人员来说,转换就像重写和重新测试提示一样简单。</p> <p>For example, if you translated English to French with the following completions prompt:<br> 例如,如果您将英语翻译为法语,并出现以下完成提示:</p> <pre><code class="prism language-yaml"><span class="token key atrule">Translate the following English text to French</span><span class="token punctuation">:</span> <span class="token string">"{text}"</span> </code></pre> <p>An equivalent chat conversation could look like:<br> 类似的聊天对话可能如下所示:</p> <pre><code class="prism language-yaml"><span class="token punctuation">[</span><span class="token punctuation">{<!-- --></span><span class="token key atrule">"role"</span><span class="token punctuation">:</span> <span class="token string">"system"</span><span class="token punctuation">,</span> <span class="token key atrule">"content"</span><span class="token punctuation">:</span> <span class="token string">"You are a helpful assistant that translates English to French."</span><span class="token punctuation">}</span><span class="token punctuation">,</span><span class="token punctuation">{<!-- --></span><span class="token key atrule">"role"</span><span class="token punctuation">:</span> <span class="token string">"user"</span><span class="token punctuation">,</span> <span class="token key atrule">"content"</span><span class="token punctuation">:</span> <span class="token string">'Translate the following English text to French: "{text}"'</span><span class="token punctuation">}</span> <span class="token punctuation">]</span> </code></pre> <p>Or even just the user message:<br> 或者只是用户消息:</p> <pre><code class="prism language-yaml"><span class="token punctuation">[</span><span class="token punctuation">{<!-- --></span><span class="token key atrule">"role"</span><span class="token punctuation">:</span> <span class="token string">"user"</span><span class="token punctuation">,</span> <span class="token key atrule">"content"</span><span class="token punctuation">:</span> <span class="token string">'Translate the following English text to French: "{text}"'</span><span class="token punctuation">}</span> <span class="token punctuation">]</span> </code></pre> <h1> <a id="FAQ__277"></a>FAQ 问与答</h1> <p><strong>Is fine-tuning available for <code>gpt-3.5-turbo</code>?<br> <code>gpt-3.5-turbo</code> 是否可进行微调?</strong><br> No. As of Mar 1, 2023, you can only fine-tune base GPT-3 models. See the fine-tuning guide for more details on how to use fine-tuned models.<br> 不可以。自2023年3月1日起,您只能微调基本GPT-3模型。有关如何使用微调模型的更多详细信息,请参阅微调指南。</p> <p><strong>Do you store the data that is passed into the API?<br> 您是否存储传递到API中的数据?</strong><br> As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy.<br> 自2023年3月1日起,我们将保留您的API数据30天,但不再使用您通过API发送的数据来改进我们的模型。在我们的数据使用政策中了解更多信息。</p> <p><strong>Adding a moderation layer 添加审核层</strong><br> If you want to add a moderation layer to the outputs of the Chat API, you can follow our moderation guide to prevent content that violates OpenAI’s usage policies from being shown.<br> 如果您想在Chat API的输出中添加审核层,您可以遵循我们的审核指南,以防止显示违反OpenAI使用策略的内容。</p> <h1> <a id="_292"></a>其它资料下载</h1> <p>如果大家想继续了解人工智能相关学习路线和知识体系,欢迎大家翻阅我的另外一篇博客《重磅 | 完备的人工智能AI 学习——基础知识学习路线,所有资料免关注免套路直接网盘下载》<br> 这篇博客参考了Github知名开源平台,AI技术平台以及相关领域专家:Datawhale,ApacheCN,AI有道和黄海广博士等约有近100G相关资料,希望能帮助到所有小伙伴们。</p> </div> <link href="https://csdnimg.cn/release/blogv2/dist/mdeditor/css/editerView/markdown_views-0407448025.css" rel="stylesheet"> <link href="https://csdnimg.cn/release/blogv2/dist/mdeditor/css/style-c216769e99.css" rel="stylesheet"> </div> <div id="treeSkill"></div> </article>
OpenAI-ChatGPT最新官方接口《聊天交互多轮对话》全网最详细中英文实用指南和教程,助你零基础快速轻松掌握全新技术(二)(附源码)
作者
sockstack
许可协议
CC BY 4.0
发布于
2023-11-07
修改于
2024-11-03
上一篇:软件:常用 Linux 软件汇总,值得收藏
下一篇:OpenAI-ChatGPT最新官方接口《嵌入向量式文本转换》全网最详细中英文实用指南和教程,助你零基础快速轻松掌握全新技术(五)(附源码)