ChatGPT Prompt Engineering DeepLearningAI
1.0.0
This free crash course on ChatGPT Prompt Engineering is offered by DeepLearning.AI and taught by Andrew Ng (DeepLearning.AI) and Isa Fulford (OpenAI).
All of the notebook examples are available in the labs folder.
Load the API key and relevant Python libraries
import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())
openai.api_key = os.getenv('OPENAI_API_KEY')
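The load_dotenv(find_dotenv()) call reads the key from a local .env file rather than hard-coding it in the notebook. A minimal .env would contain a single line like the following (the value shown is a placeholder, not a real key):

OPENAI_API_KEY=sk-your-key-here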
Helper function
It uses OpenAI's gpt-3.5-turbo model and the chat completions endpoint.
def get_completion ( prompt , model = "gpt-3.5-turbo" ):
messages = [{ "role" : "user" , "content" : prompt }]
response = openai . ChatCompletion . create (
model = model ,
messages = messages ,
temperature = 0 , # this is the degree of randomness of the model's output
)
return response . choices [ 0 ]. message [ "content" ]
text = f"""
You should express what you want a model to do by
providing instructions that are as clear and
specific as you can possibly make them.
This will guide the model towards the desired output,
and reduce the chances of receiving irrelevant
or incorrect responses. Don't confuse writing a
clear prompt with writing a short prompt.
In many cases, longer prompts provide more clarity
and context for the model, which can lead to
more detailed and relevant outputs.
"""
prompt = f"""
Summarize the text delimited by triple backticks
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)
Clear and specific instructions should be provided to guide a model towards the desired output, and longer prompts can provide more clarity and context for the model, leading to more detailed and relevant outputs.
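The same tactic generalizes to any input: as a small illustrative sketch (the summarize helper below is not part of the course material), the delimiter-based prompt can be wrapped in a function so every passage is handled the same way:

def summarize(passage, model="gpt-3.5-turbo"):
    # Triple backticks delimit the passage so the model treats it as
    # text to summarize rather than as additional instructions.
    prompt = f"""
Summarize the text delimited by triple backticks into a single sentence.
```{passage}```
"""
    return get_completion(prompt, model=model)

print(summarize(text))  # prints a one-sentence summary of the passage above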
Main course:
Other short free courses.