ChatGPT Prompt Engineering DeepLearningAI
1.0.0
This free crash course on ChatGPT prompt engineering is offered by DeepLearning.AI and taught by Andrew Ng (DeepLearning.AI) and Isa Fulford (OpenAI).
All notebook examples are available in the labs folder.
Load the API key and the relevant Python libraries:
import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file into environment variables
openai.api_key = os.getenv('OPENAI_API_KEY')
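The load_dotenv call above expects the key in a local .env file at the project root. A minimal sketch of that file (the value is a placeholder, not a real key):

# .env
OPENAI_API_KEY=sk-your-key-here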
Helper function
Throughout the course we use OpenAI's gpt-3.5-turbo model and the Chat Completions endpoint.
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
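As a quick sanity check (a sketch that assumes the setup above has run and OPENAI_API_KEY is valid; the prompt below is illustrative, not from the course notebook), the helper can be called directly:

# Illustrative call to the helper defined above; requires a valid OPENAI_API_KEY.
print(get_completion("What is the capital of France?"))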
text = f"""
You should express what you want a model to do by
providing instructions that are as clear and
specific as you can possibly make them.
This will guide the model towards the desired output,
and reduce the chances of receiving irrelevant
or incorrect responses. Don't confuse writing a
clear prompt with writing a short prompt.
In many cases, longer prompts provide more clarity
and context for the model, which can lead to
more detailed and relevant outputs.
"""
prompt = f"""
Summarize the text delimited by triple backticks
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)
Clear and specific instructions should be provided to guide a model towards the desired output, and longer prompts can provide more clarity and context for the model, leading to more detailed and relevant outputs.
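To see how a more specific instruction steers the output, a variation of the same prompt can add an explicit length constraint. This is an illustrative sketch (the 20-word limit is an assumption, not part of the original notebook):

# Variation on the prompt above: same delimited text, but with a word limit.
prompt = f"""
Summarize the text delimited by triple backticks \
into a single sentence of at most 20 words.
```{text}```
"""
print(get_completion(prompt))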
Main Course:
Other short free courses are also available at deeplearning.ai.