This repository is for a Raku (data) package that facilitates the creation, storage, retrieval, and curation of Large Language Model (LLM) prompts.

Here is an example of using the prompt Domain Specific Language (DSL) in a Jupyter chatbook, [AA2, AAP2]:
From the Zef ecosystem:
zef install LLM::Prompts
From GitHub:
zef install https://github.com/antononcube/Raku-LLM-Prompts.git
Load the packages "LLM::Prompts", [AAP1], and "LLM::Functions", [AAP2]:

use LLM::Prompts;
use LLM::Functions;
# (Any)
Show the record of the prompt named "FTFY":

.say for |llm-prompt-data<FTFY>;
# NamedArguments => []
# Description => Use Fixed That For You to quickly correct spelling and grammar mistakes
# Categories => (Function Prompts)
# PositionalArguments => {$a => }
# PromptText => -> $a='' {"Find and correct grammar and spelling mistakes in the following text.
# Response with the corrected text and nothing else.
# Provide no context for the corrections, only correct the text.
# $a"}
# Topics => (General Text Manipulation)
# URL => https://resources.wolframcloud.com/PromptRepository/resources/FTFY
# Keywords => [Spell check Grammar Check Text Assistance]
# Name => FTFY
# ContributedBy => Wolfram Staff
# Arity => 1
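Since the record above shows Arity => 1 and a PromptText block taking a single argument, the value returned by llm-prompt for this prompt can itself be applied to a piece of text to get the filled-in prompt. A minimal sketch, assuming llm-prompt returns the callable PromptText value shown above (the sample sentence is only for illustration):

my &ftfy = llm-prompt('FTFY');            # callable prompt, per Arity => 1 in the record above
say &ftfy('Where does he works now?');    # prints the FTFY instructions with the sentence appended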
Here is an example of retrieving prompt data with a regex applied over the prompt names:

.say for llm-prompt-data(/Sc/)
# ScientificDejargonize => Translate scientific jargon to plain language
# ScriptToNarrative => Generate narrative text from a formatted screenplay or stage play
# ScientificJargonized => Give output written in scientific jargon
# ScienceEnthusiast => A smarter today for a brighter tomorrow
# ScientificJargonize => Add scientific jargon to plain text
# NarrativeToScript => Rewrite a block of prose as a screenplay or stage play
More prompt retrieval examples are given in the section "Prompt data" below.

Make an LLM function from the prompt named "FTFY":

my &f = llm-function(llm-prompt('FTFY'));
# -> **@args, *%args { #`(Block|5411904228544) ... }
Use the LLM function to correct the grammar of a sentence:

&f('Where does he works now?')
# Where does he work now?
Generate Raku code using the prompt "CodeWriter":

llm-synthesize([llm-prompt('CodeWriter'), "Simulate a random walk."])

RandomWalk[n_] := Accumulate[RandomChoice[{-1, 1}, n]]
ListLinePlot[RandomWalk[1000]]
Prompt expansion using the chatbook prompt DSL described in [SW1] can be done with the function llm-prompt-expand:

llm-prompt-expand('What is an internal combustion engine? #ELI5')
# What is an internal combustion engine? Answer questions as if the listener is a five year old child.
Here we get the actual LLM answer:

use Text::Utils :ALL;

'What is an internal combustion engine? #ELI5'
        ==> llm-prompt-expand()
        ==> llm-synthesize()
        ==> wrap-paragraph()
        ==> join("\n")
# An internal combustion engine is like a big machine that uses tiny explosions
# inside to make things go vroom vroom, like in cars and trucks!
Here is another example using a persona and two modifiers:

my $prmt = llm-prompt-expand("@SouthernBelleSpeak What is light travel distance to Mars? #ELI5 #Moodified|sad")
# You are Miss Anne.
# You speak only using Southern Belle terminology and slang.
# Your personality is elegant and refined.
# Only return responses as if you were a Southern Belle.
# Never break the Southern Belle character.
# You speak with a Southern drawl.
# What is light travel distance to Mars? Answer questions as if the listener is a five year old child.
# Modify your response to convey a sad mood.
# Use language that conveys that emotion clearly.
# Do answer the question clearly and truthfully.
# Do not use language that is outside of the specified mood.
# Do not use racist, homophobic, sexist, or ableist language.
Here we get the actual LLM answer:

$prmt
        ==> llm-prompt-expand()
        ==> llm-synthesize()
        ==> wrap-paragraph()
        ==> join("\n")
# Oh, bless your heart, darlin'. The distance from Earth to Mars can vary
# depending on their positions in orbit, but on average it's about 225 million
# kilometers. Isn't that just plum fascinating? Oh, sweet child, the distance to
# Mars weighs heavy on my heart. It's a long journey, full of loneliness and
# longing. But we must endure, for the sake of discovery and wonder.
A more formal description of the Domain Specific Language (DSL) for specifying prompts has the following elements:

Prompt personas can be "addressed" with "@". For example:
@Yoda Life can be easy, but some people instist for it to be difficult.
One or several modifier prompts can be specified at the end of the prompt spec. For example:
Summer is over, school is coming soon. #HaikuStyled
Summer is over, school is coming soon. #HaikuStyled #Translated|Russian
Functions can be specified to be applied "cell-wide" with "!", placing the prompt spec at the start of the text to be expanded. For example:
!Translated|Portuguese Summer is over, school is coming soon
Functions can be specified to be applied to "previous" messages with "!", placing just the prompt followed by one of the pointers "^" or "^^". The former means "the last message", the latter means "all messages".

The messages can be provided with the option argument :@messages of llm-prompt-expand.

For example:
!ShortLineIt^
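As a sketch of how such specs can be expanded programmatically: the first call below expands a cell-wide function spec from the example above, and the second expands a previous-message spec via the :@messages option argument just mentioned (the message content, and its plain-string format, are assumptions made for illustration; the expanded texts are omitted):

llm-prompt-expand('!Translated|Portuguese Summer is over, school is coming soon');

my @messages = ['A very long line of text that we would like to have split into shorter lines.'];
llm-prompt-expand('!ShortLineIt^', :@messages);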
Here is a table of prompt expansion specs (more or less the same as the one given in [SW1]):

| Spec | Interpretation |
|---|---|
| @name | Direct chat to a persona |
| #name | Use modifier prompts |
| !name | Use function prompt with the input of the current cell |
| !name> | «same as above» |
| &name> | «same as above» |
| !name^ | Use function prompt with the previous chat message |
| !name^^ | Use function prompt with all previous chat messages |
| !name│param... | Include the parameters of the prompt |
Remark: The function prompts can have both sigils "!" and "&".

Remark: Prompt expansion makes the usage of LLM chatbooks much easier. See "Jupyter::Chatbook", [AAP3].
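Based on the remark about sigils, a function prompt spec should be expandable with either "!" or "&"; a minimal sketch under that assumption (expansion results not shown):

llm-prompt-expand('!ShortLineIt A very long line of text that should be broken up.');
llm-prompt-expand('&ShortLineIt A very long line of text that should be broken up.');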
Here is how the prompt data can be obtained:

llm-prompt-data.elems
# 222
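The result behaves like a hash keyed by prompt name, as the <FTFY> lookup above suggests, so a few names can be listed with the usual hash methods (a minimal sketch):

.say for llm-prompt-data.keys.sort.head(5);   # first few prompt names, alphabetically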
Here is an example of retrieving prompt data with a regex applied over the prompt names:

.say for llm-prompt-data(/Em/, fields => <Description Categories>)
# EmailWriter => (Generate an email based on a given topic (Personas))
# EmojiTranslate => (Translate text into an emoji representation (Function Prompts))
# EmojiTranslated => (Get a response translated to emoji (Modifier Prompts))
# Emojified => (Provide responses that include emojis within the text (Modifier Prompts))
# Emojify => (Replace key words in text with emojis (Function Prompts))
In many cases it is better to have the prompt data, or any data, in long format. Prompt data in long format can be obtained with the function llm-prompt-dataset:

use Data::Reshapers;
use Data::Summarizers;

llm-prompt-dataset.pick(6)
        ==> to-pretty-table(align => 'l', field-names => <Name Description Variable Value>)
# +-------------------+-----------------------------------------------------------------------------+----------+------------------+
# | Name | Description | Variable | Value |
# +-------------------+-----------------------------------------------------------------------------+----------+------------------+
# | ShortLineIt | Format text to have shorter lines | Keywords | Automatic breaks |
# | Rick | A chatbot that will never let you down | Topics | Chats |
# | HarlequinWriter | A sensual AI for the romantics | Keywords | Romantic |
# | Informal | Write an informal invitation to an event | Keywords | Unceremoniously |
# | TravelAdvisor | Navigate your journey effortlessly with Travel Advisor, your digital companion for personalized travel planning and booking | Keywords | Vacation |
# | NarrativeToScript | Rewrite a block of prose as a screenplay or stage play | Topics | Text Generation |
# +-------------------+-----------------------------------------------------------------------------+----------+------------------+
Here is a breakdown of the prompt categories:

select-columns(llm-prompt-dataset, <Variable Value>).grep({ $_<Variable> eq 'Categories' }) ==> records-summary
#ERROR: Do not know how to summarize the argument.
# +-------------------+-------+
# | Variable | Value |
# +-------------------+-------+
# | Categories => 225 | |
# +-------------------+-------+
Here all the modifier prompts are obtained in compact format:

llm-prompt-dataset(:modifiers):compact ==> to-pretty-table(field-names => <Name Description Categories>, align => 'l')
# +-----------------------+------------------------------------------------------------------------------+-----------------------------------+
# | Name | Description | Categories |
# +-----------------------+------------------------------------------------------------------------------+-----------------------------------+
# | AbstractStyled | Get responses in the style of an academic abstract | Modifier Prompts |
# | AlwaysAQuestion | Modify output to always be inquisitive | Modifier Prompts |
# | AlwaysARiddle | Riddle me this, riddle me that | Modifier Prompts |
# | AphorismStyled | Write the response as an aphorism | Modifier Prompts |
# | BadGrammar | Provide answers using incorrect grammar | Modifier Prompts |
# | CompleteSentence | Answer a question in one complete sentence | Modifier Prompts |
# | ComplexWordsPreferred | Modify text to use more complex words | Modifier Prompts |
# | DatasetForm | Convert text to a wolfram language Dataset | Modifier Prompts |
# | Disclaimered | Modify responses in the form of a disclaimer | Modifier Prompts |
# | ELI5 | Explain like I'm five | Modifier Prompts Function Prompts |
# | ElevatorPitch | Write the response as an elevator pitch | Modifier Prompts |
# | EmojiTranslated | Get a response translated to emoji | Modifier Prompts |
# | Emojified | Provide responses that include emojis within the text | Modifier Prompts |
# | FictionQuestioned | Generate questions for a fictional paragraph | Modifier Prompts |
# | Formal | Rewrite text to sound more formal | Modifier Prompts |
# | GradeLevelSuited | Respond with answers that the specified US grade level can understand | Modifier Prompts |
# | HaikuStyled | Change responses to haiku form | Modifier Prompts |
# | Informal | Write an informal invitation to an event | Modifier Prompts |
# | JSON | Respond with JavaScript Object Notation format | Modifier Prompts |
# | KnowAboutMe | Give the LLM an FYI | Modifier Prompts |
# | LegalJargonized | Provide answers using legal jargon | Modifier Prompts |
# | LimerickStyled | Receive answers in the form of a limerick | Modifier Prompts |
# | MarketingJargonized | Transforms replies to marketing | Modifier Prompts |
# | MedicalJargonized | Transform replies into medial jargon | Modifier Prompts |
# | Moodified | Modify an answer to express a certain mood | Modifier Prompts |
# | NothingElse | Give output in specified form, no other additions | Modifier Prompts |
# | NumericOnly | Modify results to give numerical responses only | Modifier Prompts |
# | OppositeDay | It's not opposite day today, so everything will work just the way you expect | Modifier Prompts |
# | Pitchified | Give output as a sales pitch | Modifier Prompts |
# | PoemStyled | Receive answers as poetry | Modifier Prompts |
# | SEOOptimized | Modify output to only give highly searched terms | Modifier Prompts |
# | ScientificJargonized | Give output written in scientific jargon | Modifier Prompts |
# | Setting | Modify an answer to establish a sense of place | Modifier Prompts |
# | ShortLineIt | Format text to have shorter lines | Modifier Prompts Function Prompts |
# | SimpleWordsPreferred | Provide responses with simple words | Modifier Prompts |
# | SlideDeck | Get responses as a slide presentation | Modifier Prompts |
# | TSV | Convert text to a tab-separated-value formatted table | Modifier Prompts |
# | TargetAudience | Word your response for a target audience | Modifier Prompts |
# | Translated | Write the response in a specified language | Modifier Prompts |
# | Unhedged | Rewrite a sentence to be more assertive | Modifier Prompts |
# | YesNo | Responds with Yes or No exclusively | Modifier Prompts |
# +-----------------------+------------------------------------------------------------------------------+-----------------------------------+
Remark: The adverbs :functions, :modifiers, and :personas mean that only the prompts with the corresponding categories are returned.

Remark: The adverbs :compact, :functions, :modifiers, and :personas have the respective shortcuts :c, :f, :m, and :p.
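For example, per the remarks above, the sizes of the function- and persona-prompt subsets of the long-format dataset can be obtained with the shortcut adverbs (a minimal sketch, assuming the shortcuts behave exactly like the full adverbs):

say llm-prompt-dataset(:f).elems;   # rows for function prompts, same as :functions
say llm-prompt-dataset(:p).elems;   # rows for persona prompts, same as :personas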
The original (for this package) collection of prompts was a (not small) sample of the prompt texts hosted at the Wolfram Prompt Repository (WPR), [SW2]. All prompts from WPR in the package have the corresponding contributors and URLs to the corresponding WPR pages.

Example prompts from Google/Bard/PaLM and OpenAI/ChatGPT were added using the format of WPR.

Having the ability to programmatically add new prompts is essential. (Not implemented yet -- see the TODO section below.)

Initially, the prompt DSL grammar and corresponding expansion actions were implemented. Most likely, though, having a grammar is not needed, and it is better to use "prompt expansion" (via regex-based substitutions).
Prompts can be "just expanded" using the sub llm-prompt-expand.
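For example, expanding one of the modifier specs shown in the DSL description above returns the expanded prompt text without making any LLM calls (a minimal sketch; the expanded text is omitted here):

llm-prompt-expand('Summer is over, school is coming soon. #HaikuStyled')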
Here is a flowchart that summarizes prompt parsing and expansion in the chat cells of Jupyter chatbooks, [AAP3]:
flowchart LR
    OpenAI{{OpenAI}}
    PaLM{{PaLM}}
    LLMFunc[[LLM::Functions]]
    LLMProm[[LLM::Prompts]]
    CODB[(Chat objects)]
    PDB[(Prompts)]
    CCell[/Chat cell/]
    CRCell[/Chat result cell/]
    CIDQ{Chat ID<br>specified?}
    CIDEQ{Chat ID<br>exists in DB?}
    RECO[Retrieve existing<br>chat object]
    COEval[Message<br>evaluation]
    PromParse[Prompt<br>DSL spec parsing]
    KPFQ{Known<br>prompts<br>found?}
    PromExp[Prompt<br>expansion]
    CNCO[Create new<br>chat object]
    CIDNone["Assume chat ID<br>is 'NONE'"]
    subgraph Chatbook frontend
        CCell
        CRCell
    end
    subgraph Chatbook backend
        CIDQ
        CIDEQ
        CIDNone
        RECO
        CNCO
        CODB
    end
    subgraph Prompt processing
        PDB
        LLMProm
        PromParse
        KPFQ
        PromExp
    end
    subgraph LLM interaction
        COEval
        LLMFunc
        PaLM
        OpenAI
    end
    CCell --> CIDQ
    CIDQ --> |yes| CIDEQ
    CIDEQ --> |yes| RECO
    RECO --> PromParse
    COEval --> CRCell
    CIDEQ -.- CODB
    CIDEQ --> |no| CNCO
    LLMFunc -.- CNCO -.- CODB
    CNCO --> PromParse --> KPFQ
    KPFQ --> |yes| PromExp
    KPFQ --> |no| COEval
    PromParse -.- LLMProm
    PromExp -.- LLMProm
    PromExp --> COEval
    LLMProm -.- PDB
    CIDQ --> |no| CIDNone
    CIDNone --> CIDEQ
    COEval -.- LLMFunc
    LLMFunc <-.-> OpenAI
    LLMFunc <-.-> PaLM
Here is an example of prompt expansion in a generic LLM chat cell and a chat meta cell showing the content of the corresponding chat object:

The package provides a Command Line Interface (CLI) script:

llm-prompt --help
# Usage:
# llm-prompt <name> [<args> ...] -- Retrieves prompts text for given names or regexes.
#
# <name> Name of a prompt or a regex. (E.g. 'rx/ ^ Em .* /').
# [<args> ...] Arguments for the prompt (if applicable).
Here is an example with a prompt name:

llm-prompt NothingElse RAKU
# ONLY give output in the form of a RAKU.
# Never explain, suggest, or converse. Only return output in the specified form.
# If code is requested, give only code, no explanations or accompanying text.
# If a table is requested, give only a table, no other explanations or accompanying text.
# Do not describe your output.
# Do not explain your output.
# Do not suggest anything.
# Do not respond with anything other than the singularly demanded output.
# Do not apologize if you are incorrect, simply try again, never apologize or add text.
# Do not add anything to the output, give only the output as requested. Your outputs can take any form as long as requested.
Here is an example with a regex:

llm-prompt 'rx/ ^ N .* /'
# NarrativeToResume => Rewrite narrative text as a resume
# NarrativeToScript => Rewrite a block of prose as a screenplay or stage play
# NerdSpeak => All the nerd, minus the pocket protector
# NothingElse => Give output in specified form, no other additions
# NumericOnly => Modify results to give numerical responses only
# NutritionistBot => Personal nutrition advisor AI
TODO

- DONE Using XDG data directories.
- DONE Prompt stencil
- DONE User prompt ingestion and addition to the main prompts
- TODO By modifying existing prompts.
- TODO Automatic prompt template fill-in.
- TODO Guided template fill-in.
  - DSL based
  - TODO LLM based
- DONE Prompt retrieval adverbs
- DONE Prompt DSL grammar and actions
- DONE Prompt spec expansion
- DONE CLI for prompt retrieval
- MAYBE CLI for prompt dataset
- TODO Addition of user/local prompts
- DONE Add more prompts
  - DONE Google's Bard example prompts
  - CANCELED OpenAI's ChatGPT example prompts
  - ProfSynapse prompt
  - Google OR-Tools prompt

TODO

- DONE Chatbook usage
- Typical usage
- DONE Querying (ingested) prompts
- DONE Prompt DSL
- Daily joke via CLI
- TODO Prompt format
- TODO On hijacking prompts
- TODO Diagrams
[AA1] Anton Antonov, "Workflows with LLM functions", (2023), RakuForPrediction at WordPress.

[SW1] Stephen Wolfram, "The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language", (2023), Stephen Wolfram Writings.

[SW2] Stephen Wolfram, "Prompts for Work & Play: Launching the Wolfram Prompt Repository", (2023), Stephen Wolfram Writings.

[AAP1] Anton Antonov, LLM::Prompts Raku package, (2023), GitHub/antononcube.

[AAP2] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.

[AAP3] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.

[WRIr1] Wolfram Research, Inc., Wolfram Prompt Repository.