This repository is for a Raku (data) package that facilitates the creation, storage, retrieval, and curation of Large Language Model (LLM) prompts.
Here is an example of using the prompt Domain Specific Language (DSL) in a Jupyter chatbook, [AA2, AAp2]:
From the Zef ecosystem:
zef install LLM::Prompts
From GitHub:
zef install https://github.com/antononcube/Raku-LLM-Prompts.git
Load the packages "LLM::Prompts", [AAp1], and "LLM::Functions", [AAp2]:
use LLM::Prompts;
use LLM::Functions;
# (Any)
Show the record of the prompt named "FTFY":
.say for |llm-prompt-data<FTFY>;
# NamedArguments => []
# Description => Use Fixed That For You to quickly correct spelling and grammar mistakes
# Categories => (Function Prompts)
# PositionalArguments => {$a => }
# PromptText => -> $a='' {"Find and correct grammar and spelling mistakes in the following text.
# Response with the corrected text and nothing else.
# Provide no context for the corrections, only correct the text.
# $a"}
# Topics => (General Text Manipulation)
# URL => https://resources.wolframcloud.com/PromptRepository/resources/FTFY
# Keywords => [Spell check Grammar Check Text Assistance]
# Name => FTFY
# ContributedBy => Wolfram Staff
# Arity => 1
Here is an example of retrieving prompt data with a regex that is applied over the prompt names:
.say for llm-prompt-data(/Sc/)
# ScientificDejargonize => Translate scientific jargon to plain language
# ScriptToNarrative => Generate narrative text from a formatted screenplay or stage play
# ScientificJargonized => Give output written in scientific jargon
# ScienceEnthusiast => A smarter today for a brighter tomorrow
# ScientificJargonize => Add scientific jargon to plain text
# NarrativeToScript => Rewrite a block of prose as a screenplay or stage play
More prompt retrieval examples are given in the section "Prompt data" below.
Make an LLM function from the prompt named "FTFY":
my &f = llm-function(llm-prompt('FTFY'));
# -> **@args, *%args { #`(Block|5411904228544) ... }
Use the LLM function to correct the grammar of a sentence:
&f('Where does he works now?')
# Where does he work now?
Generate Raku code using the prompt "CodeWriter":
llm-synthesize([llm-prompt('CodeWriter'), "Simulate a random walk."])
RandomWalk[n_] := Accumulate[RandomChoice[{-1, 1}, n]]
ListLinePlot[RandomWalk[1000]]
Prompt expansion using the chatbook prompt spec DSL described in [SW1] can be done with the function llm-prompt-expand:
llm-prompt-expand('What is an internal combustion engine? #ELI5')
# What is an internal combustion engine? Answer questions as if the listener is a five year old child.
Here we get the actual LLM answer:
use Text::Utils :ALL;

'What is an internal combustion engine? #ELI5'
        ==> llm-prompt-expand()
        ==> llm-synthesize()
        ==> wrap-paragraph()
        ==> join("\n")
# An internal combustion engine is like a big machine that uses tiny explosions
# inside to make things go vroom vroom, like in cars and trucks!
Here is another example using a persona and two modifiers:
my $prmt = llm-prompt-expand("@SouthernBelleSpeak What is light travel distance to Mars? #ELI5 #Moodified|sad")
# You are Miss Anne.
# You speak only using Southern Belle terminology and slang.
# Your personality is elegant and refined.
# Only return responses as if you were a Southern Belle.
# Never break the Southern Belle character.
# You speak with a Southern drawl.
# What is light travel distance to Mars? Answer questions as if the listener is a five year old child.
# Modify your response to convey a sad mood.
# Use language that conveys that emotion clearly.
# Do answer the question clearly and truthfully.
# Do not use language that is outside of the specified mood.
# Do not use racist, homophobic, sexist, or ableist language.
Here we get the actual LLM answer:
$prmt
        ==> llm-prompt-expand()
        ==> llm-synthesize()
        ==> wrap-paragraph()
        ==> join("\n")
# Oh, bless your heart, darlin'. The distance from Earth to Mars can vary
# depending on their positions in orbit, but on average it's about 225 million
# kilometers. Isn't that just plum fascinating? Oh, sweet child, the distance to
# Mars weighs heavy on my heart. It's a long journey, full of loneliness and
# longing. But we must endure, for the sake of discovery and wonder.
A more formal description of the Domain Specific Language (DSL) for specifying prompts has the following elements:
Prompt personas can be "addressed" with "@". For example:
@Yoda Life can be easy, but some people instist for it to be difficult.
One or several modifier prompts can be specified at the end of the prompt spec. For example:
Summer is over, school is coming soon. #HaikuStyled
Summer is over, school is coming soon. #HaikuStyled #Translated|Russian
Functions can be specified to be applied "cell-wide" with "!" and placing the prompt spec at the start of the text to be expanded. For example:
!Translated|Portuguese Summer is over, school is coming soon
Functions can be specified to be applied to "previous" messages with "!" and one of the pointers "^" or "^^". The former means "the last message"; the latter means "all messages". The messages can be provided with the option argument :@messages of llm-prompt-expand. For example:
!ShortLineIt^
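For instance, here is a minimal sketch of expanding that spec directly, outside of a chatbook. (The option name :@messages follows the description above; the message text is made up for illustration.)

use LLM::Prompts;

# Hypothetical message list; the pointer "^" makes the function prompt apply to the last message.
my @messages = ['A very long line of text that we would like to have broken into shorter lines.'];

llm-prompt-expand('!ShortLineIt^', :@messages)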
Here is a table of prompt expansion specs (more or less the same as the one in [SW1]):
| Spec | Interpretation |
|------|----------------|
| @name | Direct chat to a persona |
| #name | Use modifier prompts |
| !name | Use function prompt with the input of the current cell |
| !name> | «same as above» |
| &name> | «same as above» |
| !name^ | Use function prompt with the previous chat message |
| !name^^ | Use function prompt with all previous chat messages |
| !name│param... | Include parameters for the prompt |
Remark: The function prompts can have both sigils "!" and "&".
Remark: Prompt expansion makes the usage of LLM chatbooks much easier. See "Jupyter::Chatbook", [AAp3].
Here is how the prompt data can be obtained:
llm-prompt-data.elems
# 222
Here is an example of retrieving prompt data with a regex that is applied over the prompt names:
.say for llm-prompt-data(/Em/, fields => <Description Categories>)
# EmailWriter => (Generate an email based on a given topic (Personas))
# EmojiTranslate => (Translate text into an emoji representation (Function Prompts))
# EmojiTranslated => (Get a response translated to emoji (Modifier Prompts))
# Emojified => (Provide responses that include emojis within the text (Modifier Prompts))
# Emojify => (Replace key words in text with emojis (Function Prompts))
In many cases it is better to have the prompt data -- or any data -- in long format. Prompt data in long format can be obtained with the function llm-prompt-dataset:
use Data::Reshapers;
use Data::Summarizers;

llm-prompt-dataset.pick(6)
==> to-pretty-table(align => 'l', field-names => <Name Description Variable Value>)
# | Name | Description | Variable | Value |
# | ShortLineIt | Format text to have shorter lines | Keywords | Automatic breaks |
# | Rick | A chatbot that will never let you down | Topics | Chats |
# | HarlequinWriter | A sensual AI for the romantics | Keywords | Romantic |
# | Informal | Write an informal invitation to an event | Keywords | Unceremoniously |
# | TravelAdvisor | Navigate your journey effortlessly with Travel Advisor, your digital companion for personalized travel planning and booking | Keywords | Vacation |
# | NarrativeToScript | Rewrite a block of prose as a screenplay or stage play | Topics | Text Generation |
Here is a breakdown of the prompts categories:
select-columns(llm-prompt-dataset, <Variable Value>).grep({ $_<Variable> eq 'Categories' }) ==> records-summary
#ERROR: Do not know how to summarize the argument.
# +-------------------+-------+
# | Variable          | Value |
# +-------------------+-------+
# | Categories => 225 |       |
# +-------------------+-------+
Here all modifier prompts are obtained in compact format:
llm-prompt-dataset():modifiers:compact ==> to-pretty-table(field-names => <Name Description Categories>, align => 'l')
# | Name | Description | Categories |
# | AbstractStyled | Get responses in the style of an academic abstract | Modifier Prompts |
# | AlwaysAQuestion | Modify output to always be inquisitive | Modifier Prompts |
# | AlwaysARiddle | Riddle me this, riddle me that | Modifier Prompts |
# | AphorismStyled | Write the response as an aphorism | Modifier Prompts |
# | BadGrammar | Provide answers using incorrect grammar | Modifier Prompts |
# | CompleteSentence | Answer a question in one complete sentence | Modifier Prompts |
# | ComplexWordsPreferred | Modify text to use more complex words | Modifier Prompts |
# | DatasetForm | Convert text to a wolfram language Dataset | Modifier Prompts |
# | Disclaimered | Modify responses in the form of a disclaimer | Modifier Prompts |
# | ELI5 | Explain like I'm five | Modifier Prompts Function Prompts |
# | ElevatorPitch | Write the response as an elevator pitch | Modifier Prompts |
# | EmojiTranslated | Get a response translated to emoji | Modifier Prompts |
# | Emojified | Provide responses that include emojis within the text | Modifier Prompts |
# | FictionQuestioned | Generate questions for a fictional paragraph | Modifier Prompts |
# | Formal | Rewrite text to sound more formal | Modifier Prompts |
# | GradeLevelSuited | Respond with answers that the specified US grade level can understand | Modifier Prompts |
# | HaikuStyled | Change responses to haiku form | Modifier Prompts |
# | Informal | Write an informal invitation to an event | Modifier Prompts |
# | JSON | Respond with JavaScript Object Notation format | Modifier Prompts |
# | KnowAboutMe | Give the LLM an FYI | Modifier Prompts |
# | LegalJargonized | Provide answers using legal jargon | Modifier Prompts |
# | LimerickStyled | Receive answers in the form of a limerick | Modifier Prompts |
# | MarketingJargonized | Transforms replies to marketing | Modifier Prompts |
# | MedicalJargonized | Transform replies into medial jargon | Modifier Prompts |
# | Moodified | Modify an answer to express a certain mood | Modifier Prompts |
# | NothingElse | Give output in specified form, no other additions | Modifier Prompts |
# | NumericOnly | Modify results to give numerical responses only | Modifier Prompts |
# | OppositeDay | It's not opposite day today, so everything will work just the way you expect | Modifier Prompts |
# | Pitchified | Give output as a sales pitch | Modifier Prompts |
# | PoemStyled | Receive answers as poetry | Modifier Prompts |
# | SEOOptimized | Modify output to only give highly searched terms | Modifier Prompts |
# | ScientificJargonized | Give output written in scientific jargon | Modifier Prompts |
# | Setting | Modify an answer to establish a sense of place | Modifier Prompts |
# | ShortLineIt | Format text to have shorter lines | Modifier Prompts Function Prompts |
# | SimpleWordsPreferred | Provide responses with simple words | Modifier Prompts |
# | SlideDeck | Get responses as a slide presentation | Modifier Prompts |
# | TSV | Convert text to a tab-separated-value formatted table | Modifier Prompts |
# | TargetAudience | Word your response for a target audience | Modifier Prompts |
# | Translated | Write the response in a specified language | Modifier Prompts |
# | Unhedged | Rewrite a sentence to be more assertive | Modifier Prompts |
# | YesNo | Responds with Yes or No exclusively | Modifier Prompts |
Remark: The adverbs :functions, :modifiers, and :personas mean that only the prompts with the corresponding categories will be returned.
Remark: The adverbs :compact, :functions, :modifiers, and :personas have the respective shortcuts :c, :f, :m, and :p.
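For example, here is a sketch that compares the shortcut adverbs with their long forms (assuming, per the remark above, that they select the same records):

use LLM::Prompts;

# Both calls should return the same number of (compact) function-prompt records.
say llm-prompt-dataset(:functions, :compact).elems == llm-prompt-dataset(:f, :c).elems;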
The original (for this package) collection of prompts was a (not-small) sample of the prompt texts hosted at the Wolfram Prompt Repository (WPR), [SW2]. All prompts from WPR in the package have the corresponding contributors and URLs to the corresponding WPR pages.
Example prompts from Google/Bard/PaLM and OpenAI/ChatGPT were added using the WPR format.
The ability to programmatically add new prompts is essential. (Not implemented yet -- see the TODO section below.)
Initially, a prompt DSL grammar and the corresponding expansion actions were implemented. Having a grammar is most likely not needed, though; it is better to use "prompt expansion" via regex-based substitutions.
Prompts can be "just expanded" using the sub llm-prompt-expand.
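For example, here is a sketch of a "pure" expansion, in which no LLM is invoked and only the spec is rewritten (the exact expanded text comes from the package's prompt data):

use LLM::Prompts;

# The modifier spec "#HaikuStyled" is replaced with the corresponding prompt text appended to the sentence.
llm-prompt-expand('Summer is over, school is coming soon. #HaikuStyled')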
Here is a flowchart that summarizes prompt parsing and expansion in the chat cells of Jupyter chatbooks, [AAp3]:
flowchart LR
    OpenAI{{OpenAI}}
    PaLM{{PaLM}}
    LLMFunc[[LLM::Functions]]
    LLMProm[[LLM::Prompts]]
    CODB[(Chat objects)]
    PDB[(Prompts)]
    CCell[/Chat cell/]
    CRCell[/Chat result cell/]
    CIDQ{Chat ID<br>specified?}
    CIDEQ{Chat ID<br>exists in DB?}
    RECO[Retrieve existing<br>chat object]
    COEval[Message<br>evaluation]
    PROMParse[Prompt<br>DSL spec parsing]
    KPFQ{Known<br>prompts<br>found?}
    PromExp[Prompt<br>expansion]
    CNCO[Create new<br>chat object]
    CIDNone["Assume chat ID<br>is 'NONE'"]
    subgraph Chatbook frontend
        CCell
        CRCell
    end
    subgraph Chatbook backend
        CIDQ
        CIDEQ
        CIDNone
        RECO
        CNCO
        CODB
    end
    subgraph Prompt processing
        PDB
        LLMProm
        PROMParse
        KPFQ
        PromExp
    end
    subgraph LLM interaction
        COEval
        LLMFunc
        PaLM
        OpenAI
    end
    CCell --> CIDQ
    CIDQ --> |yes| CIDEQ
    CIDEQ --> |yes| RECO
    RECO --> PROMParse
    COEval --> CRCell
    CIDEQ -.- CODB
    CIDEQ --> |no| CNCO
    LLMFunc -.- CNCO -.- CODB
    CNCO --> PROMParse --> KPFQ
    KPFQ --> |yes| PromExp
    KPFQ --> |no| COEval
    PROMParse -.- LLMProm
    PromExp -.- LLMProm
    PromExp --> COEval
    LLMProm -.- PDB
    CIDQ --> |no| CIDNone
    CIDNone --> CIDEQ
    COEval -.- LLMFunc
    LLMFunc <-.-> OpenAI
    LLMFunc <-.-> PaLM
Here is an example of prompt expansion in a generic LLM chat cell and a chat meta cell showing the content of the corresponding chat object:
The package provides a Command Line Interface (CLI) script:
llm-prompt --help
# Usage:
#   llm-prompt <name> [<args> ...] -- Retrieves prompts text for given names or regexes.
#
#     <name>          Name of a prompt or a regex. (E.g. 'rx/ ^ Em .* /').
#     [<args> ...]    Arguments for the prompt (if applicable).
Here is an example with a prompt name:
llm-prompt NothingElse RAKU
# ONLY give output in the form of a RAKU.
# Never explain, suggest, or converse. Only return output in the specified form.
# If code is requested, give only code, no explanations or accompanying text.
# If a table is requested, give only a table, no other explanations or accompanying text.
# Do not describe your output.
# Do not explain your output.
# Do not suggest anything.
# Do not respond with anything other than the singularly demanded output.
# Do not apologize if you are incorrect, simply try again, never apologize or add text.
# Do not add anything to the output, give only the output as requested. Your outputs can take any form as long as requested.
Here is an example with a regex:
llm-prompt 'rx/ ^ N .* /'
# NarrativeToResume => Rewrite narrative text as a resume
# NarrativeToScript => Rewrite a block of prose as a screenplay or stage play
# NerdSpeak => All the nerd, minus the pocket protector
# NothingElse => Give output in specified form, no other additions
# NumericOnly => Modify results to give numerical responses only
# NutritionistBot => Personal nutrition advisor AI
- TODO Implementation
  - DONE Using XDG data directories.
  - DONE Prompt stencil
  - DONE User prompt ingestion and addition to the main prompts
  - TODO By modifying existing prompts.
  - TODO Automatic prompt template fill-in.
  - TODO Guided template fill-in.
    - TODO DSL based
    - TODO LLM based
  - DONE Prompt retrieval adverbs
  - DONE Prompt DSL grammar and actions
  - DONE Prompt spec expansion
  - DONE CLI for prompt retrieval
  - MAYBE CLI for prompt dataset
  - TODO Addition of user/local prompts
- DONE Add more prompts
  - DONE Google's Bard example prompts
  - CANCELED OpenAI's ChatGPT example prompts
  - TODO ProfSynapse prompt
  - TODO Google OR-Tools prompt
- TODO Documentation
  - DONE Chatbook usage
  - DONE Typical usage
  - DONE Querying (ingested) prompts
  - DONE Prompt DSL
  - DONE Daily joke via CLI
  - TODO Prompt format
  - TODO On hijacking prompts
  - TODO Diagrams
[AA1] Anton Antonov, "Workflows with LLM functions", (2023), RakuForPrediction at WordPress.
[SW1] Stephen Wolfram, "The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language", (2023), Stephen Wolfram Writings.
[SW2] Stephen Wolfram, "Prompts for Work & Play: Launching the Wolfram Prompt Repository", (2023), Stephen Wolfram Writings.
[AAp1] Anton Antonov, LLM::Prompts Raku package, (2023), GitHub/antononcube.
[AAp2] Anton Antonov, LLM::Functions Raku package, (2023), GitHub/antononcube.
[AAp3] Anton Antonov, Jupyter::Chatbook Raku package, (2023), GitHub/antononcube.
[WRIr1] Wolfram Research, Inc., Wolfram Prompt Repository.