Awesome Prompt Engineering 🧙‍♂️
This repository contains a hand-curated collection of prompt engineering resources, focused on Generative Pre-trained Transformer (GPT) models, ChatGPT, PaLM, and more.
Table of contents
- Papers
- Tools and Code
- APIs
- Datasets
- Models
- AI Content Detectors
- Tutorials
- Videos
- Books
- Communities
- How to Contribute
Papers
Prompt engineering techniques:
- A prompt pattern catalog to enhance prompt engineering with ChatGPT [2023] (Arxiv)
- Gradient-based discrete optimization for prompt tuning and discovery [2023] (Arxiv)
- Generating chain-of-thought demonstrations for large language models [2023] (Arxiv)
- Progressive prompts: continual learning for language models [2023] (Arxiv)
- Batch prompting: efficient inference with large language model APIs [2023] (Arxiv)
- Successive prompting for solving complex questions [2022] (Arxiv)
- Structured prompting: scaling in-context learning to 1,000 examples [2022] (Arxiv)
- Large language models are human-level prompt engineers [2022] (Arxiv)
- Ask me anything: a simple strategy for prompting language models [2022] (Arxiv)
- Prompting GPT-3 to be reliable [2022] (Arxiv)
- Decomposed prompting: a modular approach for solving complex tasks [2022] (Arxiv)
- PromptChainer: chaining large language model prompts through visual programming [2022] (Arxiv)
- Investigating prompt engineering in diffusion models [2022] (Arxiv)
- Show your work: scratchpads for intermediate computation with language models [2021] (Arxiv)
- Reframing instructional prompts to GPTk's language [2021] (Arxiv)
- Fantastically ordered prompts and where to find them: overcoming few-shot prompt order sensitivity [2021] (Arxiv)
- The power of scale for parameter-efficient prompt tuning [2021] (Arxiv)
- Prompt programming for large language models: beyond the few-shot paradigm [2021] (Arxiv)
- Prefix-Tuning: optimizing continuous prompts for generation [2021] (Arxiv)
Reasoning and in-context learning:
- Multimodal chain-of-thought reasoning in language models [2023] (Arxiv)
- On second thought, let's not think step by step! Bias and toxicity in zero-shot reasoning [2022] (Arxiv)
- ReAct: synergizing reasoning and acting in language models [2022] (Arxiv)
- Language models are greedy reasoners: a systematic formal analysis of chain-of-thought [2022] (Arxiv)
- On the advance of making language models better reasoners [2022] (Arxiv)
- Large language models are zero-shot reasoners [2022] (Arxiv)
- Reasoning like program executors [2022] (Arxiv)
- Self-consistency improves chain-of-thought reasoning in language models [2022] (Arxiv)
- Rethinking the role of demonstrations: what makes in-context learning work? [2022] (Arxiv)
- Learn to explain: multimodal reasoning via chains of thought for science question answering [2022] (Arxiv)
- Chain-of-thought prompting elicits reasoning in large language models [2021] (Arxiv)
- Generated knowledge prompting for commonsense reasoning [2021] (Arxiv)
- BERTese: learning to speak to BERT [2021] (Acl)
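The chain-of-thought papers above share one basic mechanic: the prompt prepends worked examples whose answers spell out intermediate reasoning, then poses the new question so the model continues in the same style. A minimal sketch in Python; the exemplar text and the `build_cot_prompt` helper are illustrative, not taken from any of the papers:

```python
# Build a chain-of-thought prompt: each exemplar pairs a question with a
# worked solution, nudging the model to emit reasoning before its answer.
EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.",
    ),
]

def build_cot_prompt(question: str) -> str:
    """Concatenate worked exemplars, then the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXEMPLARS]
    # Ending the prompt mid-answer invites step-by-step reasoning.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

print(build_cot_prompt("If there are 3 cars and each car has 4 wheels, "
                       "how many wheels are there?"))
```

The assembled string would then be sent to a completion model; zero-shot variants drop the exemplars and keep only the trailing "Let's think step by step."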
Evaluating and improving language models:
- Large language models can be easily distracted by irrelevant context [2023] (Arxiv)
- Crawling the internal knowledge base of language models [2023] (Arxiv)
- Discovering language model behaviors with model-written evaluations [2022] (Arxiv)
- Calibrate before use: improving few-shot performance of language models [2021] (Arxiv)
Applications of language models:
- Prompting for multimodal hateful meme classification [2023] (Arxiv)
- Prompting language models for social conversation synthesis [2023] (Arxiv)
- Commonsense-aware prompting for controllable empathetic dialogue generation [2023] (Arxiv)
- Program-aided language models [2023] (Arxiv)
- Legal prompt engineering for multilingual legal judgment prediction [2023] (Arxiv)
- Prompt engineering for solving CS1 problems using natural language [2022] (Arxiv)
- Plot writing from pre-trained language models [2022] (Acl)
- AutoPrompt: eliciting knowledge from language models with automatically generated prompts [2020] (Arxiv)
Threat detection and adversarial examples:
- Constitutional AI: harmlessness from AI feedback [2022] (Arxiv)
- Ignore previous prompt: attack techniques for language models [2022] (Arxiv)
- Machine-generated text: a comprehensive survey of threat models and detection methods [2022] (Arxiv)
- Evaluating the susceptibility of pre-trained language models via handcrafted adversarial examples [2022] (Arxiv)
- Toxicity detection with generative prompt-based inference [2022] (Arxiv)
- How can we know what language models know? [2020] (Mit)
Few-shot learning and performance optimization:
- Promptagator: few-shot dense retrieval from 8 examples [2022] (Arxiv)
- The unreliability of explanations in few-shot prompting for textual reasoning [2022] (Arxiv)
- Making pre-trained language models better few-shot learners [2021] (Acl)
- Language models are few-shot learners [2020] (Arxiv)
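The few-shot papers above rely on in-context demonstrations: a handful of labeled examples placed directly in the prompt, with no gradient updates. A sketch of that prompt format in Python; the sentiment task, field names, and example texts are invented for illustration:

```python
def few_shot_prompt(examples, query, input_label="Review",
                    output_label="Sentiment"):
    """Format labeled demonstrations followed by an unanswered query.

    Each demonstration becomes an input/output pair; the query is left
    open so the model completes the final label.
    """
    blocks = [f"{input_label}: {text}\n{output_label}: {label}"
              for text, label in examples]
    blocks.append(f"{input_label}: {query}\n{output_label}:")
    return "\n\n".join(blocks)

demos = [
    ("The film was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
print(few_shot_prompt(demos, "A solid, if unspectacular, sequel."))
```

Papers such as "Fantastically ordered prompts" and "Calibrate before use" (listed above) study how sensitive this format is to demonstration order and label balance, which is worth keeping in mind when choosing `examples`.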
Text-to-image generation:
- A taxonomy of prompt modifiers for text-to-image generation [2022] (Arxiv)
- Design guidelines for prompt engineering text-to-image generative models [2021] (Arxiv)
- High-resolution image synthesis with latent diffusion models [2021] (Arxiv)
- DALL·E: creating images from text [2021] (Arxiv)
Text-to-music/sound generation:
- MusicLM: generating music from text [2023] (Arxiv)
- ERNIE-Music: text-to-waveform music generation with diffusion models [2023] (Arxiv)
- Noise2Music: text-conditioned music generation with diffusion models [2023] (Arxiv)
- AudioLM: a language modeling approach to audio generation [2023] (Arxiv)
- Make-An-Audio: text-to-audio generation with prompt-enhanced diffusion models [2023] (Arxiv)
Text-to-video generation:
- Dreamix: video diffusion models are general video editors [2023] (Arxiv)
- Tune-A-Video: one-shot tuning of image diffusion models for text-to-video generation [2022] (Arxiv)
Overviews:
- Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic? [2022] (Arxiv)
Tools and Code
| Name | Description | Link |
| --- | --- | --- |
| GPT Index | A set of data structures designed to make it easier to use large external knowledge bases with LLMs. | [Github] |
| Promptify | Solve NLP problems with LLMs and easily generate prompts for different NLP tasks for popular generative models such as GPT and PaLM. | [Github] |
| Better Prompt | A test suite for LLM prompts before pushing them to production. | [Github] |
| Interactive Composition Explorer | ICE is a Python library and trace visualizer for language model programs. | [Github] |
| LangChain | Building applications with LLMs through composability. | [Github] |
| OpenPrompt | An open-source framework for prompt learning. | [Github] |
| Prompt Engine | An NPM utility library for creating and maintaining large language model (LLM) prompts. | [Github] |
| Prompts AI | An advanced playground for GPT-3. | [Github] |
| Prompt Source | A toolkit for creating, sharing, and using natural language prompts. | [Github] |
| ThoughtSource | A framework for the science of machine thinking. | [Github] |
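A common thread among tools like OpenPrompt, Prompt Source, and Prompt Engine is prompt templating: a reusable template with named slots that are filled per task instance. A toy version of the idea using only Python's standard library; the template text is invented and does not follow any listed tool's actual format:

```python
from string import Template

# A reusable prompt template with named slots ($entity_type, $text),
# in the spirit of the templating tools listed above.
ner_template = Template(
    "Extract all $entity_type mentions from the text below.\n"
    "Text: $text\n"
    "$entity_type mentions:"
)

# Filling the slots yields a concrete prompt for one task instance.
prompt = ner_template.substitute(
    entity_type="person",
    text="Ada Lovelace corresponded with Charles Babbage.",
)
print(prompt)
```

Keeping templates separate from instance data is what lets these tools version, share, and evaluate prompts across many examples at once.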
APIs
| Name | Description | URL | Paid or open source |
| --- | --- | --- | --- |
| OpenAI | GPT-n for natural language tasks, Codex for translating natural language into code, and DALL·E for creating and editing original images. | [OpenAI] | Paid |
| CohereAI | Cohere provides access to advanced large language models and natural language processing tools through an API. | [CohereAI] | Paid |
| Anthropic | Coming soon | [Anthropic] | Paid |
| FLAN-T5 XXL | Coming soon | [HuggingFace] | Open source |
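The hosted APIs above accept roughly the same request shape: a JSON body carrying a model name, the prompt, and sampling parameters. A sketch of assembling such a body in Python; the field names follow OpenAI's text completion API, the model name is just an example, and nothing is sent over the network here:

```python
import json

def completion_payload(prompt: str, model: str = "text-davinci-003",
                       max_tokens: int = 256,
                       temperature: float = 0.7) -> str:
    """Serialize a completion request body.

    Actually sending this requires an HTTP POST with an API key;
    this sketch only builds the JSON.
    """
    body = {
        "model": model,            # which model to query
        "prompt": prompt,          # the engineered prompt text
        "max_tokens": max_tokens,  # cap on generated tokens
        "temperature": temperature # sampling randomness (0 = greedy)
    }
    return json.dumps(body)

print(completion_payload("Translate 'bonjour' to English:"))
```

Lower temperatures make outputs more deterministic, which matters for the reliability and calibration papers listed earlier.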
Datasets
| Name | Description | URL |
| --- | --- | --- |
| P3 (Public Pool of Prompts) | A collection of prompted English datasets covering a diverse set of NLP tasks. | [HuggingFace] |
| Awesome ChatGPT Prompts | A curated collection of ChatGPT prompts for getting better results from ChatGPT. | [Github] |
Models
| Name | Description | Link |
| --- | --- | --- |
| ChatGPT | ChatGPT | [OpenAI] |
| Codex | The Codex models are descendants of GPT-3 that can understand and generate code. Their training data contains both natural language and billions of lines of public code on GitHub. | [Github] |
| Bloom | BigScience Large Open-science Open-access Multilingual Language Model. | [HuggingFace] |
| Facebook LLM | OPT-175B is a GPT-3-equivalent model trained by Meta. With 175 billion parameters, it is among the largest pretrained language models available. | [Alpa] |
| GPT-NeoX | GPT-NeoX-20B, a 20-billion-parameter autoregressive language model. | [HuggingFace] |
| FLAN-T5 XXL | Flan-T5 is an instruction-tuned model: it exhibits zero-shot behavior when instructions are given as part of the prompt. | [HuggingFace/Google] |
| XLM-RoBERTa-XL | Pre-trained on 2.5 TB of filtered CommonCrawl data covering 100 languages. | [HuggingFace] |
| GPT-J | A causal language model similar to GPT-2, trained on the Pile dataset. | [HuggingFace] |
| Writing Prompts | A large dataset of 300K human-written stories and writing prompts scraped from online forums (Reddit). | [Kaggle] |
| Midjourney Prompts | Text prompts and image URLs scraped from MidJourney's public Discord server. | [HuggingFace] |
| PaLM-rlhf-pytorch | An implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture; essentially ChatGPT, but built on PaLM. | [Github] |
| GPT-Neo | An implementation of model-parallel GPT-2- and GPT-3-like models using the mesh-tensorflow library. | [Github] |
| LaMDA-rlhf-pytorch | An open-source pre-training implementation of Google's LaMDA in PyTorch, with ChatGPT-style RLHF added. | [Github] |
| RLHF | An implementation of reinforcement learning from human feedback. | [Github] |
| GLM-130B | GLM-130B: an open bilingual pre-trained model. | [Github] |
AI Content Detectors
| Name | Description | URL |
| --- | --- | --- |
| AI Text Classifier | A fine-tuned GPT model that predicts how likely a piece of text is to be AI-generated, e.g. by ChatGPT. | [OpenAI] |
| GPT-2 Output Detector | An online demo of a RoBERTa-based detector built on 🤗/Transformers. | [HuggingFace] |
| OpenAI Detector | An AI classifier for flagging AI-written text (a Python wrapper for the OpenAI detector). | [GitHub] |
Tutorials
Introduction to prompt engineering
- Prompt Engineering 101 - Introduction and resources
- Prompt Engineering 101
- Prompt Engineering Guide by SudalaiRajkumar
A beginner's guide to generative language models
- A beginner-friendly guide to generative language models - LaMBDA guide
- Generative AI with Cohere: Part 1 - Model prompting
Best Practices for Prompt Engineering
- Best practices for prompt engineering with the OpenAI API
- How to write good prompts
Complete prompt engineering guides
- A complete introduction to prompt engineering for large language models
- Prompt engineering guide: how to design the best prompts
Technical aspects of prompt engineering
- Three principles for GPT-3 prompt engineering
- A general framework for ChatGPT prompt engineering
- Methods of prompt programming
Prompt engineering resources
- Awesome ChatGPT Prompts
- Best 100+ Stable Diffusion prompts
- The DALL·E Prompt Book
- OpenAI Cookbook
- Prompt engineering by Microsoft
Videos
- Advanced ChatGPT prompt engineering
- ChatGPT: 5 Prompt Engineering Tips for Beginners
- CMU Advanced Natural Language Processing 2022: Prompting
- Prompt Engineering - a new career?
- ChatGPT Guide: Use Better Prompts to Boost Your Results 10x
- Language Models and Prompt Engineering: A Systematic Survey of Prompting Methods in NLP
- Prompt Engineering 101: autocomplete, zero-shot, one-shot, and few-shot prompts
Communities
- OpenAI Discord
- PromptsLab Discord
- Learn Prompting
- r/ChatGPT Discord
- MidJourney Discord
How to Contribute
We welcome contributions to this list! In fact, that's the main reason I created it: to encourage contributions and to help people stay up to date on new and exciting developments in large language models (LLMs) and prompt engineering.
Before contributing, please take a moment to review our contribution guidelines. They help ensure that your contribution is consistent with our goals and meets our standards for quality and relevance. Thank you for your interest in contributing to this project!
Image source: docs.cohere.ai