Implementing the Strategy Pattern using LLMs.

Also, please see https://blog.blackhc.net/2022/12/llm_software_engineering/ for a discussion of why this could become important in the future.
This package adds a decorator, `llm_strategy`, that connects to an LLM (such as OpenAI's GPT-3) and uses it to "implement" abstract methods in interface classes. It does this by forwarding requests to the LLM and converting the responses back into Python data using Python's `@dataclass` machinery.

It uses docstrings, type annotations, and method/function names as prompts for the LLM, and it can automatically convert the results back into Python types (currently only `@dataclass`es are supported). It can also extract a data schema to send to the LLM for interpretation. While the `llm-strategy` package still relies on some Python code, it has the potential to reduce the need for such code by using additional, cheaper LLMs to automate the parsing of structured data.
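As a minimal sketch of what this looks like in practice (the `Translator` class below is hypothetical and only for illustration; it assumes LangChain's `OpenAI` wrapper, as used in the larger example further down):

```python
from dataclasses import dataclass

from langchain.llms import OpenAI

from llm_strategy import llm_strategy


# Hypothetical interface, not part of the package: the abstract method below is
# "implemented" by the LLM based on its name, signature, and docstring.
@llm_strategy(OpenAI(max_tokens=64))
@dataclass
class Translator:
    """Translates English text into another language."""

    target_language: str

    def translate(self, text: str) -> str:
        """Translate `text` from English into `target_language`."""
        raise NotImplementedError()
```

A call such as `Translator("German").translate("Hello!")` would then be forwarded to the LLM instead of executing hand-written code.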
The latest version also includes a package for hyperparameter tracking and for collecting traces from the LLM. This allows for meta-optimization, for example. See examples/research for a simple implementation using generics.

You can find an example wandb trace at: https://wandb.ai/blackhc/blackboard-pagi/reports/meta-optimization-example-trace-vmlldZO3MDMXODEZ?accessToken=p9hubfskmq1z5yj1uz7wx1idh304diiernp7pjlrjrybpaozlwv3dnitjt7vni1j

The prompts, which show off the use of generics, are straightforward:
```python
from typing import Generic, TypeVar

from pydantic import BaseModel, Field
from pydantic.generics import GenericModel

# `llm_explicit_function` is provided by the research code; see examples/research.

T_TaskParameters = TypeVar("T_TaskParameters")
T_TaskResults = TypeVar("T_TaskResults")
T_Hyperparameters = TypeVar("T_Hyperparameters")


class TaskRun(GenericModel, Generic[T_TaskParameters, T_TaskResults, T_Hyperparameters]):
    """
    The task run. This is the 'data' we use to optimize the hyperparameters.
    """

    task_parameters: T_TaskParameters = Field(..., description="The task parameters.")
    hyperparameters: T_Hyperparameters = Field(
        ...,
        description="The hyperparameters used for the task. We optimize these.",
    )
    all_chat_chains: dict = Field(..., description="The chat chains from the task execution.")
    return_value: T_TaskResults | None = Field(
        ..., description="The results of the task. (None for exceptions/failure.)"
    )
    exception: list[str] | str | None = Field(..., description="Exception that occurred during the task execution.")


class TaskReflection(BaseModel):
    """
    The reflections on the task.

    This contains the lessons we learn from each task run to come up with better
    hyperparameters to try.
    """

    feedback: str = Field(
        ...,
        description=(
            "Only look at the final results field. Does its content satisfy the "
            "task description and task parameters? Does it contain all the relevant "
            "information from the all_chains and all_prompts fields? What could be improved "
            "in the results?"
        ),
    )
    evaluation: str = Field(
        ...,
        description=(
            "The evaluation of the outputs given the task. Is the output satisfying? What is wrong? What is missing?"
        ),
    )
    hyperparameter_suggestion: str = Field(
        ...,
        description="How we want to change the hyperparameters to improve the results. What could we try to change?",
    )
    hyperparameter_missing: str = Field(
        ...,
        description=(
            "What hyperparameters are missing to improve the results? What could "
            "be changed that is not exposed via hyperparameters?"
        ),
    )


class TaskInfo(GenericModel, Generic[T_TaskParameters, T_TaskResults, T_Hyperparameters]):
    """
    The task run and the reflection on the experiment.
    """

    task_parameters: T_TaskParameters = Field(..., description="The task parameters.")
    hyperparameters: T_Hyperparameters = Field(
        ...,
        description="The hyperparameters used for the task. We optimize these.",
    )
    reflection: TaskReflection = Field(..., description="The reflection on the task.")


class OptimizationInfo(GenericModel, Generic[T_TaskParameters, T_TaskResults, T_Hyperparameters]):
    """
    The optimization information. This is the data we use to optimize the
    hyperparameters.
    """

    older_task_summary: str | None = Field(
        None,
        description=(
            "A summary of previous experiments and the proposed changes with "
            "the goal of avoiding trying the same changes repeatedly."
        ),
    )
    task_infos: list[TaskInfo[T_TaskParameters, T_TaskResults, T_Hyperparameters]] = Field(
        ..., description="The most recent tasks we have run and our reflections on them."
    )
    best_hyperparameters: T_Hyperparameters = Field(..., description="The best hyperparameters we have found so far.")


class OptimizationStep(GenericModel, Generic[T_TaskParameters, T_TaskResults, T_Hyperparameters]):
    """
    The next optimization steps. New hyperparameters we want to try experiments with and new
    task parameters we want to evaluate on, given the previous experiments.
    """

    best_hyperparameters: T_Hyperparameters = Field(
        ...,
        description="The best hyperparameters we have found so far given task_infos and history.",
    )
    suggestion: str = Field(
        ...,
        description=(
            "The suggestions for the next experiments. What could we try to "
            "change? We will try several tasks next and several sets of hyperparameters. "
            "Let's think step by step."
        ),
    )
    task_parameters_suggestions: list[T_TaskParameters] = Field(
        ...,
        description="The task parameters we want to try next.",
        hint_min_items=1,
        hint_max_items=4,
    )
    hyperparameter_suggestions: list[T_Hyperparameters] = Field(
        ...,
        description="The hyperparameters we want to try next.",
        hint_min_items=1,
        hint_max_items=2,
    )


class ImprovementProbability(BaseModel):
    considerations: list[str] = Field(..., description="The considerations for potential improvements.")
    probability: float = Field(..., description="The probability of improvement.")


class LLMOptimizer:
    @llm_explicit_function
    @staticmethod
    def reflect_on_task_run(
        language_model,
        task_run: TaskRun[T_TaskParameters, T_TaskResults, T_Hyperparameters],
    ) -> TaskReflection:
        """
        Reflect on the results given the task parameters and hyperparameters.

        This contains the lessons we learn from each task run to come up with better
        hyperparameters to try.
        """
        raise NotImplementedError()

    @llm_explicit_function
    @staticmethod
    def summarize_optimization_info(
        language_model,
        optimization_info: OptimizationInfo[T_TaskParameters, T_TaskResults, T_Hyperparameters],
    ) -> str:
        """
        Summarize the optimization info. We want to preserve all relevant knowledge for
        improving the hyperparameters in the future. All information from previous
        experiments will be forgotten except for what is captured in this summary.
        """
        raise NotImplementedError()

    @llm_explicit_function
    @staticmethod
    def suggest_next_optimization_step(
        language_model,
        optimization_info: OptimizationInfo[T_TaskParameters, T_TaskResults, T_Hyperparameters],
    ) -> OptimizationStep[T_TaskParameters, T_TaskResults, T_Hyperparameters]:
        """
        Suggest the next optimization step.
        """
        raise NotImplementedError()

    @llm_explicit_function
    @staticmethod
    def probability_for_improvement(
        language_model,
        optimization_info: OptimizationInfo[T_TaskParameters, T_TaskResults, T_Hyperparameters],
    ) -> ImprovementProbability:
        """
        Return the probability for improvement (between 0 and 1).

        This is your confidence that your next optimization steps will improve the
        hyperparameters given the information provided. If you think that the
        information available is unlikely to lead to better hyperparameters, return 0.
        If you think that the information available is very likely to lead to better
        hyperparameters, return 1. Be concise.
        """
        raise NotImplementedError()
```
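For orientation, here is a hedged sketch of how these declared functions might be driven in a simple meta-optimization loop. It is not the implementation from examples/research: `run_task`, `task_info_type` (the concrete `TaskInfo[...]` specialization for your parameter types), the number of rounds, and the 0.5 stopping threshold are all assumptions, and it presumes the `llm_explicit_function`-decorated static methods can be called directly with a language model as their first argument, as their signatures suggest.

```python
def meta_optimize(language_model, optimization_info, run_task, task_info_type, num_rounds=3):
    """Illustrative driver: reflect on task runs, summarize history, and request the next step."""
    for _ in range(num_rounds):
        # Ask the LLM how likely further optimization is to help; stop early if it is pessimistic.
        improvement = LLMOptimizer.probability_for_improvement(language_model, optimization_info)
        if improvement.probability < 0.5:  # assumed stopping threshold
            break

        # Ask the LLM which task parameters and hyperparameters to try next.
        step = LLMOptimizer.suggest_next_optimization_step(language_model, optimization_info)

        for task_parameters in step.task_parameters_suggestions:
            for hyperparameters in step.hyperparameter_suggestions:
                task_run = run_task(task_parameters, hyperparameters)  # returns a TaskRun
                reflection = LLMOptimizer.reflect_on_task_run(language_model, task_run)
                optimization_info.task_infos.append(
                    task_info_type(
                        task_parameters=task_parameters,
                        hyperparameters=hyperparameters,
                        reflection=reflection,
                    )
                )

        # Compress older history into a summary so the prompt stays small.
        optimization_info.older_task_summary = LLMOptimizer.summarize_optimization_info(
            language_model, optimization_info
        )
        optimization_info.best_hyperparameters = step.best_hyperparameters

    return optimization_info.best_hyperparameters
```

The core `llm_strategy` decorator itself is applied to plain dataclasses, as in the following customer-database example: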
```python
from dataclasses import dataclass

from langchain.llms import OpenAI

from llm_strategy import llm_strategy


@llm_strategy(OpenAI(max_tokens=256))
@dataclass
class Customer:
    key: str
    first_name: str
    last_name: str
    birthdate: str
    address: str

    @property
    def age(self) -> int:
        """Return the current age of the customer.

        This is a computed property based on `birthdate` and the current year (2022).
        """
        raise NotImplementedError()


@dataclass
class CustomerDatabase:
    customers: list[Customer]

    def find_customer_key(self, query: str) -> list[str]:
        """Find the keys of the customers that match a natural language query best (sorted by closeness to the match).

        We support semantic queries instead of SQL, so we can search for things like
        "the customer that was born in 1990".

        Args:
            query: Natural language query

        Returns:
            The keys of the best matching customers in the database, sorted by closeness to the match.
        """
        raise NotImplementedError()

    def load(self):
        """Load the customer database from a file."""
        raise NotImplementedError()

    def store(self):
        """Store the customer database to a file."""
        raise NotImplementedError()


@llm_strategy(OpenAI(max_tokens=1024))
@dataclass
class MockCustomerDatabase(CustomerDatabase):
    def load(self):
        self.customers = self.create_mock_customers(10)

    def store(self):
        pass

    @staticmethod
    def create_mock_customers(num_customers: int = 1) -> list[Customer]:
        """
        Create mock customers with believable data (our customers are world citizens).
        """
        raise NotImplementedError()
```
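A short usage sketch (an assumption based on the interface above, not code from the repository; each call below triggers LLM requests):

```python
db = MockCustomerDatabase(customers=[])
db.load()  # the LLM generates 10 mock customers

# Semantic search over the mock data: the LLM returns the matching customer keys.
keys = db.find_customer_key("the customer that was born in 1990")
print(keys)
```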
See examples/customer_database_search.py for the full example.

Clone the repository first. Then, install the environment and the pre-commit hooks with

```bash
make install
```

The CI/CD pipeline will be triggered when you open a pull request, merge to main, or create a new release.

To finalize the set-up for publishing to PyPI or Artifactory, see here. For activating the automatic documentation with MkDocs, see here. To enable the code coverage reports, see here.
To release a new version, add your PyPI API token to the project secrets with the name PYPI_TOKEN and create a new tag of the form *.*.*. For more details, see here.

Repository initiated with fpgmaas/cookiecutter-poetry.