

@kiranandcode
Contributor

This is mostly just a small example I've put together to illustrate how effectful could be used to implement proof search, with an LLM doing the tactic selection. It's more for discussion than to be merged, since it adds dependencies on pypantograph and Lean to run, but it may be an interesting starting point if we do anything with automating formal proofs.

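# Assumed to be in scope: Template and NotHandled from effectful's template
# machinery, Tactic and TypeBinding from the definitions accompanying this
# example.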
@Template.define
def predict_tactic(
    goal_state: str, variables: list[TypeBinding], proof_script: str
) -> Tactic:
    """
    You are an experienced proof engineer, working for the Lean FRO. You are proficient with the internals of the Lean theorem prover.
    You are currently working on a proof. This proof certifies mission critical software, and completing it will save engineers $200 worth of time.

    You have written the following proof script:

    {proof_script}

    The current goal state is:

    {goal_state}

    You have access to the following variables:

    {variables}

    You must predict one of the following tactics:

    - induction
    - simp (optionally specify which lemmas to provide)
    - rw [<lemmas-to-rewrite-with>]
    - grind

    Take a deep breath, think carefully, and predict the next tactic to apply. You can do it.
    """
    raise NotHandled
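
For context, here's a rough sketch of the driver loop this template is meant to plug into. This isn't the code from the PR, just a minimal greedy illustration assuming PyPantograph's Server / goal_start / goal_tactic interface; the str(...) renderings of goals and tactics, the use of goal.variables for the TypeBinding list, and the fixed search budget are all simplifying assumptions (a real search would sample several candidate tactics and backtrack on failure).

from pantograph import Server

def search(server: Server, theorem: str, max_steps: int = 16) -> list[str]:
    # Start a fresh proof of `theorem` and greedily apply predicted tactics.
    state = server.goal_start(theorem)
    script: list[str] = []
    for _ in range(max_steps):
        if state.is_solved:
            return script
        goal = state.goals[0]
        # Ask the LLM-backed template for the next tactic.
        tactic = predict_tactic(
            goal_state=str(goal),
            variables=goal.variables,  # assumes Goal exposes its hypotheses here
            proof_script="\n".join(script),
        )
        rendered = str(tactic)  # assumes Tactic renders to Lean tactic syntax
        state = server.goal_tactic(state, goal_id=0, tactic=rendered)
        script.append(rendered)
    raise RuntimeError("search budget exhausted")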

