The tests are all in the test/ folder, so I used my files-to-prompt tool to turn every .py file in that folder into a single prompt, using the XML-ish format that Claude likes (the -c option):
```shell
files-to-prompt wasmtime-py/tests -e py -c
```
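That `-c` flag asks for Claude-style XML output. To give a sense of the shape of the prompt this produces, here's a minimal stdlib Python sketch of roughly what that concatenation does. The tag names are my approximation of the tool's output format, not a verified reproduction of it:

```python
# Rough sketch of files-to-prompt's Claude XML (-c) mode: wrap each
# matching file in <document> tags with its path and contents.
# Tag names are an approximation, not the tool's exact format.
from pathlib import Path


def files_to_claude_xml(folder, extension=".py"):
    parts = ["<documents>"]
    for i, path in enumerate(sorted(Path(folder).rglob(f"*{extension}")), 1):
        parts.append(f'<document index="{i}">')
        parts.append(f"<source>{path}</source>")
        parts.append("<document_contents>")
        parts.append(path.read_text())
        parts.append("</document_contents>")
        parts.append("</document>")
    parts.append("</documents>")
    return "\n".join(parts)
```

The point of the XML-ish wrapper is that Claude has been trained to pay attention to document boundaries marked up this way, which matters when you're stuffing dozens of files into one prompt.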
Piping that through ttok shows that it’s about 34,780 OpenAI tokens. I pasted the whole thing (using | pbcopy to copy it to my clipboard) into my Claude token counter tool and got 43,490 Claude tokens - easily enough to fit within Claude 3.5 Sonnet’s 200,000 token limit.
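ttok uses OpenAI's tiktoken tokenizer for an exact count. If you just want a ballpark figure without installing anything, a common rule of thumb is roughly four characters per English token. This is a hypothetical helper based on that heuristic, not what ttok actually does:

```python
def approx_openai_tokens(text: str) -> int:
    """Very rough estimate: OpenAI tokenizers average about four
    characters per token for English text. Use ttok or tiktoken
    when you need a real count."""
    return max(1, len(text) // 4)
```

A heuristic like this is only good for a sanity check before pasting a large prompt; different tokenizers (OpenAI's vs. Claude's) count the same text differently, as the 34,780 vs. 43,490 figures above show.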
So I ran this:
```shell
files-to-prompt -e py wasmtime-py/tests -c | \
  llm -m claude-3.5-sonnet -s \
  'write detailed usage documentation including realistic examples'
```
I’m using the llm-claude-3 plugin here, with a system prompt of:
write detailed usage documentation including realistic examples
You can see the full transcript here, saved using llm logs -c | pbcopy (I then hand-edited in a <details><summary> to hide the lengthy piped input). As always I’m sharing the transcript in a private Gist to keep this AI-generated, unverified text from being indexed by search engines.
At the end of the first chunk of output Claude offered the following:
This documentation covers the core functionality. The bindings also support more advanced features like:
Component model and interface types
Resource types and references
Custom linking and importing
Memory management controls
Execution limits and interruption
Let me know if you would like me to expand on any of these topics!
So I followed up with another prompt (using llm -c for “continue current conversation”):
```shell
llm -c 'write a detailed section about memory management and one about execution limits'
```
How good is this documentation? It’s pretty solid! The only material it had to go on was the content of those tests, so I can be reasonably confident it didn’t make any glaringly terrible mistakes, and the examples it gave me are more likely than not to execute correctly.
Someone with more depth of experience with the project than me could take this as an initial draft and iterate on it to create verified, generally useful documentation.