LLM prompt caching in LangChain: caching responses avoids repeated API calls for identical prompts, and it supports the newer chat models as well as the legacy LLM classes.
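A minimal sketch of enabling a process-wide cache for a chat model. The `langchain-openai` package, the `gpt-4o-mini` model name, and an `OPENAI_API_KEY` environment variable are assumptions for illustration; any chat model integration would work the same way.

```python
# Sketch: in-memory LLM caching with LangChain (assumes langchain-core,
# langchain-openai are installed and OPENAI_API_KEY is set).
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

# Register a global in-memory cache shared by all LLM and chat-model calls.
set_llm_cache(InMemoryCache())

chat = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption

# First call goes to the API; the response is stored in the cache.
print(chat.invoke("Tell me a joke").content)

# The identical prompt is now served from the cache: it returns almost
# instantly and incurs no additional API cost.
print(chat.invoke("Tell me a joke").content)
```

`InMemoryCache` only lives for the lifetime of the process; LangChain also ships persistent cache backends (for example SQLite-based ones) that can be passed to `set_llm_cache` in the same way.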