TIL: How LangChain makes it easy to interact with LLMs

Sun 17 September 2023

A summary of my studies

  • LangChain provides a "sensible" framework that does a lot of heavy lifting for us - LangChain: Build AI apps with LLMs through composability
  • It also seems to be a widely adopted approach across different tech companies, at least over the last few months
  • Two main advantages for backend devs: it can integrate directly;
    • with your database via a SQL chain
    • with your API via an APIChain. It's easy enough that if you provide it with a standard OpenAPI schema, it can work out which API endpoint to call and fetch data from to answer the question
  • You can think of it as writing a step-by-step, repeatable flow for interacting with an LLM in a very iteration-friendly way (you can build on your prompts and templates quite easily) - that is, composing a bunch of different prompts/instructions and chaining them together towards a desired outcome. At a super high level, what it boils down to is a DAG
  • One thing to note: it's not restricted to OpenAI, but can be tweaked to work with other models as well - for instance, open-source models available on Hugging Face.
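The chaining-as-a-DAG idea above can be sketched in plain Python. To be clear, this is a conceptual illustration of composing prompt templates into a pipeline, not LangChain's actual API; the `fake_llm` function is a made-up stand-in so the sketch runs offline without any model or API key.

```python
def make_step(template):
    """Return a step that fills `template` with its input and calls the LLM."""
    def step(llm, text):
        return llm(template.format(input=text))
    return step

def run_chain(llm, steps, text):
    """Pipe each step's output into the next - a simple linear DAG."""
    for step in steps:
        text = step(llm, text)
    return text

# Stand-in for a real LLM call (hypothetical - keeps the sketch runnable).
def fake_llm(prompt):
    return f"<answer to: {prompt}>"

chain = [
    make_step("Summarise the following text: {input}"),
    make_step("Translate this summary to French: {input}"),
]
result = run_chain(fake_llm, chain, "LangChain composes prompts into chains.")
print(result)
```

Because each step is just a function from text to text, you can swap prompts, reorder steps, or add new ones without touching the rest of the flow - which is the "iteration-friendly" property the notes describe.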

Counter-points

LangChain is essentially an abstraction over how you'd normally query LLMs - so while it doesn't do anything fancy (and, in rare cases, can't be used for highly customised integrations), it gets the job done in most application-specific use cases. There are some rants on Hacker News about why it "sucks":
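To see what that abstraction covers: "normally querying" an LLM is just hand-rolling an HTTP request yourself. Here's a rough sketch of the JSON body you'd build for OpenAI's chat completions endpoint - no request is actually sent, and the prompt text is made up for illustration.

```python
import json

def build_chat_request(user_prompt, model="gpt-3.5-turbo"):
    """Hand-roll the request body that a framework would otherwise build for you."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

# You'd POST this as JSON to https://api.openai.com/v1/chat/completions
# with an Authorization header. Templating, chaining the next call on this
# one's output, and parsing the response are all left to you - which is
# exactly the heavy lifting LangChain abstracts away.
payload = build_chat_request("Summarise our Q3 sales data.")
print(json.dumps(payload, indent=2))
```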

Case:

  • Klarna has nice documentation and an implementation of this (LangChain refers to it in their docs)