A summary of my studies
LangChain provides a "sensible" framework that does a lot of the heavy lifting for us ("LangChain: Build AI apps with LLMs through composability"). It also seems to be a widely adopted approach across different tech companies, at least over the last few months.
- Two main advantages for backend devs, who can integrate it directly:
- You can think of it as writing a step-by-step, repeatable flow for interacting with an LLM in a very iteration-friendly way (you can build on your prompts and templates quite easily) - that is, composing a bunch of different prompts/instructions and chaining them together toward a desired outcome. At a super high level, it boils down to a DAG (see the first sketch after this list).
- One thing to note: it's not restricted to OpenAI; it can be tweaked to work with other models as well, for instance open-source models available on Hugging Face (see the second sketch below).
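A minimal sketch of the "chain of prompts" idea, using the classic LangChain API (module paths have moved around in newer releases, so treat the imports as indicative; it assumes `langchain` and `openai` are installed and `OPENAI_API_KEY` is set - the product/slogan prompts are just made-up examples):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# Step 1: turn a product description into a brand name.
name_prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest one short brand name for a company that makes {product}.",
)
name_chain = LLMChain(llm=llm, prompt=name_prompt)

# Step 2: feed the generated name into a second prompt.
slogan_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a one-line slogan for a company called {company_name}.",
)
slogan_chain = LLMChain(llm=llm, prompt=slogan_prompt)

# Compose the two steps into a single repeatable pipeline - the simplest possible DAG.
pipeline = SimpleSequentialChain(chains=[name_chain, slogan_chain])
print(pipeline.run("ergonomic keyboards"))
```

Each step is its own `LLMChain` node; `SimpleSequentialChain` just wires the output of one node into the input of the next, which is what makes prompts easy to iterate on in isolation.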
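And a sketch of the "not restricted to OpenAI" point: the same chains can be pointed at an open-source model on the Hugging Face Hub by swapping out the `llm` object (assumes `HUGGINGFACEHUB_API_TOKEN` is set; the `repo_id` is just an example):

```python
from langchain.llms import HuggingFaceHub

# Drop-in replacement for the OpenAI LLM above; the prompts and chains stay the same.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",  # example open-source model on the Hub
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
```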
Counter-points
LangChain is essentially an abstraction over how you'd normally query LLMs - so while it doesn't do anything fancy (and in rare cases cannot be used for super-customised integrations), it does get the job done in most application-specific use cases. Some rants on Hacker News on why it "sucks":
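To make the "thin abstraction" point concrete, this is roughly the raw call and manual prompt formatting that a single chain step wraps (legacy pre-1.0 `openai` client; the model name is just an example):

```python
import openai

# Manual prompt templating plus a direct completion call - the plumbing a chain step wraps.
prompt = "Suggest one short brand name for a company that makes {product}.".format(
    product="ergonomic keyboards"
)
response = openai.Completion.create(
    model="text-davinci-003",  # example model on the legacy completions endpoint
    prompt=prompt,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```

Chaining a few of these by hand means threading each output into the next prompt yourself; that plumbing is what LangChain packages up, which is why it can feel limiting for super-customised integrations but convenient for typical application code.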
Case:
- Klarna has nice documentation and an implementation of this (LangChain refers to it in their docs)