Integrating LLMs into web apps sometimes feels like organizing a discussion between the app and the LLM. Sometimes the program needs to ask for something in a loop, sometimes it needs to evaluate the LLM’s output, and often it serves as a mediator between the human and the LLM. The keyword for this is orchestration. To implement high-quality orchestration, we need robust abstractions. Let’s see why a graph is a good choice for this.
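To give a feel for the idea, here is a minimal sketch of graph-based orchestration: each node is a step (generate with the LLM, evaluate its output), and each node decides which node runs next, which naturally expresses loops. The node names and state shape are illustrative assumptions, not any particular library's API.

```javascript
// A tiny graph runner: nodes are async functions that receive the shared
// state and return the updated state plus the name of the next node.
async function runGraph(nodes, start, state) {
  let current = start;
  while (current) {
    const { next, state: updated } = await nodes[current](state);
    state = updated;
    current = next; // a node returns `null` as `next` to end the run
  }
  return state;
}

// Hypothetical nodes: `generate` would call an LLM in a real app;
// `evaluate` loops back to `generate` until the draft is acceptable.
const nodes = {
  generate: async (state) => ({
    next: "evaluate",
    state: { ...state, draft: `draft of ${state.prompt}` },
  }),
  evaluate: async (state) => ({
    next: state.draft.length > 0 ? null : "generate",
    state,
  }),
};
```

Because the control flow lives in the edges rather than in nested `if`/`while` statements, adding a new step (say, a human-approval node) is just a new entry in the graph.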
One of the most exciting things for me in building software is scale. I enjoy challenges related to handling large amounts of data, dealing with various limitations, and processing information efficiently. In this article, I will focus on the Map-Reduce pattern, which, combined with parallelization, will help us summarize an entire blog from its post contents.
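The core shape of that approach can be sketched in a few lines, assuming a hypothetical `summarize(text)` helper that would call an LLM in a real implementation (stubbed out here):

```javascript
// Placeholder for an LLM call: a real implementation would send `text`
// to a model and return its summary.
async function summarize(text) {
  return text.split(/\s+/).slice(0, 5).join(" ");
}

async function summarizeBlog(posts) {
  // Map: summarize each post independently, running the calls in parallel.
  const partials = await Promise.all(posts.map((post) => summarize(post)));
  // Reduce: combine the partial summaries into one final summary.
  return summarize(partials.join("\n"));
}
```

The map step is embarrassingly parallel, so `Promise.all` lets every per-post summary run concurrently; only the final reduce depends on all of them.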
Let's dive into the topic of GPT-4o-mini's context window and explore strategies and tools for managing it in JavaScript by building a keyword extractor capable of handling large amounts of text data.
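One basic strategy for staying within a context window is chunking the input. A minimal sketch, assuming the common rough heuristic of about 4 characters per token for English text (a real implementation would use an actual tokenizer such as tiktoken):

```javascript
// Split `text` into chunks that each fit within `maxTokens`, approximating
// token count as characters / 4. This is a rough heuristic, not exact.
function splitIntoChunks(text, maxTokens) {
  const maxChars = maxTokens * 4;
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}
```

Each chunk can then be sent to the model separately, and the per-chunk keyword lists merged afterwards.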
Let's talk about Async Local Storage in Node.js. We'll explore what it is and where it's useful, and we'll attempt to reimplement a feature found in many of the popular meta-frameworks out there.