graph LR
Input_Data_Preparation["Input & Data Preparation"]
Knowledge_Base_Retrieval["Knowledge Base & Retrieval"]
LLM_Core_Prompting["LLM Core & Prompting"]
Workflow_Agent_Orchestration["Workflow & Agent Orchestration"]
Tool_Output_Processing["Tool & Output Processing"]
Memory_State_Management["Memory & State Management"]
Evaluation["Evaluation"]
Input_Data_Preparation -- "sends processed documents to" --> Knowledge_Base_Retrieval
Knowledge_Base_Retrieval -- "provides context to" --> LLM_Core_Prompting
LLM_Core_Prompting -- "passes raw responses to" --> Tool_Output_Processing
Tool_Output_Processing -- "guides" --> Workflow_Agent_Orchestration
Workflow_Agent_Orchestration -- "initiates calls to" --> LLM_Core_Prompting
Workflow_Agent_Orchestration -- "invokes" --> Tool_Output_Processing
Workflow_Agent_Orchestration -- "accesses and updates" --> Memory_State_Management
Workflow_Agent_Orchestration -- "queries" --> Knowledge_Base_Retrieval
Memory_State_Management -- "utilizes" --> Knowledge_Base_Retrieval
Evaluation -- "assesses performance of" --> LLM_Core_Prompting
Evaluation -- "assesses performance of" --> Workflow_Agent_Orchestration
Evaluation -- "assesses performance of" --> Tool_Output_Processing
click Input_Data_Preparation href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/langchain/Input_Data_Preparation.md" "Details"
click Knowledge_Base_Retrieval href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/langchain/Knowledge_Base_Retrieval.md" "Details"
click LLM_Core_Prompting href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/langchain/LLM_Core_Prompting.md" "Details"
click Tool_Output_Processing href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/langchain/Tool_Output_Processing.md" "Details"
click Memory_State_Management href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/langchain/Memory_State_Management.md" "Details"
The LangChain architecture is a modular framework for building sophisticated AI applications, emphasizing a pipeline-driven approach. It orchestrates a flow that begins with Input & Data Preparation, feeding into a Knowledge Base & Retrieval system for contextual information. This context, along with user prompts, is processed by the LLM Core & Prompting component, which interacts with various Large Language Models. The LLM's raw output is then handled by Tool & Output Processing for structuring and external interactions. The central Workflow & Agent Orchestration component dynamically manages the application's logic, composing chains and agents that leverage LLMs, tools, and Memory & State Management for persistent context. This design promotes extensibility, allowing developers to swap out different implementations of components (e.g., LLM providers, vector stores) without altering the overall application logic, while Evaluation provides continuous performance assessment.
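The pipe-style composition described above can be sketched with a minimal stand-in for LangChain's composable-runnable idea. This is not the real LCEL API (the actual interface lives in `langchain_core.runnables.base`); the `Runnable` class and the three stage functions here are hypothetical, illustrating only how stages chain left to right:

```python
# Minimal stand-in for LangChain's composable-pipeline idea: each stage is
# a callable, and `|` chains stages left to right (not the real LCEL API).
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: feed this stage's output into the next stage's input.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Hypothetical stages standing in for retrieval, the LLM call, and parsing.
retrieve = Runnable(lambda q: {"question": q, "context": "docs about " + q})
generate = Runnable(lambda d: f"Answer({d['question']} | {d['context']})")
parse = Runnable(lambda s: s.strip())

chain = retrieve | generate | parse
print(chain.invoke("vector stores"))
# → Answer(vector stores | docs about vector stores)
```

The point of the design is visible even in this sketch: any stage can be swapped (a different retriever, a different model client) without touching the rest of the chain.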
Input & Data Preparation
Manages initial user input and the ingestion, loading, and transformation of raw data into a usable format for the system.
Related Classes/Methods:
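A rough sketch of the ingestion step: load raw text, normalize it, and split it into fixed-size chunks. The function below is hypothetical; LangChain's real text splitters are considerably more sophisticated (e.g. recursive character splitting with overlap), but the load-transform-split shape is the same:

```python
# Hypothetical document-ingestion sketch: normalize whitespace, then split
# into fixed-size chunks ready for embedding and indexing downstream.
def load_and_split(raw: str, chunk_size: int = 40) -> list[str]:
    text = " ".join(raw.split())  # collapse all runs of whitespace
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


chunks = load_and_split("LangChain   ingests raw data\nand prepares it for retrieval.")
print(chunks)
```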
Knowledge Base & Retrieval
Responsible for converting data into embeddings, storing and managing them in vector stores, and efficiently retrieving relevant information to provide context.
Related Classes/Methods:
langchain.embeddings.base
langchain_qdrant.vectorstores
langchain.retrievers.multi_query
langchain_core.indexing.api
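The embed-store-retrieve cycle can be illustrated with a toy in-memory vector store. The bag-of-words "embedding" and the `VectorStore` class below are stand-ins invented for illustration; real deployments use model-based embeddings and stores such as `langchain_qdrant.vectorstores`, but the similarity-ranked lookup works the same way:

```python
import math
from collections import Counter

# Toy embedding: a bag-of-words term-count vector (illustrative only).
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Hypothetical in-memory store: rank documents by cosine similarity."""

    def __init__(self):
        self.docs = []

    def add(self, text: str):
        self.docs.append((embed(text), text))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]


store = VectorStore()
store.add("embeddings map text to vectors")
store.add("agents call tools in a loop")
print(store.retrieve("what are embeddings"))
# → ['embeddings map text to vectors']
```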
LLM Core & Prompting
Handles all interactions with Large Language Models, including model initialization, prompt construction, and generating responses. It abstracts away provider-specific details.
Related Classes/Methods:
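Prompt construction and the provider abstraction can be sketched as follows. Both `PromptTemplate` and `fake_llm` here are simplified stand-ins (the real `langchain_core` prompt classes handle message roles, partial variables, and chat histories); the sketch only shows why templating is kept separate from the model call:

```python
# Simplified prompt-template sketch: fill named slots, then hand the
# rendered string to a provider-agnostic model callable.
class PromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)


def fake_llm(prompt: str) -> str:
    # Stand-in for a provider call; a real model client would go here.
    return "RESPONSE to: " + prompt


prompt = PromptTemplate("Answer using this context:\n{context}\n\nQuestion: {question}")
print(fake_llm(prompt.format(context="LangChain docs", question="What is a chain?")))
```

Because the template knows nothing about the provider, swapping `fake_llm` for a real client changes nothing upstream.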
Workflow & Agent Orchestration
The central control unit that defines and executes complex multi-step workflows, including sequential chains, composable runnables, and intelligent agents capable of dynamic decision-making.
Related Classes/Methods:
langchain.chains.base
langchain_core.runnables.base
langchain.agents.agent
langchain.agents.agent_iterator
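The dynamic decision-making loop of an agent can be sketched in a few lines. The scripted "LLM" below is a deterministic stand-in invented for illustration; in the real `langchain.agents.agent` machinery the model decides at each step whether to call a tool or finish, and the orchestrator dispatches accordingly:

```python
# Toy agent loop: the "LLM" emits either a tool call or a final answer,
# and the orchestrator dispatches tools until the model finishes.
def scripted_llm(observations):
    # Stand-in for a model decision; a real agent would prompt an LLM here.
    if not observations:
        return ("tool", "lookup", "population of Mars")
    return ("final", f"Based on {observations[-1]!r}, done.")


tools = {"lookup": lambda q: f"no data for {q}"}

def run_agent(max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        decision = scripted_llm(observations)
        if decision[0] == "final":
            return decision[1]
        _, name, arg = decision
        observations.append(tools[name](arg))  # execute tool, record result
    return "step limit reached"


print(run_agent())
```

The `max_steps` bound mirrors a real concern: agent loops need a hard iteration limit so a confused model cannot spin forever.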
Tool & Output Processing
Manages the definition and execution of external tools used by agents and chains, and processes raw LLM outputs into structured, actionable formats.
Related Classes/Methods:
langchain_core.tools.base
langchain_core.output_parsers.json
langchain.agents.output_parsers.openai_functions
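Output parsing is the step that turns a raw model reply into something a program can act on. A minimal sketch, assuming the model was asked to emit a JSON tool call (robust versions of this live in `langchain_core.output_parsers.json`; the brace-slicing heuristic here is only illustrative):

```python
import json

# Sketch of parsing a raw LLM reply into a structured tool call.
def parse_tool_call(raw: str) -> dict:
    # Tolerate prose around the payload by slicing the outermost braces.
    start, end = raw.index("{"), raw.rindex("}") + 1
    return json.loads(raw[start:end])


raw = 'Sure! {"tool": "search", "args": {"query": "LangChain"}} Hope that helps.'
call = parse_tool_call(raw)
print(call["tool"], call["args"])
```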
Memory & State Management
Provides mechanisms for storing and retrieving conversational history and other relevant state information, enabling continuity across interactions.
Related Classes/Methods:
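A conversation buffer is the simplest form of the state described here. The class below is a hypothetical sketch, akin in spirit to LangChain's chat-history memory classes though the real API differs: it saves turns and replays only the most recent ones as context, which is the usual way to keep prompts within a token budget:

```python
# Illustrative conversation memory: store turns, replay the recent ones.
class ConversationBuffer:
    def __init__(self, max_turns: int = 4):
        self.max_turns = max_turns
        self.turns = []

    def save(self, role: str, text: str):
        self.turns.append((role, text))
        self.turns = self.turns[-self.max_turns:]  # keep only recent turns

    def as_context(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.turns)


memory = ConversationBuffer(max_turns=2)
memory.save("user", "Hi")
memory.save("ai", "Hello!")
memory.save("user", "What did I say?")
print(memory.as_context())
```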
Evaluation
Provides tools and frameworks for assessing the performance and quality of LLMs, chains, and agents against defined criteria or datasets.
Related Classes/Methods:
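A toy evaluation harness shows the shape of this component: run a chain over a reference dataset and score its outputs against expected answers. The exact-match criterion below is the simplest possible choice, used here for illustration; real LangChain evaluators also support LLM-graded and embedding-distance criteria:

```python
# Toy evaluation harness: fraction of dataset examples the chain answers
# exactly as expected (exact match is the simplest scoring criterion).
def evaluate(chain, dataset) -> float:
    hits = sum(1 for question, expected in dataset if chain(question) == expected)
    return hits / len(dataset)


def echo_chain(q: str) -> str:
    # Trivial stand-in "chain" for demonstration.
    return q.upper()


dataset = [("ok", "OK"), ("hi", "HI"), ("no", "nope")]
print(evaluate(echo_chain, dataset))  # two of three examples match
```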