Intro to Chains in LangChain & Their Types

LangChain Chains Explained

Welcome to the third blog in the LangChain series, where we get to the root of LangChain chains. So what is a Chain in LangChain? Let’s delve deeper into the topic!

What are Chains in LangChain?

Chains in the LangChain library serve as wrappers for various components within the library, enabling the combination and utilization of different primitives and functionalities. They connect these components into a structured sequence of actions or operations.

In layman’s terms, you can think of chains as wrappers around LangChain’s primitive components. Primitive components include prompts, utilities in the library, LLMs, and even other chains.

Let’s try to understand this with a simple example, using the most basic LLM chain we have been working with so far:

from langchain import OpenAI, PromptTemplate
from langchain.chains import LLMChain


llm = OpenAI(
    openai_api_key="Your OpenAI API key",
    model_name="text-davinci-003",
)
template = """
   Answer the following questions using 3 bulltes points for each question:


   Question: {question}
  
   Answer:
"""
prompt_template = PromptTemplate(template=template,input_variables= ["question"])

llm_chain = LLMChain(
    llm=llm,
    prompt=prompt_template,
)

print(llm_chain.run(question="what is machine learning?"))

Output:

• Machine learning is a type of artificial intelligence that allows computers to learn from data, identify patterns, and make decisions without explicit programming. 
• It is a field of study that gives computers the ability to learn without being explicitly programmed. 
• It uses algorithms to analyze data, learn from it, and make predictions or decisions.

Let’s break it down. The chain uses the very first primitive, the prompt template:

template = """
   Answer the following questions using 3 bulltes points for each question:


   Question: {question}
  
   Answer:
"""
prompt_template = PromptTemplate(template=template,input_variables= ["question"])

Visualized, the flow is: the user inputs a question, the chain feeds it into the prompt template, and the LLM returns the response.

This is the simplest and most common chain.
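
Under the hood, this chain is doing little more than formatting the prompt and passing the result to the LLM. Here is a minimal sketch of the equivalent manual steps (illustrative only, not LangChain’s internal implementation), reusing the llm and prompt_template objects defined above:

# Roughly what LLMChain does for us: format the prompt, then call the LLM
formatted_prompt = prompt_template.format(question="what is machine learning?")
response = llm(formatted_prompt)  # the OpenAI wrapper can be called directly on a string
print(response)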

Types of Chains in LangChain

The LangChain library includes different types of chains, such as generic chains, combined document chains, and utility chains.

Generic chains are versatile building blocks that are rarely used in isolation. Developers use them as the foundational functionality for creating more intricate chains tailored to specific use cases.

Utility chains combine a language model chain with a specific utility in the LangChain library. These chains perform specialized tasks such as complex math computations, executing SQL commands, making API calls, or running bash commands.

Combined document chains interact with indexes, merging user data stored in those indexes with language model (LLM) outputs. They are used for tasks like answering questions over user-specific documents.
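
As a quick, hedged illustration of a combined document chain, the sketch below uses the load_qa_chain helper with the "stuff" strategy to answer a question over a couple of in-memory documents (it assumes the llm object defined in the earlier example):

from langchain.chains.question_answering import load_qa_chain
from langchain.schema import Document

# Illustrative sketch: "stuff" the documents into the prompt and ask the LLM
docs = [Document(page_content="LangChain chains wrap prompts, LLMs, utilities, and other chains.")]
qa_chain = load_qa_chain(llm, chain_type="stuff")
print(qa_chain.run(input_documents=docs, question="What do chains wrap?"))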

What are Utility Chains?

Utility chains serve specific purposes by pairing a language model chain with another utility in the LangChain library, letting the model’s output drive specialized tasks such as complex math computations, SQL commands, API calls, or bash commands.

In this example, we will use the LLMBashChain to understand utility chains. Let’s start with the basic imports:

from langchain.chains import LLMBashChain, LLMChain
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
from langchain import PromptTemplate
import inspect

Since these chains prompt the LLM to reason step by step (a chain of thought) and OpenAI tokens are costly, we need to keep track of how many tokens we use. We will write a small helper function for that:

def count_tokens(chain, query):
    with get_openai_callback() as cb:
        result = chain.run(query)
        print(f"spent a total of {cb.total_tokens} tokens\n")

    return result

Next, we define the base LLM with a temperature of 0 so that it does not get too creative with its output:

llm = OpenAI(
    openai_api_key="Your OpenAI API key",
    model_name="text-davinci-003",
    temperature=0
)

Next, let’s initialize the LLMBashChain with this LLM as its base and verbose=True so that it also prints the intermediate steps.

bash_llm = LLMBashChain.from_llm(llm=llm, verbose=True)


Now let’s try a sample command:

count_tokens(bash_llm,"write a query to count number of files in all subdirectories of a directory")

Output:

> Entering new LLMBashChain chain...
write a query to count number of files in all subdirectories of a directory

```bash
find . -type f | wc -l
```
Code: ['find . -type f | wc -l']
Answer: 17

> Finished chain.
spent a total of 195 tokens

'17\n'

So we get a command that looks right and would most likely work. But the question that arises is: how can the LLM generate such a specific and accurate response? Let’s look at the prompt behind it that produces this output:

print(bash_llm.prompt.template)

Output:

If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:

Question: "copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'"

I need to take the following actions:
- List all files in the directory
- Create a new directory
- Copy the files from the first directory into the second directory
```bash
ls
mkdir myNewDirectory
cp -r target/* myNewDirectory
```

That is the format. Begin!

Question: {question}

We now see that the prompt instructs the model to first break the task down into simple reasoning steps and then generate its response inside a ```bash block. But why is it instructed to start its response with ```bash rather than, say, a plain "Bash:" prefix? To find out, let’s inspect the chain’s _call method:

print(inspect.getsource(bash_llm._call))

Output:

def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        _run_manager.on_text(inputs[self.input_key], verbose=self.verbose)

        t = self.llm_chain.predict(
            question=inputs[self.input_key], callbacks=_run_manager.get_child()
        )
        _run_manager.on_text(t, color="green", verbose=self.verbose)
        t = t.strip()
        try:
            parser = self.llm_chain.prompt.output_parser
            command_list = parser.parse(t)  # type: ignore[union-attr]
        except OutputParserException as e:
            _run_manager.on_chain_error(e, verbose=self.verbose)
            raise e

        if self.verbose:
            _run_manager.on_text("\nCode: ", verbose=self.verbose)
            _run_manager.on_text(
                str(command_list), color="yellow", verbose=self.verbose
            )
        output = self.bash_process.run(command_list)
        _run_manager.on_text("\nAnswer: ", verbose=self.verbose)
        _run_manager.on_text(output, color="yellow", verbose=self.verbose)
        return {self.output_key: output}

Looking at the _call method, we see that the output of the LLM is stored in a variable t:

t = self.llm_chain.predict(
    question=inputs[self.input_key], callbacks=_run_manager.get_child()
)

This output is then parsed with an output parser, which extracts the commands and checks that the code is valid, i.e. executable:

parser = self.llm_chain.prompt.output_parser
command_list = parser.parse(t)

After parsing the output, the LLMBashChain runs the parsed commands using a BashProcess instance:

output = self.bash_process.run(command_list)

The LLMBashChain relies on the output parser to ensure that the generated commands are valid and can be executed by the BashProcess. If there is an issue with the output, the output parser raises an OutputParserException, which is caught, logged, and re-raised by the _call method.
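
To make the parsing step concrete, here is a simplified, illustrative parser (a sketch of the idea, not LangChain’s actual output parser) that pulls the commands out of a ```bash fenced block:

import re

def parse_bash_commands(llm_output: str) -> list:
    # Illustrative sketch: extract command lines from a ```bash fenced block
    match = re.search(r"```bash\n(.*?)```", llm_output, re.DOTALL)
    if match is None:
        raise ValueError("No ```bash block found in the LLM output")
    # one command per non-empty line inside the fence
    return [line for line in match.group(1).splitlines() if line.strip()]

print(parse_bash_commands("```bash\nfind . -type f | wc -l\n```"))
# ['find . -type f | wc -l']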

Now let’s see why this specialized prompt is important. We run the same request through a plain LLMChain that uses a bare pass-through prompt:

prompt = PromptTemplate(input_variables=["question"], template="{question}")

llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

count_tokens(llm_chain, "write a query to count number of files in all subdirectories of a directory")

Output:

> Entering new LLMChain chain...
Prompt after formatting:
write a query to count number of files in all subdirectories of a directory

> Finished chain.
spent a total of 74 tokens

"\n\nSELECT COUNT(*) FROM (SELECT * FROM sys.all_objects WHERE type = 'F') AS files INNER JOIN sys.all_objects AS directories ON files.parent_object_id = directories.object_id WHERE directories.type = 'D';"

Without the specialized prompt, the model interprets "query" as SQL and returns a SQL statement instead of a bash command, and there is no parser or BashProcess to execute anything. The carefully crafted prompt is what drives the chain to generate proper output.

Similar to the bash chain, LangChain provides various other utility chains, such as LLMCheckerChain, LLMMathChain, and several others.
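
For example, LLMMathChain can be set up in exactly the same way as the bash chain (a minimal sketch, reusing the llm object and the count_tokens helper from above):

from langchain.chains import LLMMathChain

# Same pattern as LLMBashChain: the LLM writes the expression, a utility evaluates it
math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
count_tokens(math_chain, "What is 13 raised to the power of 3.2?")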

Generic Chains

A generic chain acts as a fundamental framework or template that provides a set of predefined functionalities and capabilities. It defines the basic structure and behaviour of a chain but allows for customization and specialization by incorporating additional components or modifying existing ones.

Developers can use a generic chain as a starting point and then extend or modify it to meet the specific needs of their application or system.

To illustrate the concept, let’s dive into a code example. Imagine we have a text processing application, and we want to create a chain that cleans up extra spaces and new lines in a given text and then paraphrases it in a specific style. We start by defining a transform function that performs the cleaning operation using regular expressions.

import re
from langchain.chains import TransformChain, SequentialChain


def transform_func(inputs: dict) -> dict:
    text = inputs["text"]

    # replace multiple new lines and multiple spaces with a single one
    text = re.sub(r'(\r\n|\r|\n){2,}', r'\n', text)
    text = re.sub(r'[ \t]+', ' ', text)

    return {"output_text": text}


clean_extra_spaces_chain = TransformChain(
    input_variables=["text"],
    output_variables=["output_text"],
    transform=transform_func,
)
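
As a quick sanity check, the transform chain can be run on its own before we wire it into a larger chain (a small usage sketch):

messy_text = "Chains   allow us to combine\n\n\nmultiple    components together."
print(clean_extra_spaces_chain.run(messy_text))
# expected: "Chains allow us to combine\nmultiple components together."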

With the clean_extra_spaces_chain in place, we can now create another chain that paraphrases the cleaned text in a desired style. For this, we define a template that prompts the user to paraphrase the text and specify the desired style.

template = """Paraphrase this text:


{output_text}


In the style of a {style}.


Paraphrase: """
prompt = PromptTemplate(input_variables=["style", "output_text"], template=template)

Next, we integrate the language model into our chain using the LLMChain component, which takes the language model (LLM) and the prompt as inputs. We also specify the output key to retrieve the final paraphrased output.

style_paraphrase_chain = LLMChain(llm=llm, prompt=prompt, output_key='final_output')

Now, it’s time to connect the dots and create a sequential chain that encompasses both the cleaning and paraphrasing operations. We define the input and output variables of the chain, which are ‘text’ and ‘style’, respectively.

sequential_chain = SequentialChain(
    chains=[clean_extra_spaces_chain, style_paraphrase_chain],
    input_variables=['text', 'style'],
    output_variables=['final_output'],
)

Let’s test our sequential_chain with an input text and a desired style to demonstrate the power of chains. We use the count_tokens function to measure the number of tokens used in the process.

input_text = """
Chains allow us to combine multiple




components together to create a single, coherent application.


For example, we can create a chain that takes user input,       format it with a PromptTemplate,


and then passes the formatted response to an LLM. We can build more complex chains by combining     multiple chains together, or by




combining chains with other components.
"""
print(count_tokens(sequential_chain, {'text': input_text, 'style': 'poet'}))

In this example, the sequential_chain receives the input_text and ‘poet’ as the desired style. It performs the cleaning operation first, removing extra spaces and new lines. Then, it passes the cleaned text to the language model, which paraphrases the text in a poetic style. The count_tokens function calculates the total number of tokens used in the process, indicating the efficiency of our chain.

Output:

spent a total of 163 tokens


Chains bind us, let us join
Components, make one app shine.
For instance, take user input,
Format it with PromptTemplate,
Then pass it to an LLM.
More complex chains we can build,
By combining chains, or with other components filled.

With the flexibility and customization provided by generic chains, we can create complex and cohesive systems. They empower developers to combine various functionalities and adapt them to meet specific needs. So, embrace the power of chains and unlock the full potential of your applications.

Takeaways

Now you have a clear understanding of chains in LangChain and their main types. Chains let you combine multiple components together to create a single, coherent application.
