Exploring OpenAI’s ChatGPT Endpoint with LangChain

ChatGPT Endpoint and LangChain

The ChatGPT endpoint from OpenAI has transformed the way we engage with large language models. With the aid of LangChain, a library that provides full support for building LLM-powered applications, we can now take advantage of the endpoint's unique features. This blog article will go through the capabilities LangChain offers and how to use the ChatGPT endpoint in LangChain effectively.

Understanding the ChatGPT Endpoint

The ChatGPT endpoint differs from previous large language model endpoints by accepting multiple types of inputs, each with a specific role. These roles are system, user and assistant.

The system message sets up the initial behavior of the model, while the user messages represent user inputs and the assistant messages capture the responses generated by ChatGPT. Each interaction includes a history of previous messages, starting with the system message and alternating between user and assistant messages. Let’s see this with an example:
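Before LangChain enters the picture, it helps to see the raw structure the endpoint actually consumes: a plain list of role-tagged messages. Here is a minimal sketch in pure Python (no API call is made, and the message contents are just illustrative):

```python
# The raw chat format: a list of role-tagged messages, beginning with the
# system message and then alternating between user and assistant turns.
conversation = [
    {"role": "system", "content": "You should talk in Shakespearean English."},
    {"role": "user", "content": "Hi AI, how are you today?"},
    {"role": "assistant", "content": "Good morrow! I am well, thank thee."},
    {"role": "user", "content": "Can you explain how interstellar travel works?"},
]

# Each request re-sends the whole history, which is how the model
# keeps track of the conversation so far.
roles = [m["role"] for m in conversation]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

LangChain's SystemMessage, HumanMessage, and AIMessage objects, used below, are wrappers around exactly these three roles.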

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    SystemMessage,
    HumanMessage,
    AIMessage
)

chat = ChatOpenAI(
    openai_api_key="your-openai-api-key",
    temperature=0,
    model='gpt-3.5-turbo'
)

messages = [
    SystemMessage(content="You should talk in Shakespearean English."),
    HumanMessage(content="Hi AI, how are you today?"),
]

res = chat(messages)
res.content

Output:

'Good morrow, fair sir or madam. I am well, thank thee for asking. How doth thou fare on this fine day?'

We see that the endpoint returns the response we expected. Now that we have printed res.content, let's see what we get when we print res itself:

AIMessage(content='Good morrow, fair sir or madam. I am well, thank thee for asking. How doth thou fare on this fine day?', additional_kwargs={}, example=False)

We see that it returns an AIMessage object. If we append it to the messages list, we build up a conversation stream just like the one on ChatGPT: a system message that initially conditions the model to respond appropriately, followed by alternating human and AI messages.

Now let's append it to messages and continue the conversation with the ChatGPT endpoint:

# add latest AI response to messages
messages.append(res)

# now create a new user prompt
prompt = HumanMessage(
    content="can you explain how interstellar travel works?"
)
# add to messages
messages.append(prompt)

# send to chat-gpt
res = chat(messages)

print(res.content)

Output:

Verily, interstellar travel is a complex matter that requires great knowledge and skill. The most common method of interstellar travel is through the use of spacecraft that are equipped with advanced propulsion systems. These systems allow the spacecraft to travel at incredible speeds, often approaching the speed of light.

One of the most promising methods of interstellar travel is through the use of warp drive technology. This technology involves the creation of a warp bubble around the spacecraft, which allows it to travel faster than the speed of light by warping the fabric of space-time.

Another method of interstellar travel is through the use of wormholes. These are theoretical tunnels through space-time that could allow a spacecraft to travel vast distances in a short amount of time.

However, it is important to note that interstellar travel is still largely theoretical and there are many challenges that must be overcome before it can become a reality. These challenges include the development of advanced propulsion systems, the ability to sustain human life for long periods of time in space, and the ability to navigate and communicate over vast distances.

We see that the model follows the instructions well and continues to answer in a Shakespearean tone, as asked.

New Prompt Templates

In addition to its existing prompt templates, LangChain now includes three chat-specific ones: SystemMessagePromptTemplate, AIMessagePromptTemplate, and HumanMessagePromptTemplate.

These extend LangChain's prompt templates by letting us return the formatted prompt as a SystemMessage, AIMessage, or HumanMessage object, and they help us use the ChatGPT endpoint in LangChain effectively.

At present, the use cases for these extra prompt templates are fairly narrow. They can, however, be handy when we want to include special instructions or information in our messages. In this scenario, we aim to enforce a constraint that the AI's replies should not exceed 100 characters.

However, if we try to convey this instruction exclusively through the initial system message with the current OpenAI gpt-3.5-turbo model, we may run into difficulties.

Let’s delve into an example to understand it better:

chat = ChatOpenAI(
    openai_api_key="your-openai-api-key",
    temperature=0,
    model='gpt-3.5-turbo'
)

# setup first system message
messages = [
    SystemMessage(content=(
        'You are a helpful assistant. You keep responses to no more than '
        '100 characters long (including whitespace), and end your message '
        'with a ";)" emote.'
    )),
    HumanMessage(content="Hi AI, how are you? What is Machine Learning?")
]
res = chat(messages)

print(f"Length: {len(res.content)}\n{res.content}")

Output:

Length: 148
I'm doing well, thanks for asking! Machine Learning is a type of AI that allows computers to learn from data without being explicitly programmed. ;)

Since the chatbot doesn't follow the instruction when it is given only in the system message, let's add it to the human message using the human prompt template.

from langchain.prompts.chat import HumanMessagePromptTemplate, ChatPromptTemplate

human_template = HumanMessagePromptTemplate.from_template(
    '{input} Can you keep the response to no more than 100 characters '+
    '(including whitespace), and end your message with ;) emote '
)

# create the human message
chat_prompt = ChatPromptTemplate.from_messages([human_template])
# format with some input
chat_prompt_value = chat_prompt.format_prompt(
    input="Hi AI, how are you? What is Machine Learning?"
)
chat_prompt_value

Output:

ChatPromptValue(messages=[HumanMessage(content='Hi AI, how are you? What is Machine Learning? Can you keep the response to no more than 100 characters (including whitespace), and end your message with ;) emote ', additional_kwargs={}, example=False)])

It's worth noting that in order to use HumanMessagePromptTemplate as a standard prompt template via the .format_prompt method, we first had to feed it through a ChatPromptTemplate object. All of the new chat-based prompt templates work this way.

We get a ChatPromptValue object from this, which can be converted to a list of messages or rendered as a string. As a list of messages:

chat_prompt_value.to_messages()

Output:

[HumanMessage(content='Hi AI, how are you? What is Machine Learning? Can you keep the response to no more than 100 characters (including whitespace), and end your message with ;) emote ', additional_kwargs={}, example=False)]
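The prompt value can also be rendered as a single string with chat_prompt_value.to_string(), which prefixes each message with its role. To illustrate the idea without calling LangChain, here is a pure-Python sketch of that kind of rendering (the exact role prefixes LangChain uses are an assumption here):

```python
# Render (role, content) message pairs as one role-prefixed string,
# similar in spirit to ChatPromptValue.to_string().
def render_messages(messages):
    return "\n".join(f"{role}: {content}" for role, content in messages)

prompt_string = render_messages([
    ("Human", "Hi AI, how are you? What is Machine Learning?"),
])
print(prompt_string)  # Human: Hi AI, how are you? What is Machine Learning?
```

The string form is mainly useful for feeding chat-style prompts into completion-style models that expect a single block of text.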

Now let us try to regenerate the response using this new template

messages = [
    SystemMessage(content=(
        'You are a helpful assistant. You keep responses to no more than '
        '100 characters long (including whitespace), and end your message '
        'with a ";)" emote.'
    )),
    chat_prompt.format_prompt(
        input="Hi AI, how are you? What is Machine Learning?"
    ).to_messages()[0]
]

res = chat(messages)

print(f"Length: {len(res.content)}\n{res.content}")

Output:

Length: 87
I'm good! Machine Learning is a type of AI that allows computers to learn from data. ;)

Much better: the response now follows both instructions, staying under 100 characters and ending with the emote.
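Even with the instruction repeated in the human message, the model can occasionally overshoot the limit, so it can be worth validating responses in code before using them. A small helper sketch based on the two constraints from our prompt:

```python
# Check the two constraints from the prompt: at most `limit` characters
# (including whitespace) and a message that ends with the ";)" emote.
def obeys_constraints(text, limit=100, emote=";)"):
    return len(text) <= limit and text.rstrip().endswith(emote)

response = ("I'm good! Machine Learning is a type of AI that allows "
            "computers to learn from data. ;)")
print(obeys_constraints(response))   # True
print(obeys_constraints("x" * 150))  # False
```

If the check fails, a simple strategy is to re-send the request, optionally appending a reminder message about the constraint.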

Few-Shot Training Using the New Templates

Another way we can utilize the prompt templates is by constructing an initial set of messages that includes a few examples for the chatbot to learn from. This approach, known as few-shot training (or few-shot prompting), lets us steer the model effectively with only a handful of examples.

from langchain.prompts.chat import (
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate
)

system_template = SystemMessagePromptTemplate.from_template(
    'You are a helpful assistant. You keep responses to no more than '
    '{character_limit} characters long (including whitespace), and End  '
    'your messages with the emote "- {emote}'
)
human_template = HumanMessagePromptTemplate.from_template("{input}")
ai_template = AIMessagePromptTemplate.from_template("{response} - {emote}")

# create the list of messages
chat_prompt = ChatPromptTemplate.from_messages([
    system_template,
    human_template,
    ai_template
])
# format with required inputs
chat_prompt_value = chat_prompt.format_prompt(
    character_limit="50", emote=":D",
    input="Hi AI, how are you? What is Machine Learning?",
    response="Good! It's Machine learning algorithms"
)
chat_prompt_value

Output:

ChatPromptValue(messages=[SystemMessage(content='You are a helpful assistant. You keep responses to no more than 50 characters long (including whitespace), and End  your messages with the emote "- :D', additional_kwargs={}), HumanMessage(content='Hi AI, how are you? What is Machine Learning?', additional_kwargs={}, example=False), AIMessage(content="Good! It's Machine learning algorithms - :D", additional_kwargs={}, example=False)])

Let’s now try to generate responses based on it.

messages = chat_prompt_value.to_messages()

messages.append(
    HumanMessage(content="How does it do that?")
)

res = chat(messages)

print(f"Length: {len(res.content)}\n{res.content}")

Output:

Length: 43
By analyzing data and finding patterns - :D

That covers the new ChatGPT endpoint and the related chat prompt templates added to LangChain. Also, learn about some of the free ChatGPT alternatives here!
