In this guide, you’ll learn how to create a simple yet powerful chatbot using the LangChain framework and models available on Hugging Face. The chatbot leverages Hugging Face’s extensive library of pre-trained language models, allowing you to build a responsive AI assistant without needing to train a model from scratch.

  1. Installing the langchain_community package:
!pip install langchain_community

!pip install langchain_community
This line installs the langchain_community package using pip. The ! at the beginning indicates that this is a shell command, which is used in environments like Jupyter notebooks. LangChain Community provides community-contributed integrations and models.

  2. Importing necessary classes:
from langchain_community.llms import HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

from langchain_community.llms import HuggingFaceHub
This imports the HuggingFaceHub class from the langchain_community package installed earlier. HuggingFaceHub is used to connect to and interact with models hosted on Hugging Face’s platform.

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
These lines import two additional classes:

  • PromptTemplate: A class to define how the prompt (input text) will be structured for the language model.
  • LLMChain: A class to link the prompt with the language model and run the entire process, managing the input/output.
  3. Setting up the model repository ID and API token:
repo_id = "tiiuae/falcon-7b-instruct"
huggingfacehub_api_token = "your-access-token"

repo_id = "tiiuae/falcon-7b-instruct"
This defines the repo_id, which refers to the specific model you want to use from Hugging Face. In this case, it’s tiiuae/falcon-7b-instruct, which is a large language model designed for instruction-based tasks.

huggingfacehub_api_token = "your-access-token"
This stores the API token required to authenticate your requests to Hugging Face’s API. You’ll need this token to access the model. (Note: avoid sharing your token publicly for security reasons.) To get your access token, sign up on Hugging Face, then open Settings → Access Tokens and create a new token.
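Rather than hardcoding the token, a common pattern is to read it from an environment variable so it never ends up in source control; a minimal sketch using only the standard library (HUGGINGFACEHUB_API_TOKEN is the variable name LangChain’s Hugging Face integrations conventionally check):

```python
import os

# Read the token from an environment variable instead of hardcoding it;
# fall back to a placeholder string if the variable is not set.
huggingfacehub_api_token = os.environ.get(
    "HUGGINGFACEHUB_API_TOKEN", "your-access-token"
)
```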

  4. Initializing the Hugging Face model with parameters:
llm = HuggingFaceHub(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=repo_id,
    model_kwargs={
        "temperature": 0.7,
        "max_new_tokens": 500
    }
)

llm = HuggingFaceHub(huggingfacehub_api_token=huggingfacehub_api_token, repo_id=repo_id, model_kwargs={"temperature":0.7, "max_new_tokens":500})

  • This creates an instance of the HuggingFaceHub class, which will allow interaction with the Falcon 7B model hosted on Hugging Face.
  • The huggingfacehub_api_token argument passes your authentication token.
  • The repo_id argument specifies the model you want to use.
  • model_kwargs passes additional configuration for the model:
    • temperature: Controls the randomness of the model’s output (0.7 indicates moderate creativity).
    • max_new_tokens: Caps the number of tokens (roughly word fragments, not whole words or characters) the model can generate in its response at 500.
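As a rough illustration of what temperature does (this is plain Python, not LangChain’s or the model’s internals): during sampling, the logits are divided by the temperature before a softmax, so low values sharpen the distribution toward the top token and high values flatten it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.2)     # low: near-deterministic
moderate = softmax_with_temperature(logits, 0.7)  # the value used above
```

At temperature 0.2 the top token dominates almost completely; at 0.7 the distribution stays flatter, which reads as more “creative” output.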
  5. Defining the prompt template:
template = """
You are a helpful AI assistant and provide the answer for the question asked politely.

{question}
"""

template = """\nYou are a helpful AI assistant and provide the answer for the question asked politely.\n\n{question}\n"""
This defines the prompt template that the model will use. It includes a simple instruction telling the AI to act as a helpful assistant, followed by a placeholder {question} where the actual question will be inserted.

  6. Creating the prompt template object:
prompt = PromptTemplate(template=template, input_variables=["question"])

prompt = PromptTemplate(template=template, input_variables=["question"])
This creates an instance of the PromptTemplate class. It takes the template defined earlier and specifies that the input variable question will be dynamically provided when the prompt is run.
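For this simple single-variable template, the substitution behaves much like Python’s built-in str.format; a plain-Python stand-in (no LangChain required) that shows what happens when the question is filled in:

```python
# The {question} placeholder fills in exactly as str.format would --
# a stand-in sketch for what PromptTemplate does with this template.
template = """
You are a helpful AI assistant and provide the answer for the question asked politely.

{question}
"""

filled = template.format(question="What is LangChain?")
print(filled)
```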

  7. Creating the LLMChain to connect the model with the prompt:
chain = LLMChain(prompt=prompt, llm=llm)

chain = LLMChain(prompt=prompt, llm=llm)
This creates a chain by linking the prompt and the language model (Falcon 7B in this case). The LLMChain class will use the llm (language model) and the prompt to generate responses based on user input.

  8. Running the model with a question input:
out = chain.run("Tell me a roadmap for AI engineer as a beginner")

out = chain.run("Tell me a roadmap for AI engineer as a beginner")

  • This line runs the chain with a specific input question: “Tell me a roadmap for AI engineer as a beginner.”
  • The run() method passes the question to the model via the prompt and returns the generated response.
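Conceptually, run() formats the question into the prompt and hands the result to the model; a toy sketch of that flow, in which the hypothetical fake_llm stands in for the real HuggingFaceHub call:

```python
template = """
You are a helpful AI assistant and provide the answer for the question asked politely.

{question}
"""

def fake_llm(prompt_text):
    # Hypothetical stand-in for the hosted Falcon 7B model.
    return "Model response to: " + prompt_text.strip()

def run_chain(question):
    # What LLMChain.run does in essence: fill the template, call the LLM.
    return fake_llm(template.format(question=question))

out = run_chain("Tell me a roadmap for AI engineer as a beginner")
print(out)
```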
  9. Printing the model’s output:
print(out)
Tell me a roadmap for AI engineer as a beginner
As a beginner in the field of AI engineering, you can start with the following steps:

1. Learn the basics of programming languages such as Python, R, or Java.
2. Familiarize yourself with machine learning concepts such as supervised and unsupervised learning.
3. Gain knowledge in deep learning techniques, which involves the use of algorithms to analyze large amounts of data.
4. Learn about the different types of models such as neural networks, decision trees, and clustering techniques.
5. Focus on developing your problem-solving and analytical skills.
6. Start practicing by working on real-world problems and build your portfolio.
7. Stay up-to-date with the latest advancements in the field by attending conferences, seminars, and workshops.
8. Network with professionals in the AI industry to gain insights and advice.

Remember, becoming an expert in AI engineering takes time and patience. Be patient and persistent in your learning journey!

By Aman Singh

He is a computer science engineer specialized in artificial intelligence and machine learning. His passion for technology and a relentless drive for continuous learning make him a forward-thinking innovator. With a deep interest in leveraging AI to solve real-world challenges, he is always exploring new advancements and sharing insights to help others stay ahead in the ever-evolving tech landscape.
