Build and Deploy Your Personal Terminal ChatGPT Bot in Python with OpenAI APIs

Conversational bots have become increasingly popular, providing an interactive way for users to engage with technology. OpenAI’s “GPT” (Generative Pre-trained Transformer) models enable developers to create sophisticated conversational agents.

In this tutorial, you will build and deploy your personal terminal ChatGPT bot using Python and OpenAI APIs on a ccloud³ VM running Ubuntu.

By the end of this tutorial, you will have a fully functional bot that can handle user queries directly from the terminal, offering an engaging and dynamic user experience. Whether you’re a seasoned developer or just starting, this tutorial will equip you with the knowledge to harness the power of ChatGPT in your projects and build your own custom AI bots.

Prerequisites

Before diving into the implementation, ensure you have the following:

  • A ccloud³ VM available with at least 4GB RAM and 2 CPUs.
  • Python 3.7 or higher installed on your Ubuntu ccloud³ VM.
  • Basic knowledge of Python programming.
  • An OpenAI account with access to the ChatGPT APIs.

Step 1 – Setting Up the Environment

In this step, you will set up the environment to build and deploy your ChatGPT terminal bot on a ccloud³ VM running Ubuntu.

Creating a ccloud³ VM

Log in to your ccenter account.

Now, let’s create a ccloud³ VM:

  • Navigate to the ccloud³ section.
  • Click on “Install new Instance”.
  • Choose the Ubuntu operating system (preferably the latest LTS version).
  • Select your preferred Pool and resources.
  • Add your SSH keys for secure access, or choose a password.
  • Name your new ccloud³ VM.
  • Click “Create VM”.

Connecting to Your ccloud³ VM

On your local machine, open a terminal. Use the command below, replacing <your_ccloud³-VM_ip> with your ccloud³ VM’s IP address:

ssh root@<your_ccloud³-VM_ip>

Setting Up Python Environment

Run the following commands to ensure your system is up-to-date:

sudo apt update
sudo apt upgrade

Install Python and pip using the following commands:

sudo apt install python3 python3-pip

Let’s install virtualenv to create isolated Python environments:

sudo pip3 install virtualenv

Navigate to your desired directory and create a project folder:

mkdir my_chatgpt_bot
cd my_chatgpt_bot

Create and activate a virtual environment:

virtualenv venv
source venv/bin/activate

Install Required Python Packages

Install the openai package and any other dependencies:

pip install openai

Configuring the OpenAI API Key

First, obtain your OpenAI API key from your OpenAI account dashboard.

Now, let’s set the Environment Variables:

Store your API key securely in an environment variable. Open your ~/.bashrc or ~/.bash_profile file and add:

export OPENAI_API_KEY='your-api-key-here'

Reload the environment variables:

source ~/.bashrc

Confirm that you have set your environment variable using the following command from the terminal:

echo $OPENAI_API_KEY

With the environment set up, you can start developing your ChatGPT bot. In the next step, we will write the bot’s code to handle user queries and interact with the OpenAI API.

Step 2 – Building the ChatGPT Bot

Now that we have set up the environment, let’s build the ChatGPT bot. You will use the legacy gpt-3.5-turbo model.

Here, you will use three key libraries to implement this: openai, textract, and glob.

OpenAI is a leading artificial intelligence research organization that has developed the ChatGPT API, which allows us to interact with the powerful ChatGPT model. With the OpenAI API, you can send prompts and receive responses from the ChatGPT model, enabling you to create conversational chatbots. You can learn more about OpenAI and its offerings here.

The second library, textract, provides text extraction from various file formats. It supports a wide range of formats, including but not limited to:

  • Text-based formats: TXT, CSV, JSON, XML, HTML, Markdown, and LaTeX.
  • Document formats: DOC, DOCX, XLS, XLSX, PPT, PPTX, ODT, and ODS.
  • eBook formats: EPUB, MOBI, AZW, and FB2.
  • PDFs (both searchable and scanned) and image formats with embedded text: JPG, PNG, BMP, GIF, and TIFF.
  • Programming source code files: Python, C, C++, Java, JavaScript, PHP, Ruby, and more.

The glob package is a built-in Python module that provides a convenient way to search for files and directories using pattern matching. It lets you find files that match a specified pattern, such as all files with a particular extension or a specific naming scheme. The terminal bot will use it to answer questions based on the data you place inside the data/ directory in your project’s folder.
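To make the pattern matching concrete, here is a small standalone sketch (the directory and file names are hypothetical, created on the fly just for the demonstration) that matches files by extension the same way the bot will:

```python
import glob
import os
import tempfile

# Create a throwaway data directory with a few sample files
data_dir = tempfile.mkdtemp()
for name in ("notes.txt", "report.csv", "readme.md"):
    with open(os.path.join(data_dir, name), "w", encoding="utf-8") as f:
        f.write("sample content\n")

# Match every file in the directory that has an extension
all_files = glob.glob(os.path.join(data_dir, "*.*"))

# Match only a specific extension, e.g. CSV files
csv_files = glob.glob(os.path.join(data_dir, "*.csv"))

print(len(all_files))  # 3
print([os.path.basename(p) for p in csv_files])  # ['report.csv']
```

The bot below uses the same `*.*` pattern against its data/ directory, then dispatches each file to textract or a plain-text reader based on its extension.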

Next, let’s install the required textract Python library, as it does not come pre-installed:

pip install textract

Now create a new file called mygptbot.py and paste in the code below:


import os
import glob
import openai
import textract

class Chatbot:
    def __init__(self):
        # Read the API key from the environment and hand it to the openai library
        self.openai_api_key = os.getenv("OPENAI_API_KEY")
        openai.api_key = self.openai_api_key
        self.chat_history = []

    def append_to_chat_history(self, message):
        self.chat_history.append(message)

    def read_personal_file(self, file_path):
        try:
            text = textract.process(file_path).decode("utf-8")
            return text
        except Exception as e:
            print(f"Error reading file {file_path}: {e}")
            return ""

    def collect_user_data(self):
        data_directory = "./data"
        data_files = glob.glob(os.path.join(data_directory, "*.*"))
        user_data = ""
        for file in data_files:
            file_extension = os.path.splitext(file)[1].lower()
            if file_extension in (".pdf", ".docx", ".xlsx", ".xls"):
                user_data += self.read_personal_file(file)
            else:
                with open(file, "r", encoding="utf-8") as f:
                    user_data += f.read() + "\n"
        return user_data

    def create_chat_response(self, message):
        self.append_to_chat_history(message)
        user_data = self.collect_user_data()
        messages = [
            {"role": "system", "content": "You are the most helpful assistant."},
            {"role": "user", "content": message},
        ]
        if user_data:
            messages.append({"role": "user", "content": user_data})

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            temperature=0.7,
            max_tokens=256,
            top_p=0.9,
            n=1,  # request a single completion; only one response is used below
            stop=None,
            frequency_penalty=0.9,
            presence_penalty=0.9
        )

        self.append_to_chat_history(response.choices[0].message.content.strip())
        return response.choices[0].message.content.strip()

    def start_chatting(self):
        while True:
            user_input = input("User: ")
            if user_input.lower() == "exit":
                print("Chatbot: Goodbye boss, have a wonderful day ahead!")
                break
            bot_response = self.create_chat_response(user_input)
            print("Chatbot:", bot_response)

chatbot = Chatbot()
chatbot.start_chatting()

Model Parameters

First, in a nutshell, the model’s parameters do the following:

  • Temperature: Controls the randomness of the responses. Higher values (e.g., 1.0) make the output more diverse, while lower values (e.g., 0.2) make it more focused and deterministic.
  • Max Tokens: Limits the length of the response generated by the model.
  • Top P: Specifies the cumulative probability threshold for choosing the next token. Higher values (e.g., 0.9) result in more focused responses.
  • N: Determines the number of different responses generated by the model, which helps explore different possibilities.
  • Stop: Allows us to specify a stopping phrase to indicate the end of the response.
  • Frequency Penalty: Controls the model’s likelihood of repeating the same response.
  • Presence Penalty: Controls how much the model considers using a token that hasn’t been mentioned in the conversation.
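As a concrete illustration, the sketch below assembles the same parameters the bot uses into a keyword-argument dictionary before the API call is made. The helper name build_request_params is our own, not part of the OpenAI library; it simply shows how lowering the temperature shifts the model toward more deterministic answers:

```python
def build_request_params(messages, deterministic=False):
    """Assemble keyword arguments for openai.ChatCompletion.create.

    With deterministic=True, temperature is lowered so the model gives
    more focused, repeatable answers.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": messages,
        "temperature": 0.2 if deterministic else 0.7,  # randomness of output
        "max_tokens": 256,          # cap on response length
        "top_p": 0.9,               # nucleus-sampling threshold
        "n": 1,                     # number of completions to generate
        "stop": None,               # optional stopping phrase
        "frequency_penalty": 0.9,   # discourage verbatim repetition
        "presence_penalty": 0.9,    # encourage new topics/tokens
    }

params = build_request_params(
    [{"role": "user", "content": "Hello"}], deterministic=True
)
print(params["temperature"])  # 0.2
```

A dictionary like this can be passed straight to the API with `openai.ChatCompletion.create(**params)`, which makes it easy to experiment with one knob at a time.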

You can find more about fine-tuning these parameters here.

Functions Overview

Second, the functions defined above do the following:

  • append_to_chat_history(message): Appends the user’s message to the chat history stored in the chat_history list.
  • read_personal_file(file_path): Uses textract to extract text from personal files.
  • collect_user_data(): Collects user data stored in the data/ directory, extracts text, and returns it as a string.
  • create_chat_response(message): Constructs chat responses using OpenAI’s ChatCompletion API.
  • start_chatting(): Initiates an interactive chat session with the user.

In the end, the while True loop continuously prompts the user for input. To exit the chatbot, type “exit”.
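The exit logic of that loop can be sketched in isolation. Here the OpenAI call is replaced by a stubbed echo_response function (our own stand-in, purely for illustration), and the inputs are supplied as a list instead of typed at the terminal:

```python
def echo_response(message):
    # Stand-in for create_chat_response; a real bot would call the API here.
    return f"You said: {message}"

def run_chat(inputs):
    """Drive the chat loop over an iterable of user inputs, collecting the
    transcript and stopping as soon as the user types 'exit'."""
    transcript = []
    for user_input in inputs:
        if user_input.lower() == "exit":
            transcript.append("Chatbot: Goodbye boss, have a wonderful day ahead!")
            break
        transcript.append("Chatbot: " + echo_response(user_input))
    return transcript

log = run_chat(["Hi there", "exit", "never reached"])
print(log[-1])  # the goodbye message
```

Note that anything after “exit” is never processed, which mirrors how the real start_chatting() loop breaks out and ends the session.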

You’ll need to open your ccloud³ VM’s console and execute the Python file. In your ccloud³ VM’s console, run the following command:

python3 mygptbot.py

Running the ChatGPT Bot

Your personal ChatGPT bot is now ready to chat. Start interacting with it by entering messages; the bot will respond accordingly. When you’re finished, type “exit” to end the conversation. This is how you can easily interact with the ChatGPT bot and use it for multitasking, asking questions, and much more.

Conclusion

You have learned how to create and deploy a powerful ChatGPT bot on your Ubuntu machine using Python. The provided code allows your bot to consider and utilize personal user data from various file formats, enabling a more personalized user experience. You can integrate it with other platforms or build a web-based chatbot. With the versatility of ChatGPT and the simplicity of Python, the possibilities are endless.

Feel free to customize further and enhance your bot’s capabilities.

Source: digitalocean.com
