
Maxwell - A DialoGPT variant for discord.py

Maxwell is my experiment with Microsoft's DialoGPT model and OpenAI's GPT-2 language model, for use on any Discord server. I plan to fine-tune it much more in the future, but for now DialoGPT's pretrained model performs admirably. As of 3-2-2020 I am running this model on my old bot, Maple, which you can invite here: Lithium stats on Discord Bot List

Setup

The recommended Python version is 3.6.8.

Requirements

You can install all requirements with

pip install -r requirements.txt

  • python 3.6.8
  • numpy 1.16.4
  • torch 1.2.0
  • transformers 2.3.0
  • python-telegram-bot 12.3.0 (only if you are using the Telegram bot)
  • discord.py 1.2.5
  • googletrans 2.4.0 (for automatic translation with non-English users)
  • textblob 0.15.3 (used in some text processing cases)
  • matplotlib 2.0.2 (used for modeling statistics)

In discord_bot.py, at line 117, replace "TOKEN_GOES_HERE" with your Discord bot's API token. The model automatically downloads and sets itself up on the first run of the program; after that you should be good to go!
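
For orientation, here is a minimal sketch of the kind of discord.py (1.x) entry point this step assumes. The handler bodies are illustrative placeholders, not the actual contents of discord_bot.py:

import discord

TOKEN = "TOKEN_GOES_HERE"  # replace with your Discord bot's API token

# Note: discord.py >= 1.5 also requires intents, e.g.
# discord.Client(intents=discord.Intents.default()); 1.2.5 does not.
client = discord.Client()

@client.event
async def on_ready():
    # Fires once the bot has connected to Discord.
    print(f"Logged in as {client.user}")

@client.event
async def on_message(message):
    # Ignore our own messages to avoid reply loops.
    if message.author == client.user:
        return
    # A real bot would run message.content through the DialoGPT decoder here.
    await message.channel.send("(model response goes here)")

client.run(TOKEN)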

gpt2bot - Original Readme

User >>> Can we achieve singularity?
Bot >>> What does this mean?
User >>> Can computers become smarter than humans?
Bot >>> Is there any evidence that this is possible?
User >>> It must be doable
Bot >>> But... how?
User >>> I want to find out, that's why I created you
Bot >>> You created me?
User >>> You're my bot
Bot >>> You monster

What gpt2bot implements

The bot is built around DialoGPT - a large-scale pretrained dialogue response generation model trained by Microsoft on 147M multi-turn dialogues from Reddit discussion threads. Human evaluation results indicate that the quality of its responses is comparable to human response quality under a single-turn conversation Turing test.

Since the model can generate toxic/inappropriate responses even with a properly filtered Reddit dataset, the Microsoft team was unable to provide the decoding script. This repository implements a decoding script inspired by run_generation.py, released earlier by Hugging Face. Moreover, it implements a Telegram bot that can be deployed locally, remotely, or even on Colab, and just makes testing fun.
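
As a rough illustration of what that decoding step looks like, here is a sketch using the current Hugging Face transformers API and the microsoft/DialoGPT-medium checkpoint. The pinned transformers 2.3.0 in requirements.txt predates these class names, so treat this as a sketch rather than the repository's actual decoder:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's message, terminated by the end-of-sequence token.
input_ids = tokenizer.encode("Can we achieve singularity?" + tokenizer.eos_token,
                             return_tensors="pt")

# Sample a reply with top-k/top-p filtering, as in run_generation.py.
output_ids = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the reply), not the prompt.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))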

How to use?

1. Create a Telegram bot (register one with @BotFather to obtain an API token)

2. Deploy the bot

Google Colab

A Colab interactive notebook

A good thing about Google Colab is the free GPU, so why not run the Telegram bot there for blazingly fast chat? Run the notebook during the day, and do not forget to stop it at night.

Docker

  • Clone the repository
  • Set your parameters, such as the API token, in dialog.cfg (an illustrative example follows these steps)
  • To avoid re-downloading model files at each re-deployment, download the model files beforehand with
cd gpt2bot/gpt2bot
python model.py
  • Finally, deploy the container from the root folder
docker build -t gpt2bot . && docker run gpt2bot
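
For reference, dialog.cfg is a plain config file; an entry might look roughly like the sketch below. These section and key names are hypothetical placeholders to show the idea, not the actual schema - check the dialog.cfg shipped in the repository for the real options:

# dialog.cfg - hypothetical keys, for illustration only
[model]
model_name = microsoft/DialoGPT-medium
seed = 42

[decoder]
top_k = 50
top_p = 0.95

[telegram]
token = TELEGRAM_API_TOKEN_GOES_HERE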

Manually

  • Clone the repository
  • Set your parameters, such as the API token, in dialog.cfg (see the illustrative example in the Docker section above)
  • Install packages listed in requirements.txt
  • Run the script
cd gpt2bot/gpt2bot
python telegram_bot.py
  • To test things out in the console, run
python interactive_bot.py

3. Start chatting!

Just start texting. Append "@gif" to your message for the bot to reply with a GIF instead of text. To reset the conversation, type "Bye".

Updates

18/01/2020

  • The EOS token is now checked during generation -> gpt2bot is fast enough to run on CPU.
  • Added support for maximum mutual information (MMI) scoring -> higher-quality responses, but slower (see the sketch below).
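
To make the MMI idea concrete, here is a sketch of the reranking step: sample several candidate replies with the forward model, then score each by how well a reverse (reply-to-context) model predicts the original context, keeping the best-scoring one. The reverse-model path below is a placeholder, not a published checkpoint name - DialoGPT's reverse MMI weights are distributed with Microsoft's DialoGPT release:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
forward = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
reverse = AutoModelForCausalLM.from_pretrained("path/to/reverse-mmi-checkpoint")  # placeholder

context_ids = tokenizer.encode(
    "Can computers become smarter than humans?" + tokenizer.eos_token,
    return_tensors="pt")

# Sample several candidate replies from the forward model.
candidates = forward.generate(
    context_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)

def reverse_score(reply_ids):
    # Feed [reply, context] to the reverse model and compute the LM loss
    # over the context tokens only (label -100 positions are ignored).
    ids = torch.cat([reply_ids, context_ids[0]]).unsqueeze(0)
    labels = ids.clone()
    labels[:, : reply_ids.shape[0]] = -100
    with torch.no_grad():
        return reverse(ids, labels=labels).loss.item()

# Keep the reply that best "explains" the context, i.e. has the lowest reverse loss.
best = min(candidates, key=lambda c: reverse_score(c[context_ids.shape[-1]:]))
print(tokenizer.decode(best[context_ids.shape[-1]:], skip_special_tokens=True))

Scoring only the context tokens under the reverse model is what distinguishes MMI reranking from simply picking the most likely reply, and each candidate costs an extra forward pass, which is why it is slower.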

References

You can wait for a full DialoGPT release and then replace the decoder.