(13.12.2021 - Christmas Break - 16.01.2022)

Hugging Face

Hugging Face is a large open-source community built around a hub of pre-trained deep learning models, mainly aimed at NLP.

Transformers provides thousands of pre-trained models for tasks across different modalities such as text, vision, and audio. It gave us an API to quickly download and use TaPas, which was trained on the large Sequential Question Answering (SQA) dataset.
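As a rough sketch of how this looks in code, the table-question-answering pipeline can pull a SQA-fine-tuned TaPas checkpoint straight from the Hub (the checkpoint name below is an assumption for illustration, not necessarily the exact one we used):

```python
# Minimal sketch: load a SQA-fine-tuned TaPas model via the Transformers pipeline.
# The checkpoint "google/tapas-base-finetuned-sqa" is an illustrative choice.
import pandas as pd
from transformers import pipeline

table_qa = pipeline("table-question-answering",
                    model="google/tapas-base-finetuned-sqa")

# TaPas expects the table as a DataFrame of strings.
table = pd.DataFrame({
    "Patient": ["Alice", "Bob"],
    "Heart rate": ["72", "88"],
})

result = table_qa(table=table, query="What is Bob's heart rate?")
print(result["answer"])
```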

We then fine-tuned it on our own dataset, in this case medical data that we generated locally. We pushed the fine-tuned model to the Hugging Face Hub, allowing us to share it within the team and use it in separate programs.
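A minimal sketch of the sharing step, assuming a Transformers model/tokenizer pair and a hypothetical repo name (pushing requires being logged in, e.g. via `huggingface-cli login`):

```python
# Sketch: share a fine-tuned TaPas model on the Hugging Face Hub.
# The repo name "our-team/tapas-medical-sqa" is hypothetical.
from transformers import TapasForQuestionAnswering, TapasTokenizer

model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-sqa")
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-sqa")

# ... fine-tuning on the locally generated medical tables happens here ...

model.push_to_hub("our-team/tapas-medical-sqa")
tokenizer.push_to_hub("our-team/tapas-medical-sqa")

# Anyone on the team can then load the shared checkpoint by name:
shared = TapasForQuestionAnswering.from_pretrained("our-team/tapas-medical-sqa")
```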

In the process of saving the model to the hub, the following files are uploaded and added to the repo. These files are essential for loading the model at a later stage.

[Image: model and tokenizer files in the Hugging Face repo]

Bot Pipeline

Our backend TaPas model is now ready, so we moved on to the next phase: building the chatbot. We used Rasa Open Source for the bot because it could be implemented easily. We started by writing the training data, which consists of:

  1. Natural Language Understanding (NLU)
  2. Rules
  3. Stories
  4. Domain

Next, a custom bot action is added to integrate TaPas into the backend. Whenever the user states their desired query file and questions, the information is sent to the backend through the Rasa Action Server and our custom action method. Once processing is done, the returned answers are packaged and dispatched through the REST API, then rendered on the bot’s user interface (UI). To support the Rasa Action Server, its endpoint URL is given in the “endpoints.yml” file to establish the connection.
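A hedged sketch of what such a custom action looks like with the Rasa SDK; the slot names ("table_file", "question"), the action name, and the helper answer_with_tapas() are illustrative assumptions, not our exact code:

```python
# Illustrative Rasa custom action that forwards the user's table and question
# to the TaPas backend. Slot names and the helper function are assumptions.
from typing import Any, Dict, List, Text

import pandas as pd
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


def answer_with_tapas(table: pd.DataFrame, question: Text) -> Text:
    """Placeholder for the call into the fine-tuned TaPas model."""
    raise NotImplementedError


class ActionQueryTable(Action):
    def name(self) -> Text:
        return "action_query_table"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        table_file = tracker.get_slot("table_file")
        question = tracker.get_slot("question")

        table = pd.read_csv(table_file).astype(str)
        answer = answer_with_tapas(table, question)

        # The dispatcher sends the answer back through the connected channel
        # (REST API / terminal), where it is rendered to the user.
        dispatcher.utter_message(text=f"Answer: {answer}")
        return []
```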

Once the training data and endpoint configuration are done, Rasa’s built-in natural language processing (NLP) training is run to generate our defined models. We iterated repeatedly to fix dialogue logic issues and bugs, and we now have the complete Bot-TaPas pipeline. However, the bot is currently available only in the terminal, so our next step is to complete the full stack by implementing a UI for simplicity and a better user experience.
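Since a future UI would talk to the bot over Rasa’s REST channel, a small script along these lines can exercise the same endpoint the front-end will use (this assumes the bot is served locally with `rasa run` and the REST channel enabled in credentials.yml):

```python
# Sketch: query the bot over Rasa's REST channel, i.e. the endpoint a web UI
# would call. Assumes a locally running Rasa server on the default port.
import requests

REST_ENDPOINT = "http://localhost:5005/webhooks/rest/webhook"

payload = {"sender": "test_user", "message": "What is Bob's heart rate?"}
response = requests.post(REST_ENDPOINT, json=payload)

# The bot replies with a list of messages, each containing the response text.
for message in response.json():
    print(message.get("text"))
```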

A current screenshot of the bot running in the terminal is shown below:

[Image: the bot running in the terminal]