This year I volunteered at NDC Sydney and decided to build a ChatBot to answer any queries attendees might have at the event.
Initially, the ChatBot could only respond to basic questions related to sessions and speakers, using an adaptive card to return the relevant information to attendees. I then integrated QnAMaker, so that end users could still receive a response to questions unrelated to the speaker sessions.
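Session results are returned as Adaptive Cards, which are plain JSON payloads. As a rough illustration of what such a card could look like, here is a minimal sketch; the field values and card layout are assumptions, not the bot's actual schema:

```python
import json

# Illustrative Adaptive Card payload for a session result.
# The layout and fields are made up for this sketch; the post does not
# show the bot's real card definition.
def make_session_card(title, speaker, room, time):
    return {
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "type": "AdaptiveCard",
        "version": "1.0",
        "body": [
            {"type": "TextBlock", "text": title, "weight": "Bolder", "size": "Medium"},
            {"type": "TextBlock", "text": f"Speaker: {speaker}"},
            {"type": "TextBlock", "text": f"{room}, {time}"},
        ],
    }

card = make_session_card("Building Bots", "Jane Doe", "Room 3", "10:20")
print(json.dumps(card, indent=2))
```

Because the card is just data, the same payload renders consistently across desktop and mobile web chat clients.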
Despite these improvements, I was still not 100% happy with the ChatBot's functionality, so I added an integration with Bing search. This enabled the ChatBot to search Bing for any questions it wasn't programmed to answer, ensuring the user would always receive an answer to their query.
With this addition, all of the session information was presented by the adaptive cards, FAQs were covered by the QnAMaker fallback, and the Bing search was there as a last resort.
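The routing described above, with adaptive cards for session questions, QnAMaker for FAQs, and Bing as a last resort, amounts to a confidence-based fallback chain. A minimal sketch of that control flow, with the LUIS, QnAMaker, and Bing calls stubbed out as hypothetical helpers (the thresholds and helper names are assumptions, not values from the post):

```python
# Sketch of the three-tier fallback: LUIS intents first, then QnAMaker,
# then Bing search. The callables passed in are stand-ins for the real
# service calls, which the post does not show.
LUIS_THRESHOLD = 0.5  # assumed cutoff; the actual value is not stated
QNA_THRESHOLD = 0.3   # assumed cutoff

def answer(query, luis, qna, bing):
    intent, score = luis(query)
    if score >= LUIS_THRESHOLD and intent in ("Sessions", "Speakers"):
        return f"[card] {intent}: {query}"  # render an adaptive card
    qna_answer, qna_score = qna(query)
    if qna_score >= QNA_THRESHOLD:
        return qna_answer                   # FAQ hit from QnAMaker
    return bing(query)                      # last resort: web search

# Toy stand-ins just to demonstrate the control flow.
fake_luis = lambda q: ("Sessions", 0.9) if "session" in q else ("None", 0.1)
fake_qna = lambda q: ("The wifi details are at the desk", 0.8) if "wifi" in q else ("", 0.0)
fake_bing = lambda q: f"[web] top result for: {q}"

print(answer("next session on bots?", fake_luis, fake_qna, fake_bing))
print(answer("what is the wifi?", fake_luis, fake_qna, fake_bing))
print(answer("weather tomorrow", fake_luis, fake_qna, fake_bing))
```

Each tier only runs if the previous one fails to clear its confidence threshold, which is why the user always gets some answer back.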
Before the conference started, I asked my colleagues to test the ChatBot to assess its functionality and uncover potential problems. Their test conversations provided data to train the LUIS model, which improved the answers.
From the feedback, I realised that there were still improvements to be made. The first was the responsiveness of the web interface: since I wasn't integrating with any messaging channel, it had to work well on both desktop and mobile.
Once I had solved the responsiveness problem, I encountered another almost immediately: the conference agenda changed. This was an issue because I had stored all the data in a local JSON file, which meant I had to check the updated agenda and alter the file by hand. Luckily the agenda hadn't changed much and was relatively easy to update.
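Keeping the agenda in a local JSON file keeps the bot simple, but every change means editing the file and redeploying. A minimal sketch of how such a file could be loaded and queried; the file layout and field names here are assumptions, not the bot's actual schema:

```python
import json

# Assumed agenda format: a JSON array of sessions, each with a title,
# speaker, room, and time. The real file's structure is not shown in the post.
AGENDA_JSON = """
[
  {"title": "Building Bots", "speaker": "Jane Doe", "room": "Room 3", "time": "10:20"},
  {"title": "Serverless 101", "speaker": "John Roe", "room": "Room 1", "time": "11:40"}
]
"""

def load_agenda(raw):
    """Parse the raw JSON agenda into a list of session dicts."""
    return json.loads(raw)

def sessions_in_room(agenda, room):
    """Return all sessions scheduled for the given room."""
    return [s for s in agenda if s["room"] == room]

agenda = load_agenda(AGENDA_JSON)
for s in sessions_in_room(agenda, "Room 3"):
    print(s["time"], s["title"])
```

The downside, as the room-change incident below shows, is that a stale file silently serves stale answers until someone edits it.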
At this point I was confident in the Bot: from the feedback I had received, the major problems were easy to fix, and the rest were simply a case of training the LUIS model. I started making improvements and, as a result, had cards that showed more data about the speakers.
When the conference started I was happy with the usage, until the agenda changed once more. An attendee asked about a session that was supposed to be taking place in a particular room but was no longer being held there.
I promptly checked that the bot's data matched the attendee's printed agenda, which it did. After checking the website, I realised that the information given by the bot was incorrect. At that stage there was no way to update the agenda without a laptop, so I tried to stop thinking about it and enjoy the conference. We can't win every time.
The next morning I had some free time, so I brought my laptop and quickly updated the agenda. This didn't go to plan: I realised that sessions with too much data weren't working well, and the results weren't displaying correctly.
I missed the first session of the day while reworking the result cards and adding another result view based on the speaker. With the new cards, the results were consistent and the user could access the extra information with the click of a button.
Overall the experience was positive and I would love to do it again! I will take the problems encountered this time into account to make better decisions next time.
Author: Luiz Bon, 3rd October 2017
Originally posted at: https://luizbon.github.io/blog/lessons-learned-from-a-conference-chatbot/