Now I will start designing the engine of the virtual assistant. This engine will be the central nervous system of our tool.
The first approach that I will follow here is very simple. This approach is summarised in the following diagram.
In this design, the speech-to-text engine converts speech to text. This text is then compared against specific keywords. If a keyword is detected, the dispatcher calls the corresponding specialised module, which handles the conversation. For example, if the keyword ‘How are you?’ is detected, the dispatcher calls the small talk module to handle the answer. This approach has several limitations:
- It functions more as a question-answering system than as a conversation system.
- It is a linear approach: once the system takes a branch of the dispatcher, it cannot go back.
- The keyword matching system is very primitive and inflexible.
Even with these limitations, it is nevertheless a good starting point for a virtual assistant tutorial. We will also improve the assistant with new functionalities and upgrades in upcoming posts.
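The keyword-dispatch design described above can be sketched with a plain dictionary that maps keywords to handler functions. The handler names and responses here are illustrative placeholders, not the actual modules we will build below:

```python
# Minimal sketch of the keyword-dispatch design: detected text is matched
# against known keywords, and each keyword routes to a specialised handler.

def small_talk_handler():
    return 'I am fine, thank you!'

def unknown_handler():
    return "I don't understand what that means!"

# The dispatcher maps detected keywords to specialised modules.
KEYWORD_HANDLERS = {
    'how are you': small_talk_handler,
}

def dispatch(speech_text):
    # Look up the recognised text; fall back to a default handler.
    handler = KEYWORD_HANDLERS.get(speech_text, unknown_handler)
    return handler()

print(dispatch('how are you'))            # → I am fine, thank you!
print(dispatch('open the pod bay doors')) # → I don't understand what that means!
```

The dictionary lookup makes the linear nature of the design obvious: one utterance, one branch, one answer, with no memory of the conversation.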
The first module that we will create is the small talk module. Create a folder named Modules inside the Brain folder, then create a file small_talk.py inside the Modules folder.
Copy the following code:
<pre>
from Affectors import tts

def who_am_i():
    message = 'I am Ayda, your personal assistant. How can I help you?'
    tts.say(message)

def i_dont_understand():
    tts.say("I don't understand what that means!")
</pre>
As you can see, all these functions do is return a pre-formatted answer for a given function call. When the user asks the question ‘Who are you?’, the dispatcher will call the who_am_i function.
The code for the dispatcher can also be very simple.
<pre>
from Modules import small_talk

def evaluate(speech_text):
    if speech_text == 'who are you':
        small_talk.who_am_i()
    else:
        small_talk.i_dont_understand()
</pre>
You can see that the evaluator simply compares the speech text against a known phrase and calls the corresponding function. This approach has a lot of limitations, but we will improve it later in this post.
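One obvious weakness of the exact string comparison is that ‘Who are you?’ from the recogniser will never match 'who are you'. A small normalisation step, sketched below as one possible improvement (it is not part of the dispatcher code above), makes the matching less brittle:

```python
import string

def normalise(speech_text):
    # Lower-case and strip punctuation so 'Who are you?' matches 'who are you'.
    text = speech_text.lower().strip()
    return text.translate(str.maketrans('', '', string.punctuation)).strip()

print(normalise('Who are you?'))  # → who are you
```

Calling normalise(speech_text) before the comparison in evaluate would let the same handler fire regardless of capitalisation or trailing punctuation.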
To finalise our code, we need to link the dispatcher to the main function and connect it to the speech recognition engine. This is also done very simply in main.py:
<pre>
from Sensors import stt
from Brain import dispatcher

def main():
    speech_text = stt.listen()
    print(speech_text)
    dispatcher.evaluate(speech_text)

main()
</pre>
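If you want to try the whole pipeline without a working microphone or speakers, one rough approach (purely a testing sketch, not part of the tutorial code) is to replace the Sensors and Affectors modules with fake objects that return canned text and print instead of speaking:

```python
# Stand-alone sketch of the main loop with stubbed sensors/affectors.
# FakeSTT and FakeTTS are illustrative stand-ins for Sensors.stt and
# Affectors.tts, useful for testing the dispatcher logic in isolation.

class FakeSTT:
    def listen(self):
        return 'who are you'  # pretend the user said this

class FakeTTS:
    def __init__(self):
        self.spoken = []
    def say(self, message):
        self.spoken.append(message)  # record instead of speaking aloud
        print(message)

stt = FakeSTT()
tts = FakeTTS()

def who_am_i():
    tts.say('I am Ayda, your personal assistant. How can I help you?')

def i_dont_understand():
    tts.say("I don't understand what that means!")

def evaluate(speech_text):
    if speech_text == 'who are you':
        who_am_i()
    else:
        i_dont_understand()

def main():
    speech_text = stt.listen()
    print(speech_text)
    evaluate(speech_text)

main()
```

Swapping the fakes back for the real stt and tts modules restores the full voice pipeline without changing the dispatcher.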