Now I will start designing the engine of the virtual assistant. This engine will be the central nervous system of our tool.
The first approach that I will follow here is very simple. This approach is summarised in the following diagram.
In this design, the speech-to-text engine converts speech to text. This text is then compared against specific keywords. If a keyword is detected, the dispatcher calls the corresponding specialised module, which handles the conversation. For example, if the phrase ‘How are you?’ is detected, the dispatcher calls the small talk module to handle the answer. This approach has several limitations:
- It functions more as a question answering system than as a conversation system.
- It is a linear approach: once the system takes a branch of the dispatcher, it cannot go back.
- The keyword matching system is very primitive and inflexible.
Even with these limitations, it is nevertheless a good starting point for a virtual assistant tutorial. We will also improve the assistant with new functionality and upgrades in upcoming posts.
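The keyword-dispatch design described above can be sketched in a few lines of plain Python, using a dictionary that maps recognised phrases to handler functions. This is a simplified, self-contained sketch, not the tutorial's actual code: the handlers here just return strings instead of calling a text-to-speech engine, and the phrases are illustrative.

```python
# Minimal sketch of the keyword -> module dispatch described above.
# Handlers return strings here instead of speaking them aloud.

def who_am_i():
    return 'I am your personal assistant. How can I help you?'

def how_are_you():
    return 'I am fine, thank you!'

def i_dont_understand():
    return "I don't understand what that means!"

# Each recognised phrase maps to the handler that answers it.
KEYWORD_HANDLERS = {
    'who are you': who_am_i,
    'how are you': how_are_you,
}

def dispatch(speech_text):
    # Normalise lightly, then route to the matching handler (if any).
    handler = KEYWORD_HANDLERS.get(speech_text.lower().strip())
    return handler() if handler else i_dont_understand()

print(dispatch('who are you'))
```

A dictionary lookup like this scales a little better than a chain of if/else branches, since adding a new phrase is just a new entry in the table.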
The first module we will create is the small talk module. Create a folder named Modules inside the Brain folder, then create a file named small_talk.py inside the Modules folder.
Copy the following code:
<pre>from Affectors import tts

def who_am_i():
    message = 'I am Ayda, your personal assistant. How can I help you?'
    tts.say(message)

def i_dont_understand():
    tts.say("I don't understand what that means!")</pre>
As you can see, each of these functions simply speaks a pre-formatted answer when it is called. When the user asks the question ‘Who are you?’, the dispatcher will call the who_am_i function.
The code for the dispatcher can also be very simple.
<pre>from Modules import small_talk

def evaluate(speech_text):
    if speech_text == 'who are you':
        small_talk.who_am_i()
    else:
        small_talk.i_dont_understand()</pre>
You can see that the evaluator simply evaluates the speech text and calls the corresponding function. This function has many limitations, but we will improve it later in this post.
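One obvious weakness of the exact string comparison is that ‘Who are you?’ (with capitals and punctuation) would not match. A small improvement, sketched below as an assumption rather than the tutorial's final code, is to normalise the text and look for a known phrase anywhere inside it:

```python
import string

def normalise(text):
    # Lowercase and strip punctuation so 'Who are you?' becomes 'who are you'.
    return text.lower().translate(str.maketrans('', '', string.punctuation)).strip()

def matches(speech_text, phrase):
    # True if the known phrase appears anywhere in the normalised text.
    return phrase in normalise(speech_text)

print(matches('Who are you?', 'who are you'))
print(matches('Hey, WHO ARE YOU!!', 'who are you'))
```

Substring matching is still crude, but it already tolerates casing, punctuation, and surrounding words.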
To finalise our code, we need to link the dispatcher to the main module and connect it to the speech recognition engine. This is also done very simply in main.py:
<pre>from Sensors import stt
from Brain import dispatcher

def main():
    speech_text = stt.listen()
    print(speech_text)
    dispatcher.evaluate(speech_text)

main()</pre>
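While developing, it can be handy to exercise the dispatcher without a working microphone. One option, shown here as a debugging convenience and not part of the tutorial's code, is to swap the speech engine for keyboard input and have the dispatcher return its reply as a string:

```python
def listen():
    # Hypothetical stand-in for stt.listen(): read from the keyboard
    # instead of the microphone while debugging.
    return input('You: ')

def evaluate(speech_text):
    # Same shape as the dispatcher above, but returning the reply as a
    # string so it can be checked without a text-to-speech engine.
    if speech_text == 'who are you':
        return 'I am Ayda, your personal assistant. How can I help you?'
    return "I don't understand what that means!"

print(evaluate('who are you'))
```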