YouTube Video For This Post

System Details: Windows 10

1.
Install Rasa Core and spaCy as shown in this link.

2.
(py36) ~\Desktop>python -m spacy download en

What does "python -m" do? The first line of the "Rationale" section of PEP 338 says:

"Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose."

So you can specify any module on Python's search path this way, not just files in the current directory. You're correct that "python mymod1.py mymod2.py args" has exactly the same effect. The first line of the "Scope of this proposal" section states:

"In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line."

With -m more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about.

Ref: The 'm' switch
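As a quick illustration of "python -m": any importable module with a main guard can be run this way. A minimal sketch (the module name mymod1 is just an example, not part of this project):

# mymod1.py -- save it anywhere on Python's module search path, then run it
# either as "python mymod1.py args" or as "python -m mymod1 args".
import sys


def main():
    # Echo whatever arguments were passed on the command line.
    print("Arguments:", sys.argv[1:])


if __name__ == '__main__':
    main()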
3.
Install Node.js and NPM
4.
Run this command in CMD: "npm i -g rasa-nlu-trainer"
You can use it with all npm-install flags. For example, the commands below install Angular and live-server using "npm i":

npm i angular2@2.0.0-alpha.45 --save --save-exact
npm i live-server --save-dev

…

npm install (in a package directory, no arguments): installs the dependencies into the local node_modules folder.

In global mode (i.e., with -g or --global appended to the command), it installs the current package context (i.e., the current working directory) as a global package.
5.
"cd ~\Desktop\Hello World chat-bot using Rasa"
mkdir data
cd data
echo “data” > data.json
6.
In "data.json", we enter common phrases and the intents we want our chatbot to know.
{
  "rasa_nlu_data": {
    "common_examples": [
      {
        "text": "Hello",
        "intent": "greet",
        "entities": []
      },
      {
        "text": "goodbye",
        "intent": "goodbye",
        "entities": []
      },
      {
        "text": "What's the weather in Berlin at the moment?",
        "intent": "inform",
        "entities": [
          {
            "start": 22,
            "end": 28,
            "value": "Berlin",
            "entity": "location"
          }
        ]
      }
    ]
  }
}
To extract the "start" and "end" indices for a given "text" and "value", use the following Python script:

# Keep asking for a text, then for entities to locate inside that text.
# Press Enter on an empty line to move on (or to quit).
while True:
    text_str = input("Enter the text: ")
    if not text_str:
        break
    while True:
        entity_str = input("Enter the entity you want to locate: ")
        if not entity_str:
            break
        start = text_str.find(entity_str)
        end = start + len(entity_str)
        print('{"start": ' + str(start) + ', "end": ' + str(end) +
              ', "value": "' + entity_str + '", "entity": ""}')

For example, entering the text "What's the weather in Berlin at the moment?" and the entity "Berlin" prints {"start": 22, "end": 28, "value": "Berlin", "entity": ""}.
We can also generate the training examples above using the rasa-nlu-trainer UI. We go to the data folder "~\Desktop\Hello World chatbot\data" and enter this:

(base) ~\Desktop\Hello World chatbot>rasa-nlu-trainer
...

For the demo, you can copy the data file provided on the author's GitHub page.
1.
(base) ~\Desktop\Hello World chatbot>echo 'config' > config_spacy.json
2.
Three main parameters go in the configuration file:
{
  "pipeline": "spacy_sklearn",
  "path": "./models/nlu",
  "data": "./data/data.json"
}

Pipeline: Which NLU processing pipeline to use. "spacy_sklearn" uses spaCy for tokenization and featurization and scikit-learn for intent classification.
Path: Where we are going to store our model after it is trained.
Data: This is where our training data is.
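A quick way to confirm the config parses and points at the right places before training is to read it back yourself. A minimal sketch using only the standard library (nothing here is Rasa-specific):

import json
import os

# Parse config_spacy.json and check that the training data it points at exists.
with open('config_spacy.json') as f:
    cfg = json.load(f)

print("pipeline:", cfg['pipeline'])            # spacy_sklearn
print("model dir:", cfg['path'])               # ./models/nlu
print("data file exists:", os.path.exists(cfg['data']))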
7.
We create the file "nlu_model.py"; this is where we write our model-training code.
# Code using the latest Rasa NLU and Rasa Core release (Mar 2019)
from rasa_nlu.training_data import load_data
from rasa_nlu import config
from rasa_nlu.model import Trainer
from rasa_nlu.model import Metadata, Interpreter

# Parameters: "data" == data file, "configs" == config file we created,
# "model_dir" == directory where the model is stored after it is trained
def train_nlu(data, configs, model_dir):
    training_data = load_data(data)
    trainer = Trainer(config.load(configs))
    trainer.train(training_data)
    model_directory = trainer.persist(model_dir, fixed_model_name='weathernlu')

def run_nlu():
    interpreter = Interpreter.load('./models/nlu/default/weathernlu')
    print(interpreter.parse(u"I am planning my holiday to Lithuania. I wonder what is the weather out there."))

if __name__ == '__main__':
    train_nlu('./data/data.json', 'config_spacy.json', './models/nlu')
    run_nlu()
(py36) ~\Desktop\Hello World chatbot>python -m spacy download en
Requirement already satisfied: en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 in ~\appdata\local\continuum\anaconda3\envs\py36\lib\site-packages (2.0.0)

symbolic link created for ~\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\spacy\data\en <<===>> ~\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\en_core_web_sm

Linking successful
    ~\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\en_core_web_sm
    -->
    ~\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\spacy\data\en

You can now load the model via spacy.load('en')
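As the log says, the linked model can now be loaded via its 'en' shortcut. A quick sanity check (the sentence is just an illustration; the printed entities depend on the model):

import spacy

# Load the English model that was just linked and run it over a sample sentence.
nlp = spacy.load('en')
doc = nlp(u"What's the weather in Berlin at the moment?")
print([(ent.text, ent.label_) for ent in doc.ents])  # e.g. [('Berlin', 'GPE')]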
…
(py36) ~\Desktop\Hello World chatbot>python nlu_model.py
Fitting 2 folds for each of 6 candidates, totalling 12 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done  12 out of  12 | elapsed:  0.0s finished
…
{'intent': {'name': 'inform', 'confidence': 0.85}, 'entities': [{'start': 28, 'end': 37, 'value': 'lithuania', 'entity': 'location', 'confidence': 0.91, 'extractor': 'ner_crf'}], 'intent_ranking': [{'name': 'inform', 'confidence': 0.85}, {'name': 'goodbye', 'confidence': 0.07}, {'name': 'greet', 'confidence': 0.06}], 'text': 'I am planning my holiday to Lithuania. I wonder what is the weather out there.'}
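The parse result is a plain Python dict, so the fields we care about are easy to pull out. A minimal sketch reusing the interpreter loaded in run_nlu():

from rasa_nlu.model import Interpreter

# Load the persisted NLU model (same path as in nlu_model.py).
interpreter = Interpreter.load('./models/nlu/default/weathernlu')
result = interpreter.parse(u"I am planning my holiday to Lithuania. "
                           u"I wonder what is the weather out there.")

intent = result['intent']['name']                 # e.g. 'inform'
confidence = result['intent']['confidence']       # e.g. 0.85
locations = [e['value'] for e in result['entities'] if e['entity'] == 'location']

print(intent, confidence, locations)              # e.g. inform 0.85 ['lithuania']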
TRAINING A RASA-CORE DIALOGUE MANAGEMENT MODEL
8.
(py36) ~\Desktop\Hello World chatbot>echo 'domain' > weather_domain.yml
"A domain is like a universe where a chatbot lives and operates. It needs to be aware of its environment, and this knowledge is kept in the 'domain' YML file."

It has five main parts: slots, intents, entities, templates and actions.

Slots: Slots are like placeholders that keep track of the context of the conversation. For example, our chatbot needs to keep track of the location we are talking about.

Templates: The text of the responses the bot can utter (e.g. "Hello! How can I help?").

Actions: What should be performed given the state of the conversation and the slots that are presently populated, e.g. uttering one of the templates or calling the custom action_weather.
slots:
  location:
    type: text

intents:
  - greet
  - goodbye
  - inform

entities:
  - location

templates:
  utter_greet:
    - 'Hello! How can I help?'
  utter_goodbye:
    - 'Talk to you later.'
    - 'Bye bye :('
  utter_ask_location:
    - 'In what location?'

actions:
  - utter_greet
  - utter_goodbye
  - utter_ask_location
  - action_weather
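YAML is whitespace-sensitive, so a quick way to catch indentation mistakes before training is to parse the domain file yourself. A minimal sketch, assuming PyYAML is available (pip install pyyaml if it isn't):

import yaml  # PyYAML; install separately if not already present

# Parse weather_domain.yml and print what the bot will know about.
with open('weather_domain.yml') as f:
    domain = yaml.safe_load(f)

print("intents :", domain['intents'])
print("entities:", domain['entities'])
print("slots   :", list(domain['slots']))
print("actions :", domain['actions'])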
9.
(py36) ~\Desktop\Hello World chatbot using Rasa>echo 'action' > actions.py

The call to the weather Web API is made here, in a custom action.
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

from rasa_core_sdk import Action
from rasa_core_sdk.events import SlotSet


class ActionWeather(Action):
    def name(self):
        return 'action_weather'

    def run(self, dispatcher, tracker, domain):
        from apixu.client import ApixuClient
        api_key = '...'  # your apixu key
        client = ApixuClient(api_key)

        loc = tracker.get_slot('location')
        current = client.getcurrent(q=loc)

        country = current['location']['country']
        city = current['location']['name']
        condition = current['current']['condition']['text']
        temperature_c = current['current']['temp_c']
        humidity = current['current']['humidity']
        wind_mph = current['current']['wind_mph']

        response = """It is currently {} in {} at the moment. The temperature is {} degrees, the humidity is {}% and the wind speed is {} mph.""".format(condition, city, temperature_c, humidity, wind_mph)

        dispatcher.utter_message(response)
        return [SlotSet('location', loc)]
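Before wiring the API into the action, it can help to check the key and the response shape on their own. A minimal sketch reusing the same client and call as the action above (the key is a placeholder and 'London' is just an example query):

from apixu.client import ApixuClient

api_key = '...'  # placeholder: your apixu key
client = ApixuClient(api_key)

# Same call ActionWeather.run() makes; print the fields the response uses.
current = client.getcurrent(q='London')
print(current['location']['name'],
      current['current']['condition']['text'],
      current['current']['temp_c'])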
CREATING STORIES.
10.
(py36) ~\Desktop\Hello World chatbot using Rasa>cd data
(py36) ~\Desktop\Hello World chatbot using Rasa\data>echo 'stories' > stories.md
Here we are going to create some stateless stories.
## Generated Story 3320800183399695936
* greet
    - utter_greet
* inform
    - utter_ask_location
* inform{"location": "italy"}
    - slot{"location": "italy"}
    - action_weather
    - slot{"location": "italy"}
* goodbye
    - utter_goodbye
    - export

## Generated Story -3351152636827275381
* greet
    - utter_greet
* inform{"location": "London"}
    - slot{"location": "London"}
    - action_weather
* goodbye
    - utter_goodbye
    - export

## Generated Story 8921121480760034253
* greet
    - utter_greet
* inform
    - utter_ask_location
* inform{"location":"London"}
    - slot{"location": "London"}
    - action_weather
* goodbye
    - utter_goodbye
    - export

## Generated Story -5208991511085841103
    - slot{"location": "London"}
    - action_weather
* goodbye
    - utter_goodbye
    - export
## story_001
* greet
    - utter_greet
* inform
    - utter_ask_location
* inform{"location":"London"}
    - slot{"location": "London"}
    - action_weather
* goodbye
    - utter_goodbye

## story_002
* greet
    - utter_greet
* inform{"location":"Paris"}
    - slot{"location": "Paris"}
    - action_weather
* goodbye
    - utter_goodbye

## story_003
* greet
    - utter_greet
* inform
    - utter_ask_location
* inform{"location":"Vilnius"}
    - slot{"location": "Vilnius"}
    - action_weather
* goodbye
    - utter_goodbye

## story_004
* greet
    - utter_greet
* inform{"location":"Italy"}
    - slot{"location": "Italy"}
    - action_weather
* goodbye
    - utter_goodbye

## story_005
* greet
    - utter_greet
* inform
    - utter_ask_location
* inform{"location":"Lithuania"}
    - slot{"location": "Lithuania"}
    - action_weather
* goodbye
    - utter_goodbye
ONLINE TRAINING LOGS
11.
Note: Before this step, "nlu_model.py" should be executed.
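The post never lists train_interactive.py itself. Below is a rough sketch of what such a script might look like, assembled from the same Agent/policy pieces used in dialogue_management_model.py further down; rasa_core.training.interactive.run_interactive_learning is inferred from the log output below, and its exact signature may differ between Rasa Core versions, so treat this as an assumption rather than the author's actual file:

# train_interactive.py -- hypothetical sketch, not the author's original file.
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy
from rasa_core.training import interactive


def train_interactive():
    # Use the NLU model trained by nlu_model.py to interpret user messages.
    interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
    # Policy settings are an assumption; the log below shows 3 training epochs.
    agent = Agent('weather_domain.yml',
                  policies=[MemoizationPolicy(),
                            KerasPolicy(max_history=3, epochs=3, batch_size=50)],
                  interpreter=interpreter)
    # Train on the stateless stories, then start interactive (online) learning.
    training_data = agent.load_data('./data/stories.md')
    agent.train(training_data)
    interactive.run_interactive_learning(agent, './data/stories.md')


if __name__ == '__main__':
    train_interactive()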
(py36) ~\Desktop\Hello World chatbot using Rasa>python train_interactive.py
INFO:rasa_nlu.components:Added 'nlp_spacy' to component cache. Key 'nlp_spacy-en'.
Processed Story Blocks: 100%|█████████████| 10/10 [00:00<00:00, 2489.94it/s, # trackers=1]
Processed Story Blocks: 100%|█████████████| 10/10 [00:00<00:00, 1105.33it/s, # trackers=2]
Processed Story Blocks: 100%|█████████████| 10/10 [00:00<00:00, 904.06it/s, # trackers=3]
Processed Story Blocks: 100%|█████████████| 10/10 [00:00<00:00, 663.61it/s, # trackers=3]
Processed actions: 20it [00:00, 4979.58it/s, # examples=20]
2019-03-25 11:03:48.928532: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
masking (Masking)            (None, 3, 16)             0
_________________________________________________________________
lstm (LSTM)                  (None, 32)                6272
_________________________________________________________________
dense (Dense)                (None, 11)                363
_________________________________________________________________
activation (Activation)      (None, 11)                0
=================================================================
Total params: 6,635
Trainable params: 6,635
Non-trainable params: 0
_________________________________________________________________
INFO:rasa_core.policies.keras_policy:Fitting model with 25 total samples and a validation split of 0.1
Epoch 1/3
25/25 [==============================] - 1s 44ms/step - loss: 2.4382 - acc: 0.0400
Epoch 2/3
25/25 [==============================] - 0s 242us/step - loss: 2.3975 - acc: 0.1600
Epoch 3/3
25/25 [==============================] - 0s 241us/step - loss: 2.3753 - acc: 0.1200
INFO:rasa_core.policies.keras_policy:Done fitting keras policy model
INFO:rasa_core.training.interactive:Rasa Core server is up and running on http://localhost:5005
Bot loaded. Visualisation at http://localhost:5005/visualization.html.
Type a message and press enter (press 'Ctr-c' to exit).
Processed Story Blocks: 100%|██████████████| 10/10 [00:00<00:00, 995.07it/s, # trackers=1]
? Next user input (Ctr-c to abort): hi
? Is the NLU classification for 'hi' with intent 'greet' correct? Yes

------
Chat History

 #    Bot                        You
────────────────────────────────────────────
 1    action_listen
────────────────────────────────────────────
 2                               hi
                                 intent: greet 0.84

Current slots:
    location: None

------
? The bot wants to run 'utter_greet', correct? Yes

------
Chat History

 #    Bot                             You
─────────────────────────────────────────────────────
 1    action_listen
─────────────────────────────────────────────────────
 2                                    hi
                                      intent: greet 0.84
─────────────────────────────────────────────────────
 3    utter_greet 1.00
      Hello! How can I help?
12.
(py36) ~\Desktop\Hello World chatbot using Rasa>python dialogue_management_model.py

The contents of "dialogue_management_model.py":
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import logging

import rasa_core
from rasa_core.agent import Agent
from rasa_core.policies.keras_policy import KerasPolicy
from rasa_core.policies.memoization import MemoizationPolicy
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_core.utils import EndpointConfig
from rasa_core.run import serve_application
from rasa_core import config

logger = logging.getLogger(__name__)


def train_dialogue(domain_file='weather_domain.yml',
                   model_path='./models/dialogue',
                   training_data_file='./data/stories.md'):
    agent = Agent(domain_file,
                  policies=[MemoizationPolicy(),
                            KerasPolicy(max_history=3, epochs=200, batch_size=50)])
    data = agent.load_data(training_data_file)
    agent.train(data)
    agent.persist(model_path)
    return agent


def run_weather_bot(serve_forever=True):
    interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
    action_endpoint = EndpointConfig(url="http://localhost:5055/webhook")
    agent = Agent.load('./models/dialogue',
                       interpreter=interpreter,
                       action_endpoint=action_endpoint)
    rasa_core.run.serve_application(agent, channel='cmdline')
    return agent


if __name__ == '__main__':
    train_dialogue()
    run_weather_bot()
13.
RUNNING THE SERVER
Create the "endpoints.yml" file:

action_endpoint:
  url: "http://localhost:5055/webhook/"

This tells Rasa Core where the custom action server (the process that runs actions.py) can be reached. That server has to be started separately before the bot can execute action_weather; with rasa_core_sdk of this era the usual invocation is along the lines of "python -m rasa_core_sdk.endpoint --actions actions" (check your installed version's documentation for the exact command).
Then:

(py36) ~\Desktop\Hello World chatbot using Rasa>python -m rasa_core.run -d models/dialogue -u models/nlu/default/weathernlu/ --endpoints endpoints.yml

2019-03-26 12:01:33 INFO     root  - Rasa process starting
2019-03-26 12:01:33 INFO     rasa_nlu.components  - Added 'nlp_spacy' to component cache. Key 'nlp_spacy-en'.
2019-03-26 12:01:43.251690: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-03-26 12:01:44 INFO     root  - Rasa Core server is up and running on http://localhost:5005
Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input ->  hi
127.0.0.1 - - [2019-03-26 12:01:50] "POST /webhooks/rest/webhook?stream=true&token= HTTP/1.1" 200 190 0.202809
Hello! How can I help?
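The log line above shows the command-line channel posting to the REST webhook on port 5005, so the running server can also be exercised programmatically. A minimal sketch using the requests library (the payload format is the standard Rasa REST channel format; the sender id is arbitrary):

import requests

# Send one message to the running Rasa Core server and print the bot's replies.
resp = requests.post(
    "http://localhost:5005/webhooks/rest/webhook",
    json={"sender": "user1", "message": "hi"},
)
for message in resp.json():
    print(message.get("text"))  # e.g. "Hello! How can I help?"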