Sunday, November 13, 2022

Machine Translation (NLP) Quiz: 12 Questions

Q1: If 'm' is the input vector size and 'h' is the size of the output (hidden-state) vector, what is the number of parameters in an LSTM network?

a) 4(mh + h^2 + h)
b) 4mh + h^2 + h 
c) 4(mh + h^2 + 2h)
d) Cannot be determined
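
For a quick sanity check, here is a minimal Keras sketch (the values of m and h are arbitrary; Keras keeps a single bias vector per gate, matching the textbook formula):

```python
import tensorflow as tf

m, h = 10, 20  # illustrative input size and hidden size
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, m)),
    tf.keras.layers.LSTM(h),
])
print(model.count_params())   # 2480
print(4 * (m*h + h*h + h))    # 2480 -> consistent with option (a)
```

Note that frameworks differ in bias bookkeeping: PyTorch's nn.LSTM keeps two bias vectors per gate (b_ih and b_hh), so it would report 4(mh + h^2 + 2h) instead.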

Q2: (Multiple choices may be correct)
Which of the following statements are true about the projection layer in an LSTM?

a) It is a neural network layer with a non-linear activation function.

b) It is a neural network layer without a non-linear activation function.

c) A projection layer involves a simple matrix multiplication.

d) The projection is used to convert an n-dimensional vector to a vector of dimension smaller than n.
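
As a minimal sketch (all names and sizes here are illustrative), a projection layer reduces to a single matrix multiplication with no activation:

```python
import numpy as np

h, p = 20, 8                   # hidden size and (smaller) projected size
W = np.random.randn(h, p)      # learned projection matrix
hidden = np.random.randn(h)    # e.g. an LSTM hidden state
projected = hidden @ W         # plain matrix multiplication, no non-linearity
print(projected.shape)         # (8,)
```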

Q3: What are the internal state vectors computed by the encoder of an LSTM model?

a) Context vector
b) Hidden state 
c) Cell state vector 
d) Both B and C 
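
One way to see this (a minimal PyTorch sketch; sizes are arbitrary): nn.LSTM returns the hidden state and the cell state alongside the per-step outputs.

```python
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=10, hidden_size=20)  # illustrative sizes
x = torch.randn(7, 1, 10)                         # (seq_len, batch, input_size)
output, (h_n, c_n) = encoder(x)
print(h_n.shape, c_n.shape)   # hidden state and cell state vectors
```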

Q4: Multi-headed attention is defined as:
a) The parallel processing of several sets of attention layers
b) Self-attention layers working sequentially
c) Self-attention layers with multi-headed output
d) None of the above
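
A minimal NumPy sketch of the idea (the projections are randomly initialised here purely for illustration): several attention computations run in parallel, one per head, and their outputs are concatenated.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

n, d, heads = 5, 16, 4          # tokens, model dim, number of heads
X = np.random.randn(n, d)
outs = []
for _ in range(heads):          # each head has its own projections
    Wq, Wk, Wv = (np.random.randn(d, d // heads) for _ in range(3))
    outs.append(attention(X @ Wq, X @ Wk, X @ Wv))
multi_head = np.concatenate(outs, axis=-1)
print(multi_head.shape)         # (5, 16)
```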

Q5: The dot product Q × K^T results in:
a) A square matrix
b) A correlation matrix indicating word-to-word relations
c) Both A and B
d) None of the above.
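
A minimal sketch (sizes arbitrary): for n tokens, Q × K^T yields an n × n square matrix whose entry (i, j) scores how strongly token i relates to token j.

```python
import numpy as np

n, d_k = 6, 8                # number of tokens, key dimension
Q = np.random.randn(n, d_k)
K = np.random.randn(n, d_k)
scores = Q @ K.T             # (6, 6): square, token-to-token relation scores
print(scores.shape)
```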

Q6: Are the outputs of the encoder in an LSTM model fed into the decoder?
a) Yes
b) No 
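
A minimal Keras sketch of the usual encoder-decoder wiring (sizes arbitrary): the encoder's final hidden and cell states are passed to the decoder as its initial states.

```python
import tensorflow as tf

enc_in = tf.keras.Input(shape=(None, 10))
_, h, c = tf.keras.layers.LSTM(20, return_state=True)(enc_in)

dec_in = tf.keras.Input(shape=(None, 10))
dec_out = tf.keras.layers.LSTM(20, return_sequences=True)(
    dec_in, initial_state=[h, c]   # encoder states fed into the decoder
)
model = tf.keras.Model([enc_in, dec_in], dec_out)
model.summary()
```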

Q7: The number of feed-forward layers in the encoder of the base BERT model is:
a) 768 
b) 1024
c) 12
d) 512
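
A quick check, assuming the Hugging Face transformers package (the default BertConfig mirrors BERT-base):

```python
from transformers import BertConfig

config = BertConfig()              # defaults correspond to BERT-base
print(config.num_hidden_layers)    # 12 encoder layers
print(config.hidden_size)          # 768 (hidden size, not layer count)
```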

Q8: Select the method used by the Transformer.

a) Recurrence
b) Attention 
c) Both A and B
d) None of the above 

Q9: Which of the following datasets are used to train BERT?
a) Wikipedia and Books
b) Encyclopedia
c) It depends on the task
d) It depends on the user's choice

Q10: Which of the following applications can be built using Transformers?
a) Language modeling 
b) Translation 
c) Classification
d) All of the above 

Q11: Suppose two sentences are given below:
"I go to the bank to draw money."
"I sit on the river bank."

In both sentences, the word "bank" is used in a different context. Suppose you build one model using word2vec and another using BERT for word embeddings. What output do you expect from each?

a) The embedding produced by word2vec for the word 'bank' will be the same in both sentences, while BERT produces a different embedding for each sentence.

b) Both models produce the same word embedding for 'bank'.

c) No, BERT can't be used for the cases given in the question.

d) No, word2vec can't be used for the cases given in the question.
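
A minimal sketch of the contrast, assuming the Hugging Face transformers package and the bert-base-uncased checkpoint (word2vec's behaviour is described in a comment rather than run, to keep the example short):

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence):
    # return BERT's contextual vector for the token "bank"
    inputs = tok(sentence, return_tensors="pt")
    idx = inputs["input_ids"][0].tolist().index(
        tok.convert_tokens_to_ids("bank"))
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return hidden[idx]

v1 = bank_vector("I go to the bank to draw money.")
v2 = bank_vector("I sit on the river bank.")
# word2vec assigns "bank" a single fixed vector regardless of context;
# BERT's two vectors differ because the surrounding words differ
print(torch.allclose(v1, v2))  # False
```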

Q12: Which of the following are use cases of machine translation?

a) Avoid language barriers in communication

b) Provide access to information that is in an unknown language.

c) Enhance customer service through domain-specific machine translation

d) All of the above. 
Tags: Technology, Natural Language Processing
