# Word embedding input for nengo model?

#1

I am trying different Nengo models for classification, and I want to try sentiment analysis with a Nengo model. But Nengo neurons are optimized to represent values between -1 and 1 by default, which means I have to provide continuous values as input. In sentiment analysis my input is text, so can I use word embeddings as input? Something like:

Example: "This is demo sentence."

converted into int ids: `[23, 24, 25, 26]`

Since those are discrete values, then:

`word_embedding(int_ids)` → `[[0.11, 0.43, 0.14, …], [0.2, 0.54, 0.1, …], [0.43, 0.22, 0.14, …], [0.13, 0.10, 0.14, …]]`

Now this is a 2D array. If my sentiment classes look like:

`[[0.11, 0.43, 0.14, …], [0.2, 0.54, 0.1, …], [0.43, 0.22, 0.14, …], [0.13, 0.10, 0.14, …]]` → `[0, 1]`

So will it work? My doubt is: since this is a 2D array, will the model treat each row as an individual input, or as elements of the same vector?

My second question is: how can we feed values in batches to a Nengo model?

#2

Hi @Monk,
There are a couple of different ways to approach this problem. The simplest is probably to map your integer ids to vectors beforehand (i.e. do the embedding lookup), and then provide either their concatenation or sum as input to the model. So, if your embeddings are each 50 dim, you could sum them up and provide them as input to a 50D ensemble or ensemble array in Nengo. Then, you can solve for decoders that try to map between a bunch of these inputs and their corresponding sentiment ratings. To do this, you can connect your ensemble to an output node, and specify the function you want decode on this connection with a line like the following:

`nengo.Connection(ensemble, output_node, eval_points=embeddings, function=ratings)`

where `embeddings` is the collection of inputs to your model and `ratings` is the collection of sentiment ratings corresponding to these inputs.

Beyond this starting point, you might look at using Nengo DL to implement a more sophisticated model that is trained end-to-end using gradient-based algorithms. Nengo DL will also let you feed batched inputs into your model, unlike normal Nengo.

Finally, Nengo SPA might also be worth checking out, as it lets you map between symbolic expressions (e.g. `SentenceA`) and underlying vector representations, which makes it convenient to construct fairly complicated models that manipulate these representations in a variety of different ways.

Anyway, hopefully these are some useful pointers - feel free to let us know if you have any follow-up questions.