Difference between numpy.dot and vocab.dot method

Hello!
I want to find out how complex a semantic pointer (SP) can be and still be unbound correctly.
First, I made many strings (e.g. "A1B1+C1D1+E1F1+G1H1", "A1B1+C1D1+E1F1+G1(A2B2+C2D2+E2F2+G2H2", and so on). Next, I allocated an SP for each of these strings using the vocab.parse method. Then I defined my SPA model, which memorizes an SP and unbinds the memory with SPs of various complexities.
I ultimately want to use action selection, so I incorporated a cortex-basal ganglia-thalamus loop in my program, but that is not the point here.

import nengo
import nengo.spa as spa
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
D = 800   # dimensionality of the semantic pointers
st = 0.2  # duration (s) of each input phase
SD = 100  # subdimensions of each spa.State
npd = 50  # (not used below)

vocab = spa.Vocabulary(dimensions=D)
vocab.add('ZERO', [0]*D)

s = []
for n in range(5):
    s.append('A'+str(n)+'*B'+str(n)+'+C'+str(n)+'*D'+str(n)+'+E'+str(n)+'*F'+str(n)+'+G'+str(n)+'*H'+str(n))

full_text=s[0]
l=len(s)
text=[]
for i in range(1,l):
    full_text=full_text[:-2]
    full_text=full_text+'('+s[i]
    tmp=full_text+')'*i
    text.append(tmp)

for num in range(len(s)):
    vocab.add('S'+str(num),vocab.parse(s[num]))
    vocab.parse('S'+str(num)).normalize()

for num in range(len(text)):
    vocab.add('T'+str(num),vocab.parse(text[num]))
    vocab.parse('T'+str(num)).normalize()

p=[]
p2=[]
p.append(s[-1])
for num in range(1,l):
    p.append(s[-1-num][:-2]+'('+p[num-1])
    p2.append(p[num]+')'*num)
    vocab.add('M'+str(num-1), vocab.parse(p2[num-1]))

model = spa.SPA(label="Simple question answering",vocabs=[vocab])

with model:
    model.text_in = spa.State(dimensions=D,subdimensions=SD)
    model.question_cue = spa.State(dimensions=D,subdimensions=SD)
    model.memory8 = spa.State(D, subdimensions=SD,feedback=1, feedback_synapse=0.1)
    model.question=spa.State(D,subdimensions=SD)
    model.out = spa.State(dimensions=D,subdimensions=2*SD)

    cortical_actions1 = spa.Actions(
        'dot(text_in, STATEMENT8) --> memory8=text_in-STATEMENT8',
        'dot(text_in,QUESTION) --> question=text_in-QUESTION'
        )
    model.bg1 = spa.BasalGanglia(cortical_actions1)
    model.thalamus1 = spa.Thalamus(model.bg1)

    cortical_actions2=spa.Actions(
        'dot(question_cue, T3) --> out = memory8*~question'
        )
    model.bg2 = spa.BasalGanglia(cortical_actions2)
    model.thalamus2 = spa.Thalamus(model.bg2)


def text_input(t):
    if t < st:
        return text[3]+'+STATEMENT8'  # T3
    elif t < 2*st:
        return 'QUESTION+G0'  # answer: M2
    elif t < 3*st:
        return 'QUESTION+G0*G1'  # answer: M1
    elif t < 4*st:
        return 'QUESTION+G0*G1*G2'  # answer: M0
    elif t < 5*st:
        return 'QUESTION+G0*G1*G2*G3'  # answer: S4
    elif t < 6*st:
        return 'QUESTION+G0*G1*G2*G3*G4'  # answer: H4
    else:
        return 'ZERO'


def question_cue_input(t):
    if t < st:
        return 'ZERO'
    elif t < 6*st:
        return 'A0*B0+C0*D0+E0*F0'
    else:
        return 'ZERO'


with model:
    model.inp = spa.Input(text_in=text_input, question_cue=question_cue_input)


with model:
    model.config[nengo.Probe].synapse = nengo.Lowpass(0.03)
    text_in = nengo.Probe(model.text_in.output)
    question = nengo.Probe(model.question.output)
    question_cue = nengo.Probe(model.question_cue.output)
    memory8 = nengo.Probe(model.memory8.output)
    out = nengo.Probe(model.out.output)

with nengo.Simulator(model) as sim:
    sim.run(1.2)

vocab = model.get_default_vocab(D)

def calc_thousand(n):
    # convert a simulation time in seconds to the corresponding probe sample
    # index (probes are sampled every 1 ms with the default dt)
    return int(1000*n-1)

real_times=[0.4,0.6,0.8,1.0,1.2]
data_number=[]
for time in real_times:
    data_number.append(calc_thousand(time))

for time in data_number:
    print('t= %1.2f' %((time+1)/1000))
    for voc in vocab.keys:
        #This is np.dot method.
        print(voc, np.dot(vocab.parse(voc).v, sim.data[out][time]))
    #This is vocab.dot method.
    similarity_list=vocab.dot(sim.data[out][time])
    max_index = np.argmax(similarity_list)
    print('max similarity : %1.3f' %similarity_list[max_index])
    print('\n')

The result was as follows.

t= 0.40
ZERO 0.0
A0 -0.0711598920314
B0 -0.0185538433031
C0 0.0653586916782
D0 -0.013633515241
E0 0.0295124452548
F0 0.0613176793024
G0 0.0901809599379
H0 -0.00736558012099
S0 -0.00281064030801
A1 0.00515122321343
B1 -0.0295642218196
C1 0.131872616376
D1 -0.0222840861488
E1 0.00141004869297
F1 -0.0238852428956
G1 0.00868908415146
H1 -0.0353239898149
S1 0.253013098467
A2 0.0181090728771
B2 -0.0416298771325
C2 0.0313327791915
D2 -0.0161826744954
E2 -0.000122356649961
F2 -0.016780610398
G2 -0.0155523830888
H2 -0.00236038535522
S2 0.0940145924772
A3 -0.0104298031901
B3 -0.00202432658138
C3 0.0843785510704
D3 0.0384394179257
E3 -0.00896554157488
F3 -0.0864632560742
G3 0.106091607701
H3 -0.0565700974051
S3 -0.0255238183447
A4 0.0421429863804
B4 0.0146193121551
C4 -0.0332112234137
D4 0.00415862949778
E4 -0.0110309424435
F4 0.000424725967714
G4 -0.0240098112492
H4 0.0456887710921
S4 0.0199353197718
T0 -0.0228629007518
T1 0.0112987027612
T2 0.00499547372896
T3 -0.0301468901552
M0 -0.0547080368833
M1 0.202839850707
M2 1.84209742152
M3 -0.122710439125
STATEMENT8 -0.0425633568342
QUESTION 0.0174397122717
max similarity : 1.842


t= 0.60
ZERO 0.0
A0 -0.0278665201957
B0 0.0240332041731
C0 0.0589865530933
D0 0.0230770539884
E0 -0.00696448393608
F0 0.0568741854579
G0 0.0385507267892
H0 -0.0410206500367
S0 -0.017554138538
A1 -0.0133498703302
B1 0.000343533029552
C1 0.0533027979566
D1 0.0396341523421
E1 0.0467445920439
F1 -0.0507407505084
G1 -0.0288505982656
H1 -0.00394885452715
S1 0.0507968238018
A2 -0.0204760494146
B2 0.00717196000512
C2 0.00537085600671
D2 -0.033350185064
E2 -0.00148493677888
F2 0.0377738027241
G2 -0.0230904596924
H2 0.0219564060618
S2 0.176970527271
A3 -0.00454961346886
B3 -0.0413563104307
C3 -8.42615734456e-05
D3 0.00676974666215
E3 -0.0413029031801
F3 -0.0428660927725
G3 -0.00160906607978
H3 -0.000575685356943
S3 0.0579483478679
A4 0.0283471551793
B4 0.0293545606648
C4 0.0382197606238
D4 0.0545515771
E4 0.0353755019301
F4 0.0308050579436
G4 -0.0144985802592
H4 0.0441775460888
S4 0.0994793464571
T0 -0.0457802593745
T1 0.0114889181017
T2 0.0293247358106
T3 -0.162235056831
M0 0.283643113934
M1 1.26814093594
M2 0.238890696349
M3 -0.66036446754
STATEMENT8 -0.0759914622915
QUESTION 0.00818457363144
max similarity : 1.268


t= 0.80
ZERO 0.0
A0 -0.0437969776809
B0 -0.0188370880816
C0 0.0189561022941
D0 0.0361657890768
E0 -0.000529991145057
F0 -0.0113656726164
G0 0.016535231054
H0 -0.0395959693753
S0 -0.0667444613071
A1 -0.0161359212997
B1 0.0102479909279
C1 0.0401815164798
D1 0.0222926984252
E1 0.0493406045061
F1 0.0463220797267
G1 0.00217822779686
H1 -0.046115492742
S1 -0.0257013668857
A2 0.000754859855549
B2 0.0244757045959
C2 0.011578747808
D2 0.0184545837683
E2 0.0414926411591
F2 0.0301639390997
G2 -0.000731269536809
H2 0.0184975808374
S2 -0.00184193038178
A3 0.0166144680838
B3 -0.0603712197458
C3 0.0468281454564
D3 0.0126188782766
E3 0.0372145095472
F3 -0.0310048213335
G3 0.0423318757885
H3 0.0470930156964
S3 0.14336458902
A4 -0.0459411364887
B4 0.069811098882
C4 0.00769104705769
D4 0.0292168867249
E4 0.000124476912086
F4 -0.0684403919578
G4 0.00917500638893
H4 0.0425895765141
S4 -0.000794315048761
T0 -0.114107971494
T1 -0.131638007646
T2 -0.143087409769
T3 -0.112109097975
M0 0.864577949649
M1 0.309068192238
M2 0.210703480889
M3 -0.456330871001
STATEMENT8 -0.00709235615987
QUESTION 0.04066880401
max similarity : 0.865


t= 1.00
ZERO 0.0
A0 -0.0402442667418
B0 -0.0449785915291
C0 -0.000332951284337
D0 -0.025927264662
E0 -0.0277345995915
F0 -0.0099958745592
G0 0.0347401616691
H0 0.00778696950691
S0 -0.0517420754629
A1 -0.00911327250356
B1 -0.0527956229265
C1 -0.0142585907022
D1 0.0169724674401
E1 0.0192538666359
F1 -0.0289618004961
G1 -0.00251157550182
H1 0.0347740487716
S1 -0.0192515561088
A2 -0.00553130408713
B2 -0.0525913382355
C2 0.0227008026361
D2 -0.048490201961
E2 -0.0245210925694
F2 -0.0481045454102
G2 -0.0262953743352
H2 0.0215959348682
S2 0.0321084770362
A3 0.00423891382021
B3 -0.0261535839563
C3 -0.00943050899303
D3 -0.00325095562056
E3 -0.0206207804024
F3 0.0004125583132
G3 0.0154285595266
H3 -0.015547721094
S3 -0.0058482914594
A4 0.0688829338885
B4 -0.0758922789414
C4 0.000820211516831
D4 0.0970465559845
E4 -0.0521099957675
F4 0.00710022719115
G4 0.0427089431193
H4 -0.0274840733367
S4 0.242519142503
T0 -0.036756418862
T1 -0.122766643537
T2 -0.0307327370849
T3 -0.223071500115
M0 -0.021662837013
M1 0.106061737469
M2 0.385817316688
M3 -0.907994210831
STATEMENT8 -0.0357566855238
QUESTION -0.0388896390911
max similarity : 0.467


t= 1.20
ZERO 0.0
A0 -0.0303352651665
B0 0.0200546864392
C0 0.0316485169888
D0 -0.0230741688917
E0 0.00232821194414
F0 -0.0177073401057
G0 0.0170423729981
H0 -0.0478041746972
S0 -0.048398961298
A1 -0.0173442642574
B1 -0.0156276238295
C1 -0.0224463165969
D1 -0.00146564521862
E1 0.0126259635858
F1 0.000796187809248
G1 0.0289837057927
H1 -0.0636139187164
S1 0.0428031556857
A2 0.0425360037895
B2 0.0315063208438
C2 -0.0198731719423
D2 -0.00562731639185
E2 0.0139946060688
F2 -0.0380789902123
G2 0.010556298673
H2 -0.0201463732867
S2 0.0332865247926
A3 0.0280202358217
B3 -0.0272844014648
C3 -0.0275323212241
D3 -0.0278086969366
E3 0.0663148750051
F3 -0.0013681474876
G3 0.00760057395073
H3 0.0233048864267
S3 -0.0139097874557
A4 -0.0162206219162
B4 0.0548015154105
C4 0.0135435359835
D4 -0.0139909877429
E4 0.0478573349536
F4 -0.0513490517526
G4 0.0337232194622
H4 0.0858004237428
S4 -0.000256391269219
T0 -0.0939064434377
T1 -0.0625499302916
T2 -0.148151793463
T3 -0.00153715084305
M0 0.334835790473
M1 0.0448885240631
M2 0.249516224391
M3 -0.00625684619479
STATEMENT8 -0.00280967951596
QUESTION 0.0109087820963
max similarity : 0.335

I want to display the similarity between the original SPs and the simulation data. There are two ways to do it: the first is to use np.dot, and the second is to use the vocab.dot method.
I wanted the value of "max similarity" printed by my program to be the maximum of the individual similarity values.
In the first three examples the two agree, but in the last two examples the value of "max similarity" differs from the maximum of the individual similarities.
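
For reference, here is a minimal, self-contained sketch (toy vocabulary, hypothetical key names, not the program above) of the two ways of computing the similarity; as long as the vocabulary's pointers and vectors are consistent, both should give the same numbers:

import numpy as np
import nengo.spa as spa

toy = spa.Vocabulary(dimensions=64)
probe_vector = toy.parse('A * B').v  # stand-in for one row of sim.data[out]

# Method 1: np.dot against each stored SemanticPointer
manual = [np.dot(toy[key].v, probe_vector) for key in toy.keys]
# Method 2: vocab.dot against the whole vocab.vectors array at once
batched = toy.dot(probe_vector)

print(np.allclose(manual, batched))  # expected: True here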

I don't know why… any advice would be appreciated. Thank you.

I'm sorry, I made a mistake. The example strings should be

(e.g. "A1*B1+C1*D1+E1*F1+G1*H1", "A1*B1+C1*D1+E1*F1+G1*(A2*B2+C2*D2+E2*F2+G2*H2", and so on)
not
(e.g. "A1B1+C1D1+E1F1+G1H1", "A1B1+C1D1+E1F1+G1(A2B2+C2D2+E2F2+G2H2", and so on)

Hi!
I went over your code briefly, and it looks like a misalignment has been introduced somewhere between the SP values stored in vocab.vectors and those stored in the dictionary vocab.pointers. I haven't looked in detail to find where this occurred, but you can see that this is the case by doing the following:

for index, key in enumerate(vocab.keys):
    v1 = vocab[key].v             # vector stored in the vocab.pointers dictionary
    v2 = vocab.vectors[index, :]  # corresponding row of the vocab.vectors array
    print(np.allclose(v1, v2))

You'll notice that the vectors occasionally differ, which means the dot products you are computing with each method will also differ. More generally, the data associated with each semantic pointer in a vocabulary is stored in two places: a dictionary that maps keys to SemanticPointer objects (i.e. vocab.pointers) and an array whose rows correspond to the vector values of these SemanticPointer objects (i.e. vocab.vectors). When you add a semantic pointer to the vocabulary by calling add (or parse with a new key), the newly created SemanticPointer object is added to the dictionary, and the underlying vector is added as a new row of vocab.vectors. So if you later make changes in one place but not the other, you can get confusing results of this sort.
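
To make this concrete, here is a small sketch (toy vocabulary, illustrative key names) of how modifying a SemanticPointer in place leaves vocab.vectors untouched:

import numpy as np
import nengo.spa as spa

vocab = spa.Vocabulary(dimensions=64)
vocab.add('S', vocab.parse('A + B'))  # 'S' now lives in both vocab.pointers and vocab.vectors
vocab.parse('S').normalize()          # mutates the SemanticPointer in vocab.pointers only

index = vocab.keys.index('S')
print(np.allclose(vocab['S'].v, vocab.vectors[index]))  # expected: False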

As an aside, I believe vocabs are designed to be read-only in the new nengo_spa library (https://github.com/nengo/nengo_spa) partly because of this issue, as discussed here: https://github.com/nengo/nengo/pull/953. This new library is under active development and is not fully documented yet, but it may be of interest to keep an eye on.

The culprit is the normalization of the vectors. Lines like

vocab.parse('S'+str(num)).normalize()

will change the vector stored in vocab.pointers, but not the one in vocab.vectors. To fix this you can change your code segments where you add and normalize vectors to something like this:

p = vocab.parse(s[num])
p.normalize()
vocab.add('S'+str(num), p)
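
Applied to the loops in your original post, that would look roughly like this (a sketch; the M pointers are added without a separate normalize call, so they are already consistent):

for num in range(len(s)):
    p = vocab.parse(s[num])
    p.normalize()
    vocab.add('S' + str(num), p)  # the normalized vector is what gets stored in both places

for num in range(len(text)):
    p = vocab.parse(text[num])
    p.normalize()
    vocab.add('T' + str(num), p)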

As @pblouw mentioned, in the redesigned nengo_spa library SemanticPointer objects are immutable, so p.normalize() will not change the instance p but will return a new instance. Furthermore, adding normalized vectors in the way you are doing will be easier. In the new library, you could use this code:

vocab.populate('S{} = ({}).normalized()'.format(num, s[num]))
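
For a fuller picture, a hypothetical example of how that could be used (a sketch against the nengo_spa API at the time of writing, which may still change; I'm assuming the component pointers are declared before being used in the expression):

import nengo_spa as spa

vocab = spa.Vocabulary(64)
# declare the component pointers first, then store the normalized combination
vocab.populate('A1; B1; C1; D1')
vocab.populate('S1 = (A1 * B1 + C1 * D1).normalized()')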

However, the new nengo_spa is not stable yet: some major syntax changes are coming in the next few months, and the documentation still needs to be improved.

Thank you for your answers!
The vocab.populate() method looks better, I think. I'm looking forward to the new library.