Using Nengo in agent-based modeling

Hello

I am trying to integrate the Python-based Nengo with an agent-based model (ABM). The idea is that each agent will be controlled by a neural network that is modifiable, adaptive and evolvable. This means that on each time step the input that the agent senses from the environment will be passed to the neural network, and the output will be executed as an action in the ABM. The neural network will run in continuous time and its state will be preserved until the next time step of the ABM simulation.

So far I have been using Repast Simphony, which is written in Java. Ideally I would like to build my neural network in Nengo, call it from the Java code to use it in Repast, and vice versa inspect any modifications, learning or evolution of the NN back in Nengo. Until now I have been using Java NN libraries (Simbrain) and writing the code directly in Java, but it seems Nengo could be more appropriate for my goals.

What would be the best way to integrate Nengo into a Java application, so that I can have several discrete, continuously running NNs and, at each time step:

  1. modify the stored network through reinforcement learning based on the outcome of the previous action
  2. feed it input and retrieve the output
  3. preserve the NN state until the next time it is called from the Java code of the ABM simulation
  4. access any parameter of the network in real time, so that I can create new, slightly modified NNs when new agents spawn in the ABM simulation

How would you implement such an integration?
Any suggestions or ideas are welcome.

Thank you for your time


Thanks for switching from the mailing list to the forum!

How much do you know about Nengo? I ask because I don’t want to go on about reinforcement learning in Nengo if you already know all about it.

In terms of interfacing with Nengo, once you build a model, it’s very easy to control the flow of information through the model. Additionally, with the nengo.Node() object you can input any array-like data you want into the model. To illustrate my point I’ve created some semi-functional code below:

# build a model of some sort
model = nengo.Network()
...

# make a simulator object
sim = nengo.Simulator(model)

while True:
    # get data from Java program somehow using inter-process communication
    # maybe a pipe or a shared file?
    data = get_data_from_java()
    model.node.input_data(data)
    # increment a time-step in the model
    sim.step()
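
If it helps make those placeholders concrete: one (of many) ways to get the data across is a small socket server on the Python side with a line-based JSON protocol. This is only a sketch under my own assumptions (the port, the message format and the toy model are all arbitrary choices), not an official recipe:

import json
import socket

import numpy as np
import nengo

input_vector = np.zeros(2)  # updated from messages sent by the Java side

with nengo.Network() as model:
    stim = nengo.Node(lambda t: input_vector, size_out=2)
    ens = nengo.Ensemble(n_neurons=100, dimensions=2)
    nengo.Connection(stim, ens)
    p_out = nengo.Probe(ens, synapse=0.01)

sim = nengo.Simulator(model)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 9999))
server.listen(1)
conn, _ = server.accept()           # the Java ABM connects here
stream = conn.makefile("rw")

for line in stream:                 # one newline-terminated JSON message per ABM time step
    msg = json.loads(line)          # e.g. {"input": [0.3, -0.1], "steps": 50}
    input_vector[:] = msg["input"]
    for _ in range(msg["steps"]):
        sim.step()                  # simulator state persists between messages
    out = sim.data[p_out][-1].tolist()
    stream.write(json.dumps({"output": out}) + "\n")
    stream.flush()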

The simulator object also holds any parameter you desire for the new agents.
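
For example, here’s a rough sketch of pulling built parameters out of a simulator so you could seed a new, slightly perturbed agent with them (which parameters you copy or mutate is entirely up to you):

import numpy as np
import nengo

with nengo.Network() as model:
    ens = nengo.Ensemble(n_neurons=100, dimensions=2)
    out = nengo.Node(size_in=2)
    conn = nengo.Connection(ens, out)

with nengo.Simulator(model) as sim:
    encoders = sim.data[ens].encoders       # shape (n_neurons, dimensions)
    gain = sim.data[ens].gain
    bias = sim.data[ens].bias
    decoders = sim.data[conn].weights       # decoders solved for this connection

# a "child" agent could then be built from perturbed copies of these,
# e.g. nengo.Ensemble(..., encoders=child_encoders)
child_encoders = encoders + np.random.normal(scale=0.01, size=encoders.shape)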

The pseudocode isn’t supposed to make a ton of sense on its own; I just wanted to show how it would be possible. Was it helpful? Would you like further details about Nengo’s capabilities, or would you like more tips on how to get started making models with Nengo? Any other questions?


Thank you Sean for the feedback!

All I know about Nengo is what I read in the “How to Build a Brain” book and the 25 tutorials provided. I have not explicitly studied reinforcement learning in Nengo, because my primary concern for the moment is whether it is suitable for my purposes. As for the learning rule, until now I was using MSTDPET (Reinforcement Learning Through Modulation of Spike-Timing-Dependent Synaptic Plasticity, Florian 2007), although PES also seemed appropriate.

To make it clear:
The agent-based model (ABM) will be a Java application with lots of agents (e.g. 500) that are born, live and die throughout the simulation. Each agent will have a discrete Nengo model which has to be created at its birth (based on the parameters of its parents’ model), accessed and stored at each time step, and maybe discarded at its death.

So in terms of the way Nengo will be called by the Java code, there will be discrete functions executed at arbitrary time points:

Called once
Function: Birth
Arguments: Model parameters
Return: Nengo model
Explanation: The Java function must call Nengo to build a model with the specified parameters and store it in the way that is most accessible for the subsequent function

That would be your

# build a model of some sort
model = nengo.Network()
...

# make a simulator object
sim = nengo.Simulator(model)

Called each timestep
Function: Decide
Arguments: Nengo model, Input from environment
Return: Updated Nengo model, Output to environment
Explanation: The Java function must call Nengo to load the stored model, modify the synapses based on the prediction error, provide the input, run the simulator, retrieve the output, and store the updated model

That would be your

while True:
    # get data from Java program somehow using inter-process communication
    # maybe a pipe or a shared file?
    data = get_data_from_java()
    model.node.input_data(data)
    # increment a time-step in the model
    sim.step()

I would imagine that the best way to achieve this is never to terminate the simulators and store them in files, but just to have 500(!) Nengo simulators running in the background and to access the right one each time an agent in the Java application has to act. But I don’t really know whether that would be possible. So we need to answer the following questions:

  1. Is it possible to have so many models active independently at the same
    time? Of course without the GUI interface, just as processes.
  2. Is it possible to pause them and resume them at any time? I guess the
    sim.step function does exactly that, right?
  3. Is it possible to retrieve data (output or parameters) at any time point
    and then resume from where they stopped?

As for the model itself, I would now appreciate any help/suggestion/idea on how to implement it in Nengo, as I am a total newbie to it.

I think that it would need:

  • Associative memory component with ongoing learning (Odor vector [10] to Taste vector [10])
  • Controlled associative memory component with learning based on prediction error (Taste vector and energy deficit [1] to predicted reward/punishment vector [2], modulated by prediction error [2])
  • Decision-making component (Odor vector [10], Taste vector [10], predicted reward/punishment [2] to goal-oriented actions [4 & blank]), where the actions are: feed, ascend the odor gradient (follow the smell of rewarding food), descend the odor gradient (avoid the smell of a predator), explore (try to find something rewarding to do!), and wait (no goal)
  • Controlled motor component (action to the elementary motor actions: move, turn right, turn left)

Thanks in advance

Panos

You can check out my profile and connect with me here:
https://www.linkedin.com/in/sakagiannis-panagiotis-16055647/

Is it possible to have so many models active independently at the same
time? Of course without the GUI interface, just as processes.

Definitely. You can start them as separate processes easily by running python model.py.
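
Another option, if 500 separate OS processes turns out to be awkward to drive from Java, is to keep many simulators alive inside a single Python process and step whichever one needs to act. Just a sketch (build_agent_model is a stand-in for however you actually build one agent’s network), and I haven’t tried it at that scale, so memory might become the limit:

import nengo

def build_agent_model():
    with nengo.Network() as net:
        net.stim = nengo.Node([0.0, 0.0])
        ens = nengo.Ensemble(n_neurons=100, dimensions=2)
        nengo.Connection(net.stim, ens)
        net.p_out = nengo.Probe(ens, synapse=0.01)
    return net

# one (network, simulator) pair per agent, kept alive for the agent's lifetime
agents = {}
for agent_id in range(10):          # or 500
    net = build_agent_model()
    agents[agent_id] = (net, nengo.Simulator(net))

# later, whenever e.g. agent 3 has to act:
net, sim = agents[3]
for _ in range(50):
    sim.step()
latest_output = sim.data[net.p_out][-1]

# when the agent dies
del agents[3]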

Is it possible to pause them and resume them at any time? I guess the sim.step function does exactly that, right?

Correct!

Is it possible to retrieve data (output or parameters) at any time point and then resume from where they stopped?

Yep. After each sim.step you can access the data created by the simulator using the sim.data object.
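
A minimal sketch of that pause / inspect / resume pattern:

import nengo

with nengo.Network() as model:
    stim = nengo.Node(0.5)
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, ens)
    p = nengo.Probe(ens, synapse=0.01)

sim = nengo.Simulator(model)

for _ in range(100):
    sim.step()
first_reading = sim.data[p][-1]    # inspect whenever you like ...

for _ in range(100):
    sim.step()                     # ... then resume exactly where you stopped
second_reading = sim.data[p][-1]   # sim.data[p] now holds all 200 steps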

As for making the actual agent and playing with it, I think @tcstewar might have a good starting point for you. I seem to recall him making a 2D agent doing a similar thing.
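
In the meantime, here’s a very rough sketch of what the “associative memory with ongoing learning” piece could look like using the PES rule. The dimensions, neuron counts and learning rate are placeholder guesses, not recommendations:

import nengo

with nengo.Network() as model:
    odor = nengo.Node([0.0] * 10)            # odor input from the environment
    taste_target = nengo.Node([0.0] * 10)    # the taste actually experienced

    odor_ens = nengo.Ensemble(n_neurons=500, dimensions=10)
    taste_ens = nengo.Ensemble(n_neurons=500, dimensions=10)
    error = nengo.Ensemble(n_neurons=500, dimensions=10)

    nengo.Connection(odor, odor_ens)

    # start from a blank mapping and let PES shape it online
    learned = nengo.Connection(
        odor_ens, taste_ens,
        function=lambda x: [0.0] * 10,
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # error = prediction - target; this signal drives the learning rule
    nengo.Connection(taste_ens, error)
    nengo.Connection(taste_target, error, transform=-1)
    nengo.Connection(error, learned.learning_rule)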


Some questions concerning the integration of Nengo with other Python processes.
Here is the pipeline:

1. We have a Python class that, when instantiated with some arguments, builds a specific Nengo network with a specific name (model1). The network always contains:

stim = nengo.Node(size_out=2)
response = nengo.Node(size_out=2)

We also build a simulator: model1_sim = nengo.Simulator(model1, dt=0.001).

2.

We repeat the first step 10 times building model2, model3 etc…

3. At arbitrary times (controlled by other Python processes), for a specific network (e.g. model5), we do the following:

a. We set the output of the stim Node externally, giving it a vector with 2 elements [x, y]:
model5.stim.output = [x, y]

b. We run the appropriate simulator for z timesteps:
model5_sim.run_steps(z)   # or call model5_sim.step() z times

c. We retrieve the output of the response Node externally, through a p_response probe:
model5_sim.data[p_response]

4. We move on and do the same with another network, until we cycle through all 10.

5. We once again run model5_sim, as in step 3.

Is this pipeline achievable? My main concern is this:
Is it possible to perform step 3a, i.e. set the output of a node externally, without resetting the network? We want to keep everything as it was at the end of a simulation step, apart from changing the stim output. If not, how could we achieve that?

Yes, it is possible to set a node’s output externally without resetting the network. I’ll make an example tomorrow!

In my case I want to provide the output of a node externally (and therefore provide input to the network) without resetting the network.
Obviously I can’t do that through a probe.

And it must be dynamic: I wouldn’t want to retrieve it from a file, but rather to provide it right before calling sim.step.

I also checked the procedure described in nengo.network.py, where it is explained how to use a separate class for reusable components of a network.
But even so, these separate components are incorporated into the network only once, when building it. How can I modify them once they are in the network, which has already started stepping?

I think the functionality I am looking for is something like an external slider, as in the GUI: it allows changing a node’s output in real time. How could we give this control to an external process?

Here’s the example:

import nengo
import matplotlib.pyplot as plt

class InputManager(object):
    """because we need to contain state, the easier way to do that in
    Python is to make a class"""

    def __init__(self):
        self.state = 0

    def get_input(self, modifyer):
        """you can modify the state value or over-write it here
        or you can just modify the state parameter directly"""
        print("Manage the input here.")
        self.state += modifyer

    def return_output(self, t):
        return self.state

im = InputManager()

with nengo.Network() as model:
    in_nd = nengo.Node(im.return_output)
    ens = nengo.Ensemble(n_neurons=500, dimensions=1)

    nengo.Connection(in_nd, ens)

    p_ens = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:
    for i in range(2000):
        sim.step()
        if i == 300:
            # modification via function argument
            im.get_input(0.5)
        elif i == 1000:
            # modification by overwriting a property
            im.state = 0.7

plt.plot(sim.trange(), sim.data[p_ens])
plt.show()

I tried to show how making a class gives you basically limitless control over what type of modification to the node input you want to make. Let me know if anything isn’t clear or needs further explanation.
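
P.S. Since your stim and response are 2-dimensional: the same pattern works unchanged with a vector-valued state. A quick sketch (the numbers are arbitrary):

import numpy as np
import nengo

class VectorInputManager(object):
    def __init__(self, dimensions):
        self.state = np.zeros(dimensions)

    def return_output(self, t):
        return self.state

vim = VectorInputManager(2)

with nengo.Network() as model:
    stim = nengo.Node(vim.return_output, size_out=2)
    ens = nengo.Ensemble(n_neurons=200, dimensions=2)
    nengo.Connection(stim, ens)
    p_ens = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.step()
    vim.state = np.array([0.3, -0.5])   # set externally at any time
    for _ in range(100):
        sim.step()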


Awesome! It is crystal clear and swift!
Thank you so much Sean, you’ve been a great help!
I’ll try to implement it right away.

Why is this code not working?
I need to build the network inside a method and return it.
It returns: NameError: name ‘p_ens’ is not defined

import nengo
import matplotlib.pyplot as plt

class InputManager(object):
    """because we need to contain state, the easier way to do that in
    Python is to make a class"""

    def __init__(self):
        self.state = (0)

    def get_input(self, modifyer):
        """you can modify the state value or over-write it here
        or you can just modify the state parameter directly"""
        print("Manage the input here.")
        self.state += modifyer

    def return_output(self, t):
        return self.state

im = InputManager()
def build_model():
    model = nengo.Network()
    with model:
        in_nd = nengo.Node(im.return_output)
        ens = nengo.Ensemble(n_neurons=500, dimensions=1)

        nengo.Connection(in_nd, ens)

        p_ens = nengo.Probe(ens, synapse=0.01)
        return model 
modelB = build_model ()
sim = nengo.Simulator(modelB)
with sim:
    for i in range(2000):
        sim.step()
        if i == 300:
            # modification via function argument
            im.get_input(0.5)
        elif i == 1000:
            # modification by overwriting a property
            im.state = 0.7
plt.plot(sim.trange(), sim.data[p_ens])
plt.show()

Hi @bagjohn,

I believe that you’re getting this error because the model (and p_ens) is defined inside a function, so you don’t have access to it outside that function. The way to get around this is to attach p_ens to model. So change the line where you define it to

model.p_ens = nengo.Probe(...)

and where you’re plotting to

plt.plot(sim.trange(), sim.data[modelB.p_ens])

With the changes it now runs for me!
Cheers,

edit: modified code attached

import nengo
import matplotlib.pyplot as plt

class InputManager(object):
    """because we need to contain state, the easier way to do that in
    Python is to make a class"""

    def __init__(self):
        self.state = (0)

    def get_input(self, modifyer):
        """you can modify the state value or over-write it here
        or you can just modify the state parameter directly"""
        print("Manage the input here.")
        self.state += modifyer

    def return_output(self, t):
        return self.state

im = InputManager()
def build_model():
    model = nengo.Network()
    with model:
        in_nd = nengo.Node(im.return_output)
        ens = nengo.Ensemble(n_neurons=500, dimensions=1)

        nengo.Connection(in_nd, ens)

        model.p_ens = nengo.Probe(ens, synapse=0.01)
        return model 

modelB = build_model ()
sim = nengo.Simulator(modelB)
with sim:
    for i in range(2000):
        sim.step()
        if i == 300:
            # modification via function argument
            im.get_input(0.5)
        elif i == 1000:
            # modification by overwriting a property
            im.state = 0.7
plt.plot(sim.trange(), sim.data[modelB.p_ens])
plt.show()