# Forcing a transform before a function

According to the documentation, when you specify both a function and a transform on a connection, the transform is applied after the function. Is there any way to force the reverse order? That is, I want a transform applied to my `pre`, whose output becomes the input to the function; the output of the function is then my connection to the `post`.

I can think of a couple of work-arounds, like inserting an interim neuron population, but I was hoping not to have to resort to that.

Perhaps you can put the transform inside the function itself?

Hmm, the transform is an operation in vector space, and the function is computed off of the activities of the neurons and takes you into that vector space. You could fold the transform into the function, but if you wanted to apply the transform first you'd be in neuron space, which probably isn't where you want to apply it?

Thanks for helping me out guys.

In my case, I’m using the transform to map from the neurons to a vector space and then I’m applying a function on that vector space, so I can’t just put the transform into the function, because functions can’t be applied to neuron objects.

So you are attempting to compute a function of the neural (spike) activity of an ensemble? In that case, you will need an intermediary ensemble.
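A quick way to see why, as a plain NumPy sketch (not the Nengo API; `T` and `g` here are made-up examples): decoding with a transform `T` gives a value that is linear in the activities, but applying a nonlinear function `g` afterwards makes the overall map nonlinear in the activities, so no single set of linear decoders can compute it in one connection.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 10))   # hypothetical transform: 10 neurons -> 2-D vector space

def g(y):
    return y ** 2              # some nonlinear function on that vector space

a1 = rng.uniform(0, 1, 10)     # two example activity vectors
a2 = rng.uniform(0, 1, 10)

# a -> g(T @ a) is not linear in a, so it cannot be a single linear decode:
lhs = g(T @ (a1 + a2))
rhs = g(T @ a1) + g(T @ a2)
assert not np.allclose(lhs, rhs)
```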

The decoders are applied to the neuron activities, but the function is specified entirely in vector space. So it’s easy to have a vector space transform before a function, simply by putting that linear transform at the beginning of the function specification. (i.e. anything a linear transform can do, a nonlinear function can also do. In fact, for ensemble->ensemble connections, we don’t really need the `transform` parameter, since you can always just tack that linear transform on to the end of your function. It’s just a shorthand.)
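To make the "tack the transform onto the end" point concrete, here's a plain NumPy sketch (the particular `T` and `f` are made-up examples): specifying `function=f, transform=T` computes `T @ f(x)`, which is identical to specifying the single function `g(x) = T @ f(x)` with no transform at all.

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [-1.0, 0.0]])    # example linear transform (hypothetical)

def f(x):
    return np.array([x[0] * x[1], x[0] ** 2])

def g(x):
    return T @ f(x)            # the linear transform folded onto the end of the function

x = np.array([0.5, -0.3])
assert np.allclose(T @ f(x), g(x))
```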

But this doesn’t sound like what @Seanny123 wants to do. I’m not quite following, so let me explain why I’m confused.

Normally, we have vector spaces A and B, and a function f(x) that maps from A to B. If we have an ensemble, then the ensemble encodes vectors in space A, and we solve for decoders that produce f(x) in space B. It sounds like @Seanny123 has a transform T that maps from neuron outputs to another vector space C, and a function g(y) that maps from C to B that he wishes to compute. But it’s not clear to me how this is going to work, since the eval points are in space A. I don’t see how we can do this unless we have a mapping (linear or nonlinear) from A to C, and if we do, then we’re back to something like what I described in the first paragraph. So what is it you want to do, exactly, @Seanny123? What space is your function defined in? No matter how you cut it, it seems to me to use the NEF the domain of your function has to be space A.
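For reference, the decoder solve described above looks like this in plain NumPy (toy rectified-linear tuning curves, not the actual Nengo solver): the eval points live in space A, the targets are f evaluated at those points in space B, and the decoders are the least-squares map from activities to targets. There is no slot in this setup for a function whose domain is some other space C.

```python
import numpy as np

rng = np.random.default_rng(0)

eval_points = rng.uniform(-1, 1, size=(200, 1))               # samples of space A
gains_enc = rng.normal(size=(1, 20))                          # toy encoders * gains
biases = rng.uniform(0, 1, size=20)
activities = np.maximum(0, eval_points @ gains_enc + biases)  # rectified-linear "neurons"

targets = eval_points ** 2                                    # f(x), in space B

# least-squares decoders: activities @ d ~= f(eval_points)
d, *_ = np.linalg.lstsq(activities, targets, rcond=None)
err = np.mean((activities @ d - targets) ** 2)
assert err < np.mean(targets ** 2)                            # better than decoding zero
```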

Yeah, for sure. Sorry, the distinction I was shooting for is that the function lives in the decoders, while we normally consider the transform to be outside the decoders, `w_ij = d_i * T * e_j`, so in that sense you can’t do `w_ij = T * d_i * e_j` with `T` in vector space.
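Spelling out those shapes in NumPy (the dimensions and index convention here are made up for illustration): `T` is sandwiched in vector space between the pre ensemble's decoders and the post ensemble's encoders, and pulling it in front of the decoders doesn't even type-check, because on that side of the decoders you'd need a neuron-space matrix of shape `(n_pre, n_pre)`.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 30, 40
d_pre, d_post = 2, 3                    # vector-space dimensionalities (made-up)

D = rng.normal(size=(d_pre, n_pre))     # decoders of pre: columns are d_i
E = rng.normal(size=(n_post, d_post))   # encoders of post: rows are e_j
T = rng.normal(size=(d_post, d_pre))    # vector-space transform

W = E @ T @ D                           # full weights: each entry is e_j . T . d_i
assert W.shape == (n_post, n_pre)

# Putting the vector-space T before the decoders is shape-incompatible:
try:
    _ = E @ (D @ T)                     # (d_pre, n_pre) @ (d_post, d_pre) fails
except ValueError:
    pass
```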