Hi!

I’m trying to use gyrus to compute inverse kinematics using the Jacobian matrix.

In order to use the Jacobian, I need (among other things) `np.linalg.pinv`, but gyrus doesn’t support it, so I adapted the example from here in my program.

I don’t think my program is too complicated, but it is taking FOREVER to run a 1-second simulation (around 40 minutes).

I’ve tried reducing the number of neurons from 1000, but the result is far less accurate and it still takes a long time.

A bit of the code:

I define the end position as A, and the angles that reach that point as q:

```
A = np.array([0.47833606, 0.46394772, 0.39643308], dtype='float')
q = np.array([-0.69066467, -0.20034368, 0.28437363, 0.00342465, 0.10304996], dtype='float')
```

Then I use the function `gyrus_calc` to calculate **q_hat** from A alone, using the Jacobian matrix:

```
def gyrus_calc(A, q_hat, dt, synapse=None):
    """Compute q according to the Jacobian matrix."""
    J = gyrus.stimuli(calc_J(q_hat)).configure(
        n_neurons=1000,
        input_magnitude=2,
        seed=0,
    )
    # Initial guess for the pseudoinverse; ideally np.linalg.pinv(calc_J(q_hat)).
    J_pinv_hat = gyrus.stimuli(np.zeros_like(calc_J(q_hat)).T)
    J_pinv = gyrus_inverse(J, J_pinv_hat, dt)
    return q_hat.integrate_fold(
        integrand=lambda q_hat: dt * J_pinv.dot(A - calc_T(q_hat)) / 1e-3,
        synapse=synapse,
    )
```
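As a sanity check on the math itself, outside the neural simulation, the same Jacobian-pseudoinverse update can be run in plain NumPy. This is a minimal sketch with a toy 2-link planar arm standing in for my `calc_T`/`calc_J`; the link lengths, target, starting angles, and step size are all made up for the example:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # hypothetical link lengths

def calc_T(q):
    """Forward kinematics of a toy 2-link planar arm."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def calc_J(q):
    """Jacobian of calc_T with respect to q."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

A = calc_T(np.array([0.7, 0.4]))  # a reachable target position
q_hat = np.array([0.1, 0.5])      # initial guess, away from the singular q = 0

# Same update rule as the integrand above: q += step * pinv(J(q)) @ (A - T(q)).
for _ in range(200):
    q_hat = q_hat + 0.5 * np.linalg.pinv(calc_J(q_hat)) @ (A - calc_T(q_hat))

print(np.allclose(calc_T(q_hat), A, atol=1e-6))  # → True
```

This converges in a couple hundred iterations, so the slowness seems to come from the neural simulation rather than the update rule itself.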

Here, `gyrus_inverse` uses the `gradient` function to implement `np.linalg.inv` by gradient descent:

```
def gradient(A, M):
    """Compute the gradient of M approximating inv(A)."""
    I = np.eye(A.shape[1])
    return 2 * (M @ A - I) @ A.T


def gyrus_inverse(J, J_pinv_hat, dt, synapse=None):
    """Compute the inverse of J by gradient descent from J_pinv_hat."""
    return J_pinv_hat.integrate_fold(
        integrand=lambda J_pinv_hat: -dt * gradient(J, J_pinv_hat) / 1e-3,
        synapse=synapse,
    )
```
```
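For reference, this gradient-descent inverse behaves as expected in plain NumPy. A minimal sketch, where the test matrix and the step size 0.1 are my own assumptions:

```python
import numpy as np

def gradient(A, M):
    """Gradient of ||M @ A - I||_F^2 with respect to M (same rule as above)."""
    I = np.eye(A.shape[1])
    return 2 * (M @ A - I) @ A.T

# Hypothetical well-conditioned test matrix.
rng = np.random.default_rng(0)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))

# Start from zeros, like J_pinv_hat, and descend the gradient.
M = np.zeros_like(A).T
for _ in range(2000):
    M -= 0.1 * gradient(A, M)  # 0.1 is an assumed step size

print(np.allclose(M, np.linalg.inv(A)))  # → True
```

Note the descent only converges when the step size is small relative to the largest singular value of A squared; a poorly conditioned Jacobian will need many more iterations.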

To run the simulation, I `run` **op**, which I defined as:

```
op = gyrus_calc(
    A=gyrus.stimuli(A).configure(
        n_neurons=1000,
        input_magnitude=2,
        seed=0,
    ),
    q_hat=gyrus.stimuli(np.zeros_like(q)),
    dt=0.01,
).filter(0.2)
```

**Does any of this justify a 40-minute run?**

Thanks a lot,

Yuval