
Get the 3x3x3 (27 element) trifocal tensor from two 3x4 camera matrices.

T[i,j,k] = A[j,i]*B[k,4] - A[j,4]*B[k,i]
Python
from numpy import zeros, outer, array

def tensor(a,b):
    '''Returns the 3x3x3 (27 element) trifocal tensor given two 3x4 camera matrices.
    '''
    T = zeros((3,3,3))
    for i in range(3):
        T[i] = outer(a[:,i],b[:,3]) - outer(a[:,3],b[:,i])
    return T

# OR, given a and b as 3x4 arrays, as a one-liner:

T = array([(outer(a[:,i],b[:,3]) - outer(a[:,3],b[:,i])) for i in range(3)])
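
A quick usage sketch (rand just fabricates example camera matrices):

from numpy import array, outer, allclose
from numpy.random import rand

a = rand(3,4)  # stand-in 3x4 camera matrices
b = rand(3,4)
T = tensor(a, b)
T2 = array([(outer(a[:,i],b[:,3]) - outer(a[:,3],b[:,i])) for i in range(3)])
assert allclose(T, T2)  # the function and the one-liner agree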

11 comments

ltp 11 years, 10 months ago

Hi, that might not change anything here (since the matrices are very small), but explicit "for" loops are really slow in Python. If you can, switch to some numpy array representation to speed up your code.
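
For reference, one loop-free way to write the recipe's computation, as a sketch using numpy's einsum (the rand inputs are stand-ins for the 3x4 camera matrices):

import numpy as np

a = np.random.rand(3,4)  # stand-in 3x4 camera matrices
b = np.random.rand(3,4)

# T[i,j,k] = a[j,i]*b[k,3] - a[j,3]*b[k,i], with no explicit Python loop
T = np.einsum('ji,k->ijk', a[:,:3], b[:,3]) - np.einsum('j,ki->ijk', a[:,3], b[:,:3])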

I actually tried to make some working code with a and b, the input matrices:

A = mat(a[:,:3].reshape(9,1))
B = mat(b[:,:3].reshape(1,9))
A3 = mat(a[:,3]).T
B3 = mat(b[:,3])
T = array(A*B3 - (A3*B).T).reshape(3,3,3)

The speed-up is not that great (my code IS dirty, this is just to introduce matrix notation), BUT this algorithm gives different outputs from yours. I mean, not different placements of the numbers, different numbers. My algorithm might be wrong. I'll investigate when I am less busy.

J W J (author) 11 years, 10 months ago

Thanks for the comment. The problem is that this is a dynamic summation (can I call it that?) and I don't think it can be done without a few for loops. I tried to speed it up like this:

T = zeros((3,3,3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            T[i,j,k] = sum(outer(a[:j+1,i],b[:k+1,3]) - outer(a[:j+1,3],b[:k+1,i]))

...same result, but this actually took twice as long when I timed it with "for t in range(10000):"!

ltp 11 years, 10 months ago

Come on! Don't be silly: matrices ARE sums and multiplications. This is the whole point! Each time you write a simple sum, you can write it as a matrix inner or outer multiplication.

That said, I want to lay more emphasis on the two points I have raised:

1) Concerning code optimization, your code should do just fine in C or in FORTRAN. This is the way it works there (unfolding matrices into sums of sums). However, this is highly inefficient in pure Python. For example, you can consider full "i" columns instead of single elements. In other words,

T = zeros((3,3,3))
for j in range(3):
    for k in range(3):
        T[j,k] = sum(outer(a[:j+1,:],b[:k+1,3]) - outer(a[:j+1,3],b[:k+1,:]))

with a few more tweaks, should work perfectly well. However, this was NOT the main point.

2) I don't know anything about trifocal tensors. I first rewrote your program as an equation (to do some algebra and reduce it to a simple matrix multiplication). The transformations "a" and "b" had to undergo were huge. So I assumed you made a mistake (so easy when you unfold everything) and got back to the real mathematical expression given there: http://en.wikipedia.org/wiki/Trifocal_tensor#Correlation_slices

Please tell me if I am doing something wrong, but to me, the Ti equation looks like this in Python:

T[i] = a[:,i]*b[:,3].T - a[:,3]*b[:,i].T

Doing a simple loop for i in [0, 1, 2] gives your three Ti. What I was saying is that these matrices are different from yours. Actually, yours is flipped: the first column of my T1 (T[0]) is composed of the 1st elements of your T1, T2, T3 matrices respectively. But I might be wrong.

My full code is :

def tensor(a, b):
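    # a and b must be numpy matrix objects, so that * is matrix multiplication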
    T1 = a[:,0]*b[:,3].T - a[:,3]*b[:,0].T
    T2 = a[:,1]*b[:,3].T - a[:,3]*b[:,1].T
    T3 = a[:,2]*b[:,3].T - a[:,3]*b[:,2].T
    return [T1, T2, T3]
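
Minimal usage sketch for the version above (mat() converts plain arrays to numpy matrix objects; the rand values are just stand-ins):

from numpy import mat
from numpy.random import rand

a = mat(rand(3,4))  # stand-in camera matrices
b = mat(rand(3,4))
T1, T2, T3 = tensor(a, b)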

On my computer, it runs 6 times faster than your algorithm. For such small matrices, this is quite surprising (I would have guessed only a 2x factor). But as your matrices' size increases, this factor should rise.

To sum up: I am not sure your algorithm gives you the expected results, AND please keep in mind that numpy arrays/matrices are faster than plain Python, AND that any algorithm using sums and multiplications can always be written with matrices.

As an exercise, you could try to reshape your input matrices in order to find "c" and "d" defined as:

c[i]*d[i] = a[:,i]*b[:,3].T - a[:,3]*b[:,i].T

and finally find "e" and "f" derived from "a" and "b" as:

e*f = [c[0]*d[0], c[1]*d[1], c[2]*d[2]] = [T[0], T[1], T[2]] = T

Cheers

ltp 11 years, 10 months ago

Please don't be offended if my comments are a little too harsh. I am not criticizing you, only this little tiny piece of your work. And I hope you will see the simplicity of matrix notation even in Python, and that it will give you the motivation to use it in the future.

J W J (author) 11 years, 10 months ago

I think one of us misunderstands the original problem. Let me explain my understanding in more detail: the equation shown on the wiki page is in tensor notation, which means there is more going on there than meets the eye. I found a clearer version of the equation, written similar to this (let S stand for an uppercase sigma, i.e. summation):

T[i,j,k] = Sj(Sk( A[j,i]*B[k,4] - A[j,4]*B[k,i] ))

Where Sj is the summation of zero up to j, Sk likewise from zero up to the k being calculated.

So T[0,0,0], the first entry, is just A[0,0]*B[0,4] - A[0,4]*B[0,0]

T[0,1,0] = A[0,0]*B[0,4] - A[0,4]*B[0,0] + A[1,0]*B[0,4] - A[1,4]*B[0,0]
T[0,2,1] = A[0,0]*B[0,4] - A[0,4]*B[0,0] + A[1,0]*B[0,4] - A[1,4]*B[0,0] + A[2,0]*B[0,4] - A[2,4]*B[0,0] 
          + A[0,0]*B[1,4] - A[0,4]*B[1,0] + A[1,0]*B[1,4] - A[1,4]*B[1,0] + A[2,0]*B[1,4] - A[2,4]*B[1,0]

And so forth. This is how I interpreted the tensor equation and why I have j only going up to the tj being calculated in my code and why I use "+=".
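
As a side note, that running-sum reading can itself be written compactly; a sketch of what it computes (zero-based columns, stand-in inputs):

import numpy as np

a = np.random.rand(3,4)  # stand-in 3x4 camera matrices
b = np.random.rand(3,4)

# un-summed terms: G[i,j,k] = a[j,i]*b[k,3] - a[j,3]*b[k,i]
G = np.einsum('ji,k->ijk', a[:,:3], b[:,3]) - np.einsum('j,ki->ijk', a[:,3], b[:,:3])
# running sums over j, then over k, reproduce the "+=" interpretation above
T = G.cumsum(axis=1).cumsum(axis=2)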

ltp 11 years, 10 months ago

Well well well, one problem after the other. Most important first:

== The Algorithm ==

This:

T[i,j,k] = Sj(Sk( A[j,i]*B[k,4] - A[j,4]*B[k,i] ))

is by no way that:

T[0,1,0] = A[0,0]*B[0,4] - A[0,4]*B[0,0] + A[1,0]*B[0,4] - A[1,4]*B[0,0]
T[0,2,1] = A[0,0]*B[0,4] - A[0,4]*B[0,0] + A[1,0]*B[0,4] - A[1,4]*B[0,0] + A[2,0]*B[0,4] - A[2,4]*B[0,0] 
    + A[0,0]*B[1,4] - A[0,4]*B[1,0] + A[1,0]*B[1,4] - A[1,4]*B[1,0] + A[2,0]*B[1,4] - A[2,4]*B[1,0]

Please do it on a piece of paper. This (just set i equal to 0 to obtain T1):

T[0,j,k] = Sj(Sk( A[j,0]*B[k,3] - A[j,3]*B[k,0]))

Corresponds to this code in python:

T1 = zeros((3,3))
for j in [0, 1, 2]:
    for k in [0, 1, 2]:
        T1[j,k] = A[j,0]*B[k,3] - A[j,3]*B[k,0]

You just have to add a loop rolling on "i" to obtain T2 and T3.
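
Spelled out, that loop over "i" would look like this (a sketch, assuming A and B are plain 3x4 numpy arrays):

from numpy import zeros

T = zeros((3,3,3))
for i in range(3):  # i selects the slice: T[0]=T1, T[1]=T2, T[2]=T3
    for j in range(3):
        for k in range(3):
            T[i,j,k] = A[j,i]*B[k,3] - A[j,3]*B[k,i]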

"Sum" expression in the mathematical world IS exactly one uniq "for" loop in python

== Matrix notation ==

In the world of compiled languages (such as C or FORTRAN), it is perfectly fine to write:

T1 = zeros((3,3))
for j in [0, 1, 2]:
    for k in [0, 1, 2]:
        T1[j,k] = A[j,0]*B[k,3] - A[j,3]*B[k,0]

You'll obtain something fast and efficient.

However, in Python's world, this is highly inefficient. You could either use Cython or Numpy. You chose Numpy (which is a very good choice). But then, use matrix notation. It is much more powerful, shorter and less error-prone.

ltp 11 years, 10 months ago

By the way, that last algorithm gives the same results as my matrix notation in my first answer. This is obviously expected: your notation (with the sums) is the unfolded matrix notation, and it is the strict mathematical equivalent of the matrix notation.

I understand that you might not be comfortable with matrices, but trust me when I say that matrices exist to simplify your life. See, you made a mistake and wrote in 12 lines what can be written in 5 lines and runs 6 times faster :)

ltp 11 years, 10 months ago

Ok, I reread the thread from the beginning.

The natural way to write a tensor is to use matrix notation, because tensors and matrices share the same basic operations (for instance, applying a tensor to a set of coordinates is equivalent to a matrix multiplication of the tensor with the set of coordinates).

A matrix multiplication can, equivalently, be written as a sum of a sum (along lines and columns, actually). This is the notation that you find simpler. However, you misunderstood it: a sum (sigma) sign is to be understood as a simple "for" loop. Nested sums should be understood as nested "for" loops.

You actually do something really strange in the first version of your program: you make it flow through 5 loops. But you don't have 5 sums in the original mathematical expression...

Even worse, the sum (unfolded) notation is not efficient in Python. And numpy does all the unfolding work for you at nearly C speed. Therefore, I would really recommend using matrix notation, which does all this work internally.

Just one thing you have to pay attention to: if you use Python's multiplication operator, make sure the input data are numpy matrices (as in mat(A)*mat(B)). If you use typical numpy arrays, make sure you multiply using "numpy.dot(A, B)". Numpy will take care of the rest.
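
To illustrate that difference with made-up values (a small sketch):

from numpy import mat, dot, array

A = array([[1., 2.], [3., 4.]])
B = array([[5., 6.], [7., 8.]])

print(mat(A) * mat(B))  # matrix product: [[19. 22.] [43. 50.]]
print(dot(A, B))        # the same matrix product, for plain ndarrays
print(A * B)            # element-wise product: [[5. 12.] [21. 32.]] -- a common pitfall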

If you need more information, feel free to ask.

bryan_s 11 years, 10 months ago

I might have found what I think is not right in what you wrote...

You wrote

T[i,j,k] = Σj( Σk( A[j,i]*B[k,4] - A[j,4]*B[k,i] ))

Actually, it would more be:

T[i,j,k] = Σj Σk  A[j,i]*B[k,3] - A[j,3]*B[k,i]  # no need for parentheses; the 4th column of the matrix is index 3 in Python

And here is your main conceptual error: please tell me who gave this equation to you, because I am afraid it is not quite right. Someone mixed things up! :)

The real equation (from the matrix notation given by ltp, and my courses at the Uni) is:

T[i, j, k] = A[j,i]*B[k,3] - A[j,3]*B[k,i]

The sums were likely put there to tell you that you have to do all the combinations of i, j, k. I suspect this is a remnant of a more compact notation.

Thus, in Python (if you don't want to use Numpy's matrix notation):

def tensor(A, B):
    A = array(A).reshape(3,4)  # make sure you have a 3x4 array, not a flat one
    B = array(B).reshape(3,4)

    T = zeros((3,3,3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                T[i,j,k] = A[j,i]*B[k,3] - A[j,3]*B[k,i]
    return T

Can you understand the mistake? A sum is not necessarily a for loop, and a for loop is not necessarily a sum (@ltp). Using the Numpy matrix notation ltp implemented gives me the same results (and by the way, ltp's implementation is right).

If you don't agree, please tell me, but I assure you that these sums actually represent the matrix multiplication between A and B that got mixed up in your notation. I hope this will help you.
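
For anyone following along, a quick numerical cross-check (a sketch; tensor_loops is the triple-loop version above and tensor_mat is ltp's matrix version, renamed here so the two can coexist):

from numpy import array, zeros, mat, allclose
from numpy.random import rand

def tensor_loops(A, B):
    # the corrected triple loop from this comment
    T = zeros((3,3,3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                T[i,j,k] = A[j,i]*B[k,3] - A[j,3]*B[k,i]
    return T

def tensor_mat(a, b):
    # ltp's matrix version; a and b must be numpy matrix objects
    return [a[:,i]*b[:,3].T - a[:,3]*b[:,i].T for i in range(3)]

a = rand(3,4)
b = rand(3,4)
assert allclose(tensor_loops(a, b), array(tensor_mat(mat(a), mat(b))))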

@ltp: This guy was right all along with what was given to him. Please explain a bit more instead of repeating things: that won't help.

bryan_s 11 years, 10 months ago

Oh, and a little something to finish convincing you (I hope :D).

If you take my equation:

T[i, j, k] = A[j,i]*B[k,3] - A[j,3]*B[k,i]

you end up with one scalar to put in one matrix field.

The function you wrote is not

T[i,j,k] = Σj Σk  A[j,i]*B[k,3] - A[j,3]*B[k,i]

it is

T[i,tj,tk] = Σ(j=0->tj) Σ(k=0->tk)  A[j,i]*B[k,3] - A[j,3]*B[k,i]

which was given nowhere. You had to make up tj and tk to give your equation a meaning (I am guessing, since it was not written).

J W J (author) 11 years, 9 months ago

Thanks for the response Bryan. This was the first time I've looked at 'tensor notation' and I guess I expected it to be something more complicated than it actually is. Contrary to what ltp assumed, I'm very familiar with numpy array operations, I just didn't know how to interpret the original equation.

Created by J W J on Thu, 21 Jun 2012 (MIT)