About whitenlearn(X, qidxs, pidxs) #74

@michaeltian108

Description

Hi, I've been carefully reading your paper, your code, and your major references, all of which I found clear and systematic. I really enjoyed them. However, I'm a bit confused by the part of the code that does learnable whitening:

    # excerpt from whitenlearn(X, qidxs, pidxs); as far as I can tell,
    # `cholesky` is the small helper defined in the same whiten.py file
    m = X[:, qidxs].mean(axis=1, keepdims=True)  # mean of the query descriptors
    df = X[:, qidxs] - X[:, pidxs]               # query-minus-positive differences
    S = np.dot(df, df.T) / df.shape[1]
    P = np.linalg.inv(cholesky(S))
    df = np.dot(P, X - m)
    D = np.dot(df, df.T)
    eigval, eigvec = np.linalg.eig(D)
    order = eigval.argsort()[::-1]               # sort eigenpairs, largest eigenvalue first
    eigval = eigval[order]
    eigvec = eigvec[:, order]

    P = np.dot(eigvec.T, P)
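
(For context, my understanding is that the function then returns m and P, which are consumed roughly as below; whitenapply and the call pattern are my reading of whiten.py, not something quoted above.)

    # Hypothetical usage, assuming the whitenapply helper from the same file:
    # X is a D x N matrix with one descriptor per column;
    # qidxs/pidxs index matching (query, positive) pairs.
    m, P = whitenlearn(X, qidxs, pidxs)
    X_lw = whitenapply(X, m, P)  # computes P @ (X - m), then L2-normalizes columns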

First of all, do S and D refer to the Cs and Cd you mention in the paper?
Everything makes sense to me up to line 4, where a Cholesky decomposition is taken (P = np.linalg.inv(cholesky(S))). From there I lose track of what exactly P and D refer to, and I cannot see why the P computed on the last line is the learned projection matrix.
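
To make the second question concrete: my current guess about line 4 is that if cholesky(S) returns a lower-triangular L with S = L @ L.T, then P = inv(L) satisfies P @ S @ P.T = I, i.e. P whitens the space with respect to S. A minimal runnable check of that reading (my own sketch, not from the repo):

    import numpy as np

    # Sanity check of my guess: the inverse Cholesky factor whitens w.r.t. S.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    S = A @ A.T + 5 * np.eye(5)   # a positive-definite stand-in for Cs
    L = np.linalg.cholesky(S)     # lower-triangular, S = L @ L.T
    P = np.linalg.inv(L)
    print(np.allclose(P @ S @ P.T, np.eye(5)))  # True

If that is right, df = P @ (X - m) would be the centered descriptors in a basis where matching-pair differences have identity covariance, and D would be their scatter in that whitened space; but I am not sure whether that makes D the Cd of the paper, since it is computed over all descriptors rather than over explicit non-matching pairs.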

I referred to the original paper and googled Cholesky decomposition for more details, but that didn't help much.
Could you please briefly explain what each line from line 4 onward does? My attempt at writing the final projection in the paper's notation is below; please correct me if it is off.
Thank you very much for your time!
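
    P = \operatorname{eig}\!\left( C_S^{-1/2}\, C_D\, C_S^{-1/2} \right)^{\top} C_S^{-1/2},
    \qquad \hat{x} = P\,(\bar{x} - m)

with the code seemingly substituting the inverse Cholesky factor of S for the symmetric square root C_S^{-1/2}.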
@filipradenovic
