Description
Hi, I've been carefully reading your paper, your code, and your main references, all of which I found clear and systematic. I really enjoyed them. However, I got a bit confused when I reached the part of the code that does learnable whitening:
```python
m = X[:, qidxs].mean(axis=1, keepdims=True)
df = X[:, qidxs] - X[:, pidxs]
S = np.dot(df, df.T) / df.shape[1]
P = np.linalg.inv(cholesky(S))
df = np.dot(P, X-m)
D = np.dot(df, df.T)
eigval, eigvec = np.linalg.eig(D)
order = eigval.argsort()[::-1]
eigval = eigval[order]
eigvec = eigvec[:, order]
P = np.dot(eigvec.T, P)
```
First of all, do S and D refer to the Cs and Cd mentioned in your paper?
Everything makes sense to me until line 4, where a Cholesky decomposition is called. From there I start to lose track of what exactly P and D refer to, and I don't understand how the P in the last line ends up being the learned projection matrix.
I went back to the original paper and looked up the Cholesky decomposition for more details, but that didn't help much.
Could you please briefly explain what each line does, starting from line 4? To make the question concrete, I've sketched my tentative reading below.
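Here is a small, self-contained sketch of how I currently read lines 4 onward, checked on made-up data. The toy shapes and indices, and the assumption that your `cholesky` helper returns the lower-triangular factor (as `np.linalg.cholesky` does), are my own guesses, so please correct anything that is off:

```python
import numpy as np

# Made-up toy setup, only to check my reading of the code; I assume the real X
# holds one descriptor per column and qidxs/pidxs index matching query/positive columns.
rng = np.random.RandomState(0)
d, n = 8, 200
X = rng.randn(d, n)
qidxs = np.arange(0, 50)
pidxs = np.arange(50, 100)

# Lines 1-3: mean of query descriptors and covariance of positive-pair differences
m = X[:, qidxs].mean(axis=1, keepdims=True)
df = X[:, qidxs] - X[:, pidxs]
S = np.dot(df, df.T) / df.shape[1]

# Line 4, my guess: if S = L L^T with L lower-triangular, then P = L^-1 whitens S
L = np.linalg.cholesky(S)
P = np.linalg.inv(L)
print(np.allclose(P @ S @ P.T, np.eye(d)))           # True -> P S P^T = I

# Lines 5-11, my guess: eigendecomposition of the whitened data covariance D,
# so the final projection both whitens S and decorrelates the (centered) data
df = np.dot(P, X - m)
D = np.dot(df, df.T)
eigval, eigvec = np.linalg.eigh(D)                    # eigh since D is symmetric; your code uses eig
order = eigval.argsort()[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
P_final = np.dot(eigvec.T, P)

print(np.allclose(P_final @ S @ P_final.T, np.eye(d)))                           # still whitens S
print(np.allclose(P_final @ (X - m) @ (X - m).T @ P_final.T, np.diag(eigval)))   # diagonalizes D
```

Both checks print True on this toy data, but of course that does not tell me whether my interpretation matches what you intended.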
Thank you very much for your time!
@filipradenovic