Tuesday, September 8, 2015

Second Order Collaborative Filtering: Playing with latent feature dimension

So, after playing around with things, I find that increasing the dimension of the feature vectors improves the training fit substantially. This is unsurprising, since the winner of the Netflix contest used a very high-dimensional representation of the feature vectors $\vec{x} \in \mathbb{R}^D$, with $D \approx 1000$. Even with the fairly high regularization parameter $\lambda = 1$ from the last post, I get the following results for $D = 200$:

As you can see, we get a much tighter regression fit on the given ratings matrix $Y_{ij}$, at the cost of extra computation. Inverting the Hessian of the cost function (which, thank goodness, turns out to be only $D \times D$, because the other degrees of freedom decouple) takes a great deal of time for large $D$, so we are left with a trade-off between goodness of fit and computation time.
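To make the notation concrete, here is a minimal NumPy sketch of the kind of regularized squared-error cost being fit, together with one second-order sweep over the user vectors. The function names, the 0/1 mask $R$ of observed ratings, and the exact update rule are my own illustrative choices, not necessarily the updates from the earlier posts; the point is only that the Hessian with respect to a single user's vector is just $D \times D$, so each user reduces to an independent $D \times D$ solve (and likewise for the item vectors).

```python
import numpy as np

def cf_cost(X, Theta, Y, R, lam):
    """Regularized squared-error cost for collaborative filtering.

    X     : (n_items, D) latent item features
    Theta : (n_users, D) latent user parameters
    Y     : (n_items, n_users) ratings matrix
    R     : (n_items, n_users) 0/1 mask of observed ratings
    lam   : regularization strength (lambda)
    """
    E = (X @ Theta.T - Y) * R                   # errors on observed entries only
    return 0.5 * np.sum(E**2) + 0.5 * lam * (np.sum(X**2) + np.sum(Theta**2))

def update_theta(X, Y, R, lam):
    """One second-order sweep over the user vectors.

    For this cost, the Hessian with respect to a single user's vector
    theta_j decouples into the D x D block
        H_j = X_j^T X_j + lam * I,
    where X_j stacks the features of the items user j has rated, so each
    user's update is an independent D x D solve.
    """
    D = X.shape[1]
    n_users = Y.shape[1]
    Theta = np.zeros((n_users, D))
    I = np.eye(D)
    for j in range(n_users):
        rated = R[:, j].astype(bool)            # items rated by user j
        Xj = X[rated]                           # (n_j, D) features of those items
        H = Xj.T @ Xj + lam * I                 # D x D Hessian block
        g = Xj.T @ Y[rated, j]                  # linear term for this user
        Theta[j] = np.linalg.solve(H, g)        # O(D^3) per user: the source of the trade-off
    return Theta
```

The $O(D^3)$ solve per user (and per item) is exactly what makes large $D$ expensive here.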

This algorithm has been a second-order "batch" gradient descent, taking in all of the data at once. It will be interesting to see how things can be made incremental, or "online", so that the data are taken in bit by bit and our matrices $\mathbf{X}_{il}$, $\mathbf{\theta}_{jl}$ are updated as each rating arrives.
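For comparison, a first-order online update already has the flavor I have in mind: each incoming rating touches only one row of $\mathbf{X}$ and one row of $\mathbf{\theta}$. Below is a hedged sketch (plain stochastic gradient descent with hypothetical names, not a second-order rule):

```python
import numpy as np

def online_step(x_i, theta_j, y_ij, lam, eta):
    """Update the latent vectors for item i and user j from a single rating.

    x_i, theta_j : length-D latent vectors
    y_ij         : the observed rating
    lam          : regularization strength
    eta          : learning rate
    """
    err = x_i @ theta_j - y_ij                                # prediction error on this rating
    x_new = x_i - eta * (err * theta_j + lam * x_i)           # gradient step in the item vector
    theta_new = theta_j - eta * (err * x_i + lam * theta_j)   # gradient step in the user vector
    return x_new, theta_new
```

Both vectors are updated from the old values of the other, so the step order does not matter within a single rating; whether the second-order structure can be kept in such an incremental scheme is the open question.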
