# Tensor Methods for Nonlinear Matrix Completion

```bibtex
@article{Ongie2021TensorMF,
  title   = {Tensor Methods for Nonlinear Matrix Completion},
  author  = {Greg Ongie and Laura Balzano and Daniel L. Pimentel-Alarc{\'o}n and Rebecca M. Willett and Robert D. Nowak},
  journal = {SIAM J. Math. Data Sci.},
  year    = {2021},
  volume  = {3},
  pages   = {253--279}
}
```

In the low-rank matrix completion (LRMC) problem, the low-rank assumption means that the columns (or rows) of the matrix to be completed are points on a low-dimensional linear algebraic variety. This paper extends this thinking to cases where the columns are points on a low-dimensional nonlinear algebraic variety, a problem we call Low Algebraic Dimension Matrix Completion (LADMC). Matrices whose columns belong to a union of subspaces are an important special case. We propose an LADMC algorithm…
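The tensorization idea behind LADMC can be illustrated with a small numerical sketch. This is not the paper's algorithm (which must also handle missing entries); it only shows why lifting helps: columns drawn from a union of four lines in R^3 form a full-rank 3 x 40 matrix, so LRMC assumptions fail, but after mapping each column to its degree-2 monomials the lifted matrix is low-rank. The `lift` helper is illustrative notation, not from the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

def lift(X):
    # Map each column x to its degree-2 monomials x_i * x_j, i <= j.
    # For d = 3 this gives D = 6 lifted coordinates per column.
    d = X.shape[0]
    idx = list(combinations_with_replacement(range(d), 2))
    return np.array([[x[i] * x[j] for x in X.T] for i, j in idx])

d, K, n_per = 3, 4, 10
lines = rng.standard_normal((d, K))             # K random 1-D subspaces in R^d
X = np.hstack([np.outer(lines[:, k], rng.standard_normal(n_per))
               for k in range(K)])              # 40 columns from a union of 4 lines

# A column t*u_k lifts to t^2 * lift(u_k), so the lifted columns span
# only K directions even though X itself is full-rank.
print(np.linalg.matrix_rank(X))        # 3 -> full rank: LRMC cannot exploit structure
print(np.linalg.matrix_rank(lift(X)))  # 4 -> low rank (K = 4) in the 6-dim lifted space
```

The proposed LADMC approach applies standard LRMC machinery to such a lifted (tensorized) representation of the partially observed data.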


#### 9 Citations

Local low-rank approach to nonlinear matrix completion

- Computer Science
- EURASIP J. Adv. Signal Process.
- 2021

This paper proposes a new method for obtaining the solution by minimizing the rank of the submatrix without transforming the target matrix, so as to obtain high estimation accuracy even when the number of columns is small.

Nonlinear matrix recovery using optimization on the Grassmann manifold

- Computer Science, Mathematics
- ArXiv
- 2021

This work investigates the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces, an algebraic variety or grouped in clusters and proposes two sets of algorithms, one arising from Riemannian optimization and the other as an alternating minimization scheme.

Polynomial Matrix Completion for Missing Data Imputation and Transductive Learning

- Computer Science, Mathematics
- AAAI
- 2020

It is shown that the complete matrix of minimum intrinsic dimension can be identified by minimizing the rank of the matrix in a high dimensional feature space by developing a new formulation using the kernel trick together with a new relaxation of the rank objective.

Classifying and Comparing Approaches to Subspace Clustering with Missing Data

- Computer Science
- 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
- 2019

This work organizes the existing methods for subspace clustering with missing data into five distinct families and discusses their relative strengths and weaknesses, and exposes some gaps in the current literature.

Unsigned Matrix Completion

- Computer Science
- 2021 IEEE International Symposium on Information Theory (ISIT)
- 2021

Under an algebraic geometry framework, this work provides fundamental results regarding unique recovery up to a class of rank-preserving sign patterns and finite completability in the problem of unsigned matrix retrieval.

Non-intrusive reduced-order models for parametric partial differential equations via data-driven operator inference

- Computer Science, Mathematics
- ArXiv
- 2021

This work formulates a new approach to reduced modeling of parameterized, time-dependent partial differential equations (PDEs) using Operator Inference, a scientific machine learning framework combining data-driven learning and physics-based modeling that can be solved rapidly to map parameter values to approximate PDE solutions.

Efficient Map Prediction via Low-Rank Matrix Completion

- Computer Science
- 2021 IEEE International Conference on Robotics and Automation (ICRA)
- 2021

It is demonstrated that with the proposed real-time map prediction framework, the coverage convergence rate (per action step) for a set of representative coverage planning methods commonly used for environmental modeling and monitoring tasks can be significantly improved.

Learning from Comparisons and Choices

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2018

A convex relaxation is proposed for learning the MultiNomial Logit (MNL) model, and it is shown that it is minimax optimal up to a logarithmic factor by comparing its performance to a fundamental lower bound.

#### References

Showing 1-10 of 89 references

Low algebraic dimension matrix completion

- Computer Science
- 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
- 2017

A new LADMC algorithm is proposed that leverages existing LRMC methods on a tensorized representation of the data and can succeed in many cases where traditional LRMC is guaranteed to fail.

Algebraic Variety Models for High-Rank Matrix Completion

- Mathematics, Computer Science
- ICML
- 2017

This work considers a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e. each data point is a solution to a system of polynomial equations, and proposes an efficient matrix completion algorithm that minimizes a convex or non-convex surrogate of the rank of the matrix of monomial features.

Tensor completion and low-n-rank tensor recovery via convex optimization

- Mathematics
- 2011

In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery…

High-Rank Matrix Completion and Subspace Clustering with Missing Data

- Mathematics, Computer Science
- ArXiv
- 2011

Under mild assumptions each column of X can be perfectly recovered with high probability from an incomplete version so long as at least C r N log^2(n) entries of X are observed uniformly at random, with C > 1 a constant depending on the usual incoherence conditions.

Online High Rank Matrix Completion

- Computer Science, Mathematics
- 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019

A new model for high rank matrix completion (HRMC) is proposed, together with batch and online methods to fit the model and an out-of-sample extension to complete new data; it enjoys much lower space and time complexity than previous methods for HRMC.

Polynomial Matrix Completion for Missing Data Imputation and Transductive Learning

- Computer Science, Mathematics
- AAAI
- 2020

It is shown that the complete matrix of minimum intrinsic dimension can be identified by minimizing the rank of the matrix in a high dimensional feature space by developing a new formulation using the kernel trick together with a new relaxation of the rank objective.

A converse to low-rank matrix completion

- Mathematics, Computer Science
- 2016 IEEE International Symposium on Information Theory (ISIT)
- 2016

Conditions on X (genericity) and a deterministic condition on Ω guarantee that if there is a rank-r matrix that agrees with the observed entries, then X is indeed rank-r; this condition is satisfied with high probability under uniform random sampling schemes with only O(max{r, log d}) samples per column.

Guaranteed Rank Minimization via Singular Value Projection

- Computer Science, Mathematics
- NIPS
- 2010

Results show that the SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem.

High-Rank Matrix Completion and Clustering under Self-Expressive Models

- Computer Science, Mathematics
- NIPS
- 2016

This work proposes efficient algorithms for simultaneous clustering and completion of incomplete high-dimensional data that lie in a union of low-dimensional subspaces and shows that when the data matrix is low-rank, the algorithm performs on par with or better than low-rank matrix completion methods, while for high-rank data matrices, the method significantly outperforms existing algorithms.

On the Best Rank-1 Approximation of Higher-Order Supersymmetric Tensors

- Mathematics, Computer Science
- SIAM J. Matrix Anal. Appl.
- 2002

It is shown that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications.