Abstract
By a tensor problem in general, we mean one in which all input and output data are given (exactly or approximately) in tensor formats, with the number of representation parameters much smaller than the total amount of data. For such problems it is natural to seek algorithms that work with data only in tensor formats, maintaining the same small number of representation parameters, at the price of an approximation (recompression) step in every operation that contaminates all computed results. Since recompression time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to make recompression inexpensive and reliable. We present fast recompression procedures with complexity sublinear in the size of the data and propose methods for basic linear algebra operations with all matrix operands in the Tucker format, mostly through calls to highly optimized Level 3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.
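The abstract's central claim is sublinear-cost recompression: when a three-dimensional tensor is stored in the Tucker format (a small core tensor multiplied by a factor matrix along each mode), reducing its multilinear ranks requires touching only the factors and the small core, never the full array. The sketch below illustrates that idea with a standard QR-then-truncated-HOSVD recompression in NumPy; the function name `tucker_recompress`, the relative truncation threshold `eps`, and the rank-selection rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np


def tucker_recompress(G, U, eps=1e-8):
    """Recompress a 3D Tucker tensor (core G, factors U[0..2]) to lower
    multilinear ranks, without ever forming the full n1 x n2 x n3 array.

    Cost per mode is O(n_k * r_k^2) for the QR plus O(r^4) for core work,
    i.e. sublinear in the number of tensor entries. This is a generic
    HOSVD-style recompression sketch, not necessarily the paper's method.
    """
    U = list(U)
    # 1. Orthogonalize each factor; absorb the R-factors into the core.
    for k in range(3):
        Q, R = np.linalg.qr(U[k])                    # U[k]: n_k x r_k
        U[k] = Q
        G = np.moveaxis(np.tensordot(R, np.moveaxis(G, k, 0), axes=1), 0, k)
    # 2. Truncated HOSVD of the (small) core: SVD of each mode-k unfolding.
    for k in range(3):
        Gk = np.moveaxis(G, k, 0).reshape(G.shape[k], -1)
        W, s, _ = np.linalg.svd(Gk, full_matrices=False)
        r_new = max(1, int(np.count_nonzero(s > eps * s[0])))  # rank by threshold
        W = W[:, :r_new]
        U[k] = U[k] @ W                              # shrink factor: n_k x r_new
        G = np.moveaxis(np.tensordot(W.T, np.moveaxis(G, k, 0), axes=1), 0, k)
    return G, U


# Example: recompress a random rank-(20,20,20) Tucker tensor of size 200^3.
n, r = 200, 20
G0 = np.random.randn(r, r, r)
U0 = [np.linalg.qr(np.random.randn(n, r))[0] for _ in range(3)]
G1, U1 = tucker_recompress(G0, U0, eps=1e-4)
```

Orthogonalizing the factors first is what confines the SVDs to the small core: the whole procedure operates on matrices of size at most n x r and a core of size r x r x r, which is the sense in which recompression is sublinear in the data size.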
| Original language | English |
| --- | --- |
| Pages (from-to) | 169-188 |
| Number of pages | 20 |
| Journal | Computing (Vienna/New York) |
| Volume | 85 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Jul 2009 |
Keywords
- Data compression
- Data-sparse methods
- Dimensionality reduction
- Large-scale matrices
- Low rank approximations
- Multidimensional arrays
- Skeleton decompositions
- Tensor approximations
- Tucker decomposition