Linear algebra for tensor problems

I. V. Oseledets, D. V. Savostyanov, E. E. Tyrtyshnikov

Research output: Contribution to journal › Article

Abstract

By a tensor problem we mean, in general, one in which all input and output data are given (exactly or approximately) in tensor formats, with the number of representation parameters much smaller than the total amount of data. For such problems it is natural to seek algorithms that work with the data only in tensor formats, maintaining the same small number of representation parameters, at the price of every computed result being contaminated by an approximation (recompression) performed in each operation. Since approximation time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to make recompression inexpensive and reliable. We present fast recompression procedures with sublinear complexity with respect to the size of the data and propose methods for basic linear algebra operations with all matrix operands in the Tucker format, mostly through calls to highly optimized level-3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.
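The recompression idea the abstract refers to — keep a Tucker representation, never form the full tensor, and truncate after each operation — can be illustrated with a short NumPy sketch of the standard HOSVD-style recompression for a three-dimensional tensor: orthogonalize the factors by QR, fold the triangular parts into the small core, then truncate via SVDs of the core unfoldings. The function name `tucker_recompress` and its interface are illustrative assumptions for this sketch, not the paper's actual algorithms or code.

```python
# Illustrative sketch only: generic HOSVD-based Tucker recompression,
# not the specific procedures proposed in the paper.
import numpy as np

def tucker_recompress(G, U1, U2, U3, eps=1e-8):
    """Truncate a Tucker representation T = G x1 U1 x2 U2 x3 U3.

    G is the r1 x r2 x r3 core; Uk are the n x rk factors. The full
    tensor T is never formed.
    """
    # 1. Orthogonalize each factor; absorb the triangular parts into the core.
    Qs, Rs = zip(*(np.linalg.qr(U) for U in (U1, U2, U3)))
    for k, R in enumerate(Rs):
        G = np.moveaxis(np.tensordot(R, np.moveaxis(G, k, 0), axes=1), 0, k)
    # 2. HOSVD of the small core: SVD of each mode-k unfolding, truncate by eps.
    Vs = []
    for k in range(3):
        Gk = np.moveaxis(G, k, 0).reshape(G.shape[k], -1)
        u, s, _ = np.linalg.svd(Gk, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))
        Vs.append(u[:, :rk])
    # 3. Project the core onto the dominant subspaces and update the factors.
    for k, V in enumerate(Vs):
        G = np.moveaxis(np.tensordot(V.T, np.moveaxis(G, k, 0), axes=1), 0, k)
    return G, [Qs[k] @ Vs[k] for k in range(3)]
```

Because only the n-by-r factors and the small core are touched, the cost behaves like O(n r^2 + r^4) for mode size n and ranks r — sublinear in the n^3 entries of the full tensor, which is the property of recompression the abstract emphasizes.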

Original language: English
Pages (from-to): 169-188
Number of pages: 20
Journal: Computing (Vienna/New York)
Volume: 85
Issue number: 3
DOI: 10.1007/s00607-009-0047-6
ISSN: 0010-485X
Publication status: Published - 1 Jul 2009


Keywords

  • Data compression
  • Data-sparse methods
  • Dimensionality reduction
  • Large-scale matrices
  • Low rank approximations
  • Multidimensional arrays
  • Skeleton decompositions
  • Tensor approximations
  • Tucker decomposition

Cite this

Oseledets, I. V.; Savostyanov, D. V.; Tyrtyshnikov, E. E. Linear algebra for tensor problems. Computing (Vienna/New York). 2009;85(3):169-188.
@article{3abf5a9d5bcb43cba42a4c11225f976c,
  title    = "Linear algebra for tensor problems",
  author   = "Oseledets, {I. V.} and Savostyanov, {D. V.} and Tyrtyshnikov, {E. E.}",
  journal  = "Computing (Vienna/New York)",
  year     = "2009",
  month    = "7",
  day      = "1",
  volume   = "85",
  number   = "3",
  pages    = "169--188",
  doi      = "10.1007/s00607-009-0047-6",
  issn     = "0010-485X",
  language = "English",
}

