Article

Small-Size Algorithms for the Type-I Discrete Cosine Transform with Reduced Complexity

by Miłosz Kolenderski and Aleksandr Cariow *
Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Żołnierska 52, 71-210 Szczecin, Poland
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(15), 2411; https://doi.org/10.3390/electronics11152411
Submission received: 22 June 2022 / Revised: 23 July 2022 / Accepted: 26 July 2022 / Published: 2 August 2022
(This article belongs to the Special Issue Efficient Algorithms and Architectures for DSP Applications)

Abstract:
Discrete cosine transforms (DCTs) are widely used in intelligent electronic systems for data storage, processing, and transmission. The popularity of these transforms is explained, on the one hand, by their unique properties and, on the other hand, by the availability of fast algorithms that minimize the computational and hardware complexity of their implementation. The type-I DCT has so far been perhaps the least popular, and there have been practically no publications on fast algorithms for its implementation. At present, however, the situation has changed, and the development of effective methods for implementing this type of DCT has become an urgent task. This article proposes several algorithmic solutions for implementing type-I DCTs. A set of type-I DCT algorithms for the small lengths N = 2, 3, 4, 5, 6, 7, 8 is presented. The effectiveness of the proposed solutions stems from a favorable factorization of the small-size DCT-I matrices, which reduces the complexity of implementing transforms of this type.

1. Introduction

The discrete cosine transform (DCT) [1,2,3,4,5,6] is widely used in many radio-electronic and telecommunication systems for data processing and transmission, including digital signal and image processing [7,8,9], radar imaging [10], digital watermarking [11,12], analysis of hyperspectral data [13,14], video compression [15,16,17,18,19,20,21,22], etc. In fact, there are eight different types of DCT [4,5,6]. Among them, the type-I discrete cosine transform (DCT-I) is one of the less popular. Recently, however, it has been increasingly used in wireless communication systems to modernize multicarrier modulation and channel estimation techniques for Long Term Evolution (LTE) [23,24,25,26,27]. Since, like other orthogonal transforms, the DCT-I is time-consuming to compute directly, the search for algorithmic solutions that reduce this time is an urgent task. The reduction of the number of arithmetic operations is provided by so-called fast algorithms. Unfortunately, few articles are devoted to fast algorithms for computing the DCT-I; with rare exceptions, most publications known to the authors deal with fast algorithms for other types of DCT. It should be noted that, as with other discrete orthogonal transforms [28,29,30], DCT-I algorithms for short sequences are of particular interest. In hardware or software implementations of digital signal processing methods, small-size DCT-I cores can serve as building blocks for the synthesis of larger-size algorithms [4,7,31,32,33]. Nevertheless, there is practically no information about DCT-I algorithms for short-length sequences in the publications available to the authors. To eliminate this shortcoming, fast DCT-I algorithms for input sequences of length N = 2, 3, 4, 5, 6, 7, 8 are described in detail below.
This article continues the series of publications related to the development of small-sized algorithms for fast orthogonal transforms [28,29,30].

2. Preliminary Remarks

The DCT-I transform is given by the following equation [3,4,5]:
$$c_k = \sqrt{\frac{2}{N-1}} \sum_{n=0}^{N-1} x_n\, \varepsilon_n \varepsilon_k \cos\frac{\pi n k}{N-1},$$
$$\varepsilon_n, \varepsilon_k = \begin{cases} \frac{1}{\sqrt{2}}, & n, k = 0,\\[2pt] \frac{1}{\sqrt{2}}, & n, k = N-1,\\[2pt] 1, & \text{otherwise}, \end{cases}$$
$$k, n = 0, 1, \ldots, N-1,$$
where {x_n} is the input data sequence and {c_k} is the sequence of DCT coefficients. In matrix-vector notation, the pair of DCT/IDCT transforms can be represented as:
$$Y_{N\times 1} = C_N X_{N\times 1}, \qquad X_{N\times 1} = C_N^{\mathrm{T}} Y_{N\times 1},$$
where $C_N = [c_{k,n}]$ is the $(N \times N)$ discrete cosine transform matrix, $X_{N\times 1} = [x_0, x_1, \ldots, x_{N-1}]^{\mathrm{T}}$ and $Y_{N\times 1} = [y_0, y_1, \ldots, y_{N-1}]^{\mathrm{T}}$ are the input and output data vectors, respectively. The symbol "T" denotes the matrix transpose operation, and
$$c_{k,n} = \sqrt{\frac{2}{N-1}}\, \varepsilon_n \varepsilon_k \cos\frac{\pi n k}{N-1}.$$
In the case of the DCT-I, $C_N = C_N^{\mathrm{T}}$. Based on these general considerations, we can describe the entries of the DCT matrix in the following way:
$$C_N = \begin{bmatrix} c_{0,0} & c_{0,1} & \cdots & c_{0,N-1} \\ c_{1,0} & c_{1,1} & \cdots & c_{1,N-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \cdots & c_{N-1,N-1} \end{bmatrix}.$$
The entries of this matrix are real numbers and their values depend on both the indexes k , n and the number N. However, it will be more convenient for us to denote the numerical values of the matrix C N entries by means of the letters of the ordinary Latin alphabet a N , b N , c N , , z N . In this case, the subscript N will indicate the size of the DCT matrix. This will simplify the identification of structural features of the matrix and the presence in it of compositions of the same values of the entries.
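For reference, the entries (3) can be generated numerically. The short NumPy sketch below (the function name and code are ours, not part of the original) builds $C_N$ and confirms two properties used throughout the article: the DCT-I matrix is symmetric and involutory, i.e., $C_N = C_N^{\mathrm{T}}$ and $C_N C_N = I$.

```python
import numpy as np

def dct1_matrix(N: int) -> np.ndarray:
    """Build the orthonormal DCT-I matrix C_N from Equations (1)-(3)."""
    eps = np.ones(N)
    eps[0] = eps[N - 1] = 1.0 / np.sqrt(2.0)   # epsilon_0 = epsilon_{N-1} = 1/sqrt(2)
    k = np.arange(N).reshape(-1, 1)            # row index
    n = np.arange(N).reshape(1, -1)            # column index
    return np.sqrt(2.0 / (N - 1)) * np.outer(eps, eps) * np.cos(np.pi * k * n / (N - 1))

# Symmetry and involution for all sizes considered in this article.
for N in range(2, 9):
    C = dct1_matrix(N)
    assert np.allclose(C, C.T)                 # C_N = C_N^T
    assert np.allclose(C @ C, np.eye(N))       # C_N is its own inverse
```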

3. Small-Size Algorithms for the DCT-I

3.1. Algorithm for the 2-Point DCT-I

Let X 2 × 1 = x 0 , x 1 T and Y 2 × 1 = y 0 , y 1 T be 2-element input and output data vectors. The problem is to calculate the product:
Y 2 × 1 = C 2 X 2 × 1 ,
where
$$C_2 = \begin{bmatrix} a_2 & a_2 \\ a_2 & -a_2 \end{bmatrix}, \quad a_2 = \frac{\sqrt{2}}{2}.$$
Direct computation of (4) requires four multiplications and two additions. Because every vector element needs to be multiplied by the same factor, it is possible to perform the additions first and then the multiplications.
Knowing that, the rationalized computational procedure for computing the 2-point DCT-I can be described in the following form:
Y 2 × 1 = D 2 H 2 X 2 × 1 ,
where
$$H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \quad D_2 = \mathrm{diag}(s_0^{(2)}, s_1^{(2)}), \quad s_0^{(2)} = s_1^{(2)} = a_2.$$
As shown in (5), the 2-point DCT-I can be calculated using only two multiplications and two additions.
The same algorithm is represented as a data flow graph in Figure 1. In this paper, all data flow graphs are oriented from left to right. Straight lines denote data transfer operations (data paths). Multiplications are shown as circles with a number inside denoting the factor by which the data are multiplied. Points where multiple lines converge denote summation nodes. Additionally, dashed lines visualize data paths that change the sign of the transferred number (these paths multiply the number by −1). We deliberately use plain lines without arrows so as not to clutter the graphs.
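The procedure (5) can be sketched as follows (an illustrative NumPy fragment, not part of the original; it checks the factorization against the direct product (4)):

```python
import numpy as np

A2 = np.sqrt(2.0) / 2.0                       # a_2 in Equation (5)
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])      # Hadamard matrix of order 2

def dct1_2pt(x):
    """2-point DCT-I: additions first (H2), then two scalings by a_2."""
    t = H2 @ x                                # t0 = x0 + x1, t1 = x0 - x1 (two additions)
    return A2 * t                             # two multiplications

x = np.array([0.7, -1.3])
C2 = A2 * H2                                  # direct matrix from Equation (4)
assert np.allclose(dct1_2pt(x), C2 @ x)
```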

3.2. Algorithm for the 3-Point DCT-I

Let X 3 × 1 = x 0 , x 1 , x 2 T and Y 3 × 1 = y 0 , y 1 , y 2 T be 3-element input and output data vectors. The 3-point DCT-I can be represented as:
Y 3 × 1 = C 3 X 3 × 1 ,
where
$$C_3 = \begin{bmatrix} a_3 & b_3 & a_3 \\ b_3 & 0 & -b_3 \\ a_3 & -b_3 & a_3 \end{bmatrix}, \quad a_3 = \frac{1}{2}, \quad b_3 = \frac{\sqrt{2}}{2}.$$
The C 3 matrix can be described as a sum of two matrices:
$$C_3 = C_3^{(1)} + C_3^{(2)},$$
where
$$C_3^{(1)} = \begin{bmatrix} a_3 & 0 & a_3 \\ 0 & 0 & 0 \\ a_3 & 0 & a_3 \end{bmatrix}, \quad C_3^{(2)} = \begin{bmatrix} 0 & b_3 & 0 \\ b_3 & 0 & -b_3 \\ 0 & -b_3 & 0 \end{bmatrix}.$$
The C 3 ( 1 ) matrix can be reduced to a 2 × 2 matrix.
$$C_3^{(1)} \rightarrow C_2^{(1)} = \begin{bmatrix} a_3 & a_3 \\ a_3 & a_3 \end{bmatrix}.$$
Multiplication by this matrix can be performed using only one multiplication and one addition via the following formula:
$$Y_{2\times 1}^{(1)} = \mathbf{1}_{2\times 1}\, a_3\, \mathbf{1}_{1\times 2}\, X_{2\times 1}^{(1)},$$
where
$$X_{2\times 1}^{(1)} = [x_0, x_2]^{\mathrm{T}}, \quad \mathbf{1}_{1\times 2} = [1, 1], \quad \mathbf{1}_{2\times 1} = [1, 1]^{\mathrm{T}}, \quad Y_{2\times 1}^{(1)} = [y_0, y_2]^{\mathrm{T}}.$$
It is also possible to reduce the number of multiplications involving the $C_3^{(2)}$ matrix. In this case, the addition in the second row can be performed before the multiplication. Moreover, $x_1$ can be multiplied by $b_3$ once and then replicated, changing the sign of the replicated value, to reproduce rows 1 and 3.
Taking into account the transformations made, the rationalized computational procedure for the 3-point DCT-I can be written in the following form:
Y 3 × 1 = W 3 ( 2 ) D 3 W 3 ( 1 ) X 3 × 1 ,
where
$$W_3^{(1)} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & -1 \end{bmatrix}, \quad D_3 = \mathrm{diag}(s_0^{(3)}, s_1^{(3)}, s_2^{(3)}), \quad W_3^{(2)} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{bmatrix},$$
$$s_0^{(3)} = a_3, \quad s_1^{(3)} = s_2^{(3)} = b_3.$$
As can be seen, in this and some other cases, the developed algorithms contain multiplications by 1/2. This operation reduces to an ordinary right shift by one bit position. Owing to the ease of implementation, such operations are usually not counted when estimating computational complexity. Therefore, the 3-point DCT-I can be calculated using only two multiplications and four additions. Figure 2 represents this algorithm in the form of a data flow graph.
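Under the sign conventions derived from (3), the factorization (7) can be checked numerically (an illustrative sketch; the matrix signs here are our reconstruction, since they were lost in the extracted text):

```python
import numpy as np

a3, b3 = 0.5, np.sqrt(2.0) / 2.0
C3 = np.array([[a3, b3, a3],
               [b3, 0.0, -b3],
               [a3, -b3, a3]])               # Equation (6), signs recovered from (3)

# Factorization (7): Y = W3_2 @ D3 @ W3_1 @ X
W3_1 = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, -1.0]])
D3 = np.diag([a3, b3, b3])                   # the a3 = 1/2 scaling is a right shift
W3_2 = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, -1.0, 0.0]])

assert np.allclose(W3_2 @ D3 @ W3_1, C3)
```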

3.3. Algorithm for the 4-Point DCT-I

Let $X_{4\times 1} = [x_0, x_1, x_2, x_3]^{\mathrm{T}}$ and $Y_{4\times 1} = [y_0, y_1, y_2, y_3]^{\mathrm{T}}$ be 4-element input and output data vectors. The 4-point DCT-I can be represented as:
Y 4 × 1 = C 4 X 4 × 1 ,
where
$$C_4 = \begin{bmatrix} a_4 & b_4 & b_4 & a_4 \\ b_4 & a_4 & -a_4 & -b_4 \\ b_4 & -a_4 & -a_4 & b_4 \\ a_4 & -b_4 & b_4 & -a_4 \end{bmatrix}, \quad a_4 = \frac{\sqrt{6}}{6}, \quad b_4 = \frac{\sqrt{3}}{3}.$$
In $C_4$, an optimized version of the algorithm is not visible at first glance. What we can do is change the order of columns and rows of the matrix while also permuting the corresponding elements of the input and output vectors. We chose to swap rows number 1 and number 3 and, likewise, columns number 1 and number 3 (counting from zero). As a result, we get the following matrix:
$$\tilde{C}_4 = \begin{bmatrix} a_4 & a_4 & b_4 & b_4 \\ a_4 & -a_4 & b_4 & -b_4 \\ b_4 & b_4 & -a_4 & -a_4 \\ b_4 & -b_4 & -a_4 & a_4 \end{bmatrix} = \begin{bmatrix} A_2^{(1)} & B_2^{(1)} \\ B_2^{(1)} & -A_2^{(1)} \end{bmatrix}.$$
Because of this structure, it can be computed using the following procedure [34]:
$$\tilde{C}_4 = (T_{2\times 3} \otimes I_2)\left[(A_2^{(1)} - B_2^{(1)}) \oplus \left(-(A_2^{(1)} + B_2^{(1)})\right) \oplus B_2^{(1)}\right](T_{3\times 2} \otimes I_2),$$
where
$$T_{3\times 2} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix}, \quad T_{2\times 3} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}.$$
In this article, the "⊗" and "⊕" symbols denote the Kronecker product and the direct sum of two matrices, respectively [35,36]. Such a factorization reduces the number of multiplications to 3/4 of the original count. Both $A_2^{(1)}$ and $B_2^{(1)}$ share the same structure: a Hadamard matrix of order two multiplied by a scalar. Because of that, it is possible to further halve the number of multiplications by first applying the Hadamard matrix and only then multiplying by the proper scalars.
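The block-structure trick can be illustrated generically. Assuming the block pattern $[[A, B], [B, -A]]$ of $\tilde{C}_4$ (the signs in the extracted matrices were lost, so this pattern is our reading), three block multiplications suffice instead of four:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# Block matrix with the assumed structure of C~4: [[A, B], [B, -A]]
M = np.block([[A, B], [B, -A]])

# Three block multiplications instead of four:
#   y1 = (A - B) x1 + B (x1 + x2)
#   y2 = -(A + B) x2 + B (x1 + x2)
T3x2 = np.array([[1, 0], [0, 1], [1, 1]])
T2x3 = np.array([[1, 0, 1], [0, 1, 1]])
blocks = [A - B, -(A + B), B]

x = rng.standard_normal(4)
u = np.kron(T3x2, np.eye(2)) @ x                       # pre-additions
v = np.concatenate([blocks[i] @ u[2 * i:2 * i + 2] for i in range(3)])
y = np.kron(T2x3, np.eye(2)) @ v                       # post-additions
assert np.allclose(y, M @ x)
```

The same identity underlies the 7-point algorithm, where the matrix $A_4$ has an analogous block structure.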
Knowing all of that, the rationalized computational procedure for computing the 4-point DCT-I can be described in the following form:
Y 4 × 1 = P 4 A 4 × 6 D 6 W 6 A 6 × 4 P 4 X 4 × 1
where
$$P_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \quad A_{6\times 4} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix}, \quad W_6 = I_3 \otimes H_2, \quad D_6 = \mathrm{diag}(s_0^{(4)}, s_1^{(4)}, \ldots, s_5^{(4)}),$$
$$s_0^{(4)} = s_1^{(4)} = a_4 - b_4, \quad s_2^{(4)} = s_3^{(4)} = -(a_4 + b_4), \quad s_4^{(4)} = s_5^{(4)} = b_4, \quad A_{4\times 6} = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \end{bmatrix}.$$
As shown in Figure 3, the 4-point DCT-I can be computed using only six multiplications and 12 additions.

3.4. Algorithm for the 5-Point DCT-I

Let X 5 × 1 = x 0 , x 1 , x 2 , x 3 , x 4 T and Y 5 × 1 = y 0 , y 1 , y 2 , y 3 , y 4 T be 5-element input and output data vectors. The 5-point DCT-I can be represented as:
Y 5 × 1 = C 5 X 5 × 1 ,
where
$$C_5 = \begin{bmatrix} a_5 & b_5 & b_5 & b_5 & a_5 \\ b_5 & b_5 & 0 & -b_5 & -b_5 \\ b_5 & 0 & -c_5 & 0 & b_5 \\ b_5 & -b_5 & 0 & b_5 & -b_5 \\ a_5 & -b_5 & b_5 & -b_5 & a_5 \end{bmatrix}, \quad a_5 = \frac{\sqrt{2}}{4}, \quad b_5 = \frac{1}{2}, \quad c_5 = \frac{\sqrt{2}}{2}.$$
In this matrix, it is also worth changing the order of columns and rows and reordering the corresponding vector elements. It is also easier to fit one of the known patterns after changing some of the signs. After swapping column 2 with column 5 and row 1 with row 4, and inverting the signs of $x_3$ and $y_3$, the matrix looks the following way:
[Permuted matrix rendered as an image in the original article.]
This matrix can be split into three matrices for applying corresponding rationalized procedures:
[Matrix splitting rendered as an image in the original article.]
For the $A_2^{(2)}$ matrices, optimized formulas already exist. The $A_{3\times 5}$ matrix requires an individual approach. The left part of this matrix can be reduced to a single addition and two multiplications: the first step is to add $x_1$ and $x_2$; then the same sum can be used twice, multiplied by $b_5$ and by $a_5$, and added to the corresponding rows. In this matrix, it is worth calculating the multiplications from column 3 separately. The right part, containing the $A_2^{(2)}$ matrix, can also be computed using the already existing procedures.
After applying all of this, the rationalized computational procedure for computing the 5-point DCT-I can be described in the following form:
Y 5 × 1 = P 5 ( 2 ) W 5 ( 2 ) A 5 × 7 D 7 M 7 × 5 W 5 ( 1 ) P 5 ( 1 ) X 5 × 1 ,
where
P 5 ( 1 ) = 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 , W 5 ( 1 ) = 1 1 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 0 0 0 0 0 1 1 , M 7 × 5 = 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 ,
$$D_7 = \mathrm{diag}(s_0^{(5)}, s_1^{(5)}, \ldots, s_6^{(5)}), \quad s_0^{(5)} = s_1^{(5)} = s_2^{(5)} = s_5^{(5)} = s_6^{(5)} = b_5 = \tfrac{1}{2}, \quad s_3^{(5)} = a_5, \quad s_4^{(5)} = c_5,$$
A 5 × 7 = 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 , W 5 ( 2 ) = I 3 H 2 , P 5 ( 2 ) = 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 .
As shown in (11), the 5-point DCT-I can be calculated using only two multiplications and 10 additions. Figure 4 represents this algorithm in the form of a data flow graph.

3.5. Algorithm for the 6-Point DCT-I

Let X 6 × 1 = x 0 , x 1 , x 2 , x 3 , x 4 , x 5 T and Y 6 × 1 = y 0 , y 1 , y 2 , y 3 , y 4 , y 5 T be 6-element input and output data vectors. The 6-point DCT-I can be represented as:
Y 6 × 1 = C 6 X 6 × 1 ,
where
$$C_6 = \begin{bmatrix} a_6 & b_6 & b_6 & b_6 & b_6 & a_6 \\ b_6 & c_6 & d_6 & -d_6 & -c_6 & -b_6 \\ b_6 & d_6 & -c_6 & -c_6 & d_6 & b_6 \\ b_6 & -d_6 & -c_6 & c_6 & d_6 & -b_6 \\ b_6 & -c_6 & d_6 & d_6 & -c_6 & b_6 \\ a_6 & -b_6 & b_6 & -b_6 & b_6 & -a_6 \end{bmatrix}, \quad a_6 = \frac{\sqrt{10}}{10}, \quad b_6 = \frac{\sqrt{5}}{5},$$
$$c_6 = \frac{\sqrt{10}}{5}\cos\frac{\pi}{5} \approx 0.51167, \quad d_6 = \frac{\sqrt{10}}{5}\cos\frac{2\pi}{5} \approx 0.19544.$$
Before trying to find any optimization, it is worth changing the order of columns and rows. We begin by swapping columns 2 and 6 and columns 4 and 5, together with rows 2 and 6 and rows 4 and 5. After this operation, the matrix takes the following form:
$$C_6^{(1)} = \begin{bmatrix} a_6 & a_6 & b_6 & b_6 & b_6 & b_6 \\ a_6 & -a_6 & b_6 & b_6 & -b_6 & -b_6 \\ b_6 & b_6 & -c_6 & d_6 & -c_6 & d_6 \\ b_6 & b_6 & d_6 & -c_6 & d_6 & -c_6 \\ b_6 & -b_6 & -c_6 & d_6 & c_6 & -d_6 \\ b_6 & -b_6 & d_6 & -c_6 & -d_6 & c_6 \end{bmatrix} = \begin{bmatrix} A_2^{(3)} & A_{2\times 4} \\ A_{4\times 2} & A_4^{(1)} \end{bmatrix}.$$
Similar to the algorithms for $N = 2, 3, 4, 5$, multiplication by the $A_2^{(3)}$ matrix can be calculated by first performing the additions (multiplying by an order-2 Hadamard matrix) and then performing only two multiplications. The $A_{2\times 4}$ matrix can be optimized by first calculating $x_2 + x_3$ and $x_4 + x_5$, multiplying both sums by $b_6$, and, as the matrix pattern suggests, using an order-2 Hadamard matrix to put everything together. The $A_{4\times 2}$ matrix consists of two halves; each can be reduced to a single multiplication by first computing $x_1 + x_2$ and $x_1 - x_2$, respectively, and multiplying the results by $b_6$. These additions do not need to be computed again, because the additions from $A_2^{(3)}$ can be reused. The $A_4^{(1)}$ matrix is more complex than the previous ones. In this case, the matrix has the following structure:
$$A_4^{(1)} = \begin{bmatrix} -c_6 & d_6 & -c_6 & d_6 \\ d_6 & -c_6 & d_6 & -c_6 \\ -c_6 & d_6 & c_6 & -d_6 \\ d_6 & -c_6 & -d_6 & c_6 \end{bmatrix} = \begin{bmatrix} A_2^{(4)} & A_2^{(4)} \\ A_2^{(4)} & -A_2^{(4)} \end{bmatrix}.$$
It is noticeable that this structure makes it possible to reduce the number of multiplications at least by a factor of 2. The multiplications can be performed for a single vertical half of the matrix, and these values can be reused in the second half; the right half requires an inversion of the signs when the results are reused. In conclusion, only two matrix multiplications by $A_2^{(4)}$ are required instead of four. It is also possible to reduce the number of multiplications within a single $A_2^{(4)}$ matrix. To do so, we can apply one of the templates of matrix structures [34]. In this case, the procedure for $A_2^{(4)}$ takes the following form:
$$Y_{2\times 1} = H_2 \cdot \frac{1}{2}\,\mathrm{diag}\big(d_6 - c_6,\; -(c_6 + d_6)\big) \cdot H_2\, X_{2\times 1}.$$
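The identity behind this template can be checked for a generic symmetric 2 × 2 matrix of the form $[[c, d], [d, c]]$ (signs may be absorbed differently in the paper's version; this sketch only illustrates the Hadamard-diagonal decomposition):

```python
import numpy as np

c, d = 0.51167, 0.19544                # approximate values of c6, d6
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

# [[c, d], [d, c]] = H2 @ diag((c+d)/2, (c-d)/2) @ H2:
# two multiplications (plus halvings, i.e., shifts) instead of four.
M = np.array([[c, d], [d, c]])
D = np.diag([(c + d) / 2.0, (c - d) / 2.0])
assert np.allclose(H2 @ D @ H2, M)
```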
Knowing all of that, the rationalized computational procedure for computing the 6-point DCT-I can be described in the following form:
Y 6 × 1 = P 6 A 6 × 10 W 10 ( 2 ) W 10 ( 1 ) D 10 M 10 × 6 W 6 ( 1 ) P 6 X 6 × 1 ,
where
P 6 = 1 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 , W 6 ( 1 ) = I 3 H 2 , M 10 × 6 = 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1
D 10 = diag ( s 0 , s 1 , s 2 , , s 9 ) , s 0 = s 2 = a 6 , s 1 = s 3 = s 4 = s 5 = b 6 ,
s 6 = s 8 = c 6 + d 6 2 , s 7 = s 9 = c 6 d 6 2 , W 10 ( 1 ) = I 4 ( I 3 H 2 ) , W 10 ( 2 ) = I 6 ( H 2 I 2 ) ,
A 6 × 10 = 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 .
As shown in (14), the 6-point DCT-I can be calculated using only 10 multiplications and 22 additions. Figure 5 represents this algorithm in the form of a data flow graph.

3.6. Algorithm for the 7-Point DCT-I

Let X 7 × 1 = x 0 , x 1 , x 2 , x 3 , x 4 , x 5 , x 6 T and Y 7 × 1 = y 0 , y 1 , y 2 , y 3 , y 4 , y 5 , y 6 T be 7-element input and output data vectors. The 7-point DCT-I can be represented as:
Y 7 × 1 = C 7 X 7 × 1 ,
where
$$C_7 = \begin{bmatrix} a_7 & b_7 & b_7 & b_7 & b_7 & b_7 & a_7 \\ b_7 & c_7 & a_7 & 0 & -a_7 & -c_7 & -b_7 \\ b_7 & a_7 & -a_7 & -d_7 & -a_7 & a_7 & b_7 \\ b_7 & 0 & -d_7 & 0 & d_7 & 0 & -b_7 \\ b_7 & -a_7 & -a_7 & d_7 & -a_7 & -a_7 & b_7 \\ b_7 & -c_7 & a_7 & 0 & -a_7 & c_7 & -b_7 \\ a_7 & -b_7 & b_7 & -b_7 & b_7 & -b_7 & a_7 \end{bmatrix}, \quad a_7 = \frac{\sqrt{3}}{6}, \quad b_7 = \frac{\sqrt{6}}{6}, \quad c_7 = \frac{1}{2}, \quad d_7 = \frac{\sqrt{3}}{3}.$$
For better clarity, we begin by changing the order of columns and rows in the $C_7$ matrix. In this case, it is worth swapping columns 2 and 7 and columns 4 and 5, together with rows 2 and 7 and rows 4 and 5. This leaves us with the following matrix:
$$C_7^{(1)} = \begin{bmatrix} a_7 & a_7 & b_7 & b_7 & b_7 & b_7 & b_7 \\ a_7 & a_7 & b_7 & b_7 & -b_7 & -b_7 & -b_7 \\ b_7 & b_7 & -a_7 & -a_7 & -d_7 & a_7 & a_7 \\ b_7 & b_7 & -a_7 & -a_7 & d_7 & -a_7 & -a_7 \\ b_7 & -b_7 & -d_7 & d_7 & 0 & 0 & 0 \\ b_7 & -b_7 & a_7 & -a_7 & 0 & c_7 & -c_7 \\ b_7 & -b_7 & a_7 & -a_7 & 0 & -c_7 & c_7 \end{bmatrix} = \begin{bmatrix} A_4 & A_{4\times 3}^{(1)} \\ A_{3\times 4}^{(1)} & A_3^{(1)} \end{bmatrix}.$$
Because the $A_4$ matrix has the following structure:
$$A_4 = \begin{bmatrix} a_7 & a_7 & b_7 & b_7 \\ a_7 & a_7 & b_7 & b_7 \\ b_7 & b_7 & -a_7 & -a_7 \\ b_7 & b_7 & -a_7 & -a_7 \end{bmatrix} = \begin{bmatrix} A_2^{(5)} & B_2 \\ B_2 & -A_2^{(5)} \end{bmatrix},$$
it is possible to apply the following procedure:
$$Y_{4\times 1} = (T_{2\times 3} \otimes I_2)\left[(A_2^{(5)} - B_2) \oplus \left(-(A_2^{(5)} + B_2)\right) \oplus B_2\right](T_{3\times 2} \otimes I_2)\, X_{4\times 1},$$
where
$$T_{3\times 2} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix}, \quad T_{2\times 3} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}.$$
In this way, the number of multiplications in $A_4$ is reduced to only 3/4 of the original count. Additionally, the matrices $(A_2^{(5)} - B_2)$, $(A_2^{(5)} + B_2)$, and $B_2$ have identical structures, and each requires only a single multiplication, as shown in one of the previous procedures. This means that multiplication by the $A_4$ matrix can be reduced to only three multiplications.
The A 4 × 3 ( 1 ) matrix can be split in the following way:
$$A_{4\times 3}^{(1)} = \begin{bmatrix} b_7 & b_7 & b_7 \\ -b_7 & -b_7 & -b_7 \\ -d_7 & a_7 & a_7 \\ d_7 & -a_7 & -a_7 \end{bmatrix},$$
split into its upper and lower $2 \times 3$ halves.
The upper half of this matrix can be calculated by adding all three input values and multiplying the sum once; the same result can be used for both rows by inverting the sign. The bottom half can be computed in a similar way, but its left part requires an additional multiplication by $d_7$. Because this part requires the addition of the second and third input elements, while the upper part requires the addition of all three, it is worth separating the additions into two steps. Therefore, the number of multiplications in $A_{4\times 3}^{(1)}$ can also be reduced to three.
The A 3 ( 1 ) matrix contains only zeros in the first column and the first row and can be reduced to a 2 × 2 matrix:
$$A_3^{(1)} \rightarrow A_2^{(6)} = \begin{bmatrix} c_7 & -c_7 \\ -c_7 & c_7 \end{bmatrix}.$$
The number of multiplications by this matrix can be reduced to a single multiplication by performing the additions first and noting that both rows of this matrix are the same up to an inverted sign.
The last part of the C 7 matrix has the following structure:
$$A_{3\times 4} = \begin{bmatrix} b_7 & -b_7 & -d_7 & d_7 \\ b_7 & -b_7 & a_7 & -a_7 \\ b_7 & -b_7 & a_7 & -a_7 \end{bmatrix}.$$
The first step in this case is to calculate $x_1 - x_2$ and $x_3 - x_4$. The left part of $A_{3\times 4}$ contains three identical rows, so only a single multiplication of the first difference is required. On the right side, it is important to note that $d_7 = 2a_7$, so it is possible to compute this part by multiplying $x_3 - x_4$ only by $a_7$. To calculate the first row, we can use the same result, reverse its sign, and double it with a bitwise shift.
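This multiplication-reuse scheme can be sketched as follows. The sign pattern of $A_{3\times 4}$ is our reconstruction from (3) and should be treated as illustrative; here $x$ denotes the four inputs of this block in the permuted order:

```python
import numpy as np

a7 = np.sqrt(3.0) / 6.0
b7 = np.sqrt(6.0) / 6.0
d7 = np.sqrt(3.0) / 3.0              # note: d7 = 2 * a7

# Sign pattern reconstructed from Equation (3); treat as illustrative.
A3x4 = np.array([[b7, -b7, -d7, d7],
                 [b7, -b7, a7, -a7],
                 [b7, -b7, a7, -a7]])

def mul_A3x4(x):
    """Multiply by A3x4 with 2 multiplications, 1 doubling (shift), 4 additions."""
    u = x[0] - x[1]
    v = x[2] - x[3]
    p = b7 * u                       # shared by all three rows
    q = a7 * v
    return np.array([p - 2.0 * q,    # -d7*v realized as a sign flip plus a shift
                     p + q,
                     p + q])

x = np.array([0.3, -1.1, 2.4, 0.9])
assert np.allclose(mul_A3x4(x), A3x4 @ x)
```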
Knowing all of that, the rationalized computational procedure for computing the 7-point DCT-I can be described in the following form:
Y 7 × 1 = P 7 A 7 A 7 × 10 M 10 × 9 D 9 A 9 W 9 A 9 × 7 P 7 X 7 × 1 ,
where
P 7 = 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 , A 9 × 7 = 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 ,
W 9 = 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 1 , A 9 = 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 ,
D 9 = diag ( s 0 , s 1 , s 2 , , s 8 ) , s 0 = a 7 b 7 , s 1 = a 7 b 7 , s 2 = s 3 = s 7 = b 7 , s 4 = d 7 ,
s 5 = s 8 = a 7 , s 6 = c 7 , s 9 = 2 , M 10 × 9 = I 8 1 1 ,
A 7 × 10 = 1 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 1 0 , A 7 = 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 .
As shown in (16), the 7-point DCT-I can be calculated using only nine multiplications, 21 additions, and a single bitwise shift. Figure 6 represents this algorithm in the form of a data flow graph.

3.7. Algorithm for the 8-Point DCT-I

Let X 8 × 1 = x 0 , x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 T and Y 8 × 1 = y 0 , y 1 , y 2 , y 3 , y 4 , y 5 , y 6 , y 7 T be 8-element input and output data vectors. The 8-point DCT-I can be represented as:
Y 8 × 1 = C 8 X 8 × 1 ,
where
$$C_8 = \begin{bmatrix} a_8 & b_8 & b_8 & b_8 & b_8 & b_8 & b_8 & a_8 \\ b_8 & c_8 & d_8 & e_8 & -e_8 & -d_8 & -c_8 & -b_8 \\ b_8 & d_8 & -e_8 & -c_8 & -c_8 & -e_8 & d_8 & b_8 \\ b_8 & e_8 & -c_8 & -d_8 & d_8 & c_8 & -e_8 & -b_8 \\ b_8 & -e_8 & -c_8 & d_8 & d_8 & -c_8 & -e_8 & b_8 \\ b_8 & -d_8 & -e_8 & c_8 & -c_8 & e_8 & d_8 & -b_8 \\ b_8 & -c_8 & d_8 & -e_8 & -e_8 & d_8 & -c_8 & b_8 \\ a_8 & -b_8 & b_8 & -b_8 & b_8 & -b_8 & b_8 & -a_8 \end{bmatrix}, \quad a_8 = \frac{\sqrt{14}}{14}, \quad b_8 = \frac{\sqrt{7}}{7},$$
$$c_8 = \sqrt{\frac{2}{7}}\cos\frac{\pi}{7} \approx 0.481588, \quad d_8 = \sqrt{\frac{2}{7}}\cos\frac{2\pi}{7} \approx 0.333269, \quad e_8 = \sqrt{\frac{2}{7}}\cos\frac{3\pi}{7} \approx 0.118942.$$
The first step for finding the algorithm is to split the C 8 matrix in the following way:
$$C_8 = A_8^{(1)} + A_8^{(2)},$$
where
$$A_8^{(1)} = \begin{bmatrix} a_8 & b_8 & b_8 & b_8 & b_8 & b_8 & b_8 & a_8 \\ b_8 & 0 & 0 & 0 & 0 & 0 & 0 & -b_8 \\ b_8 & 0 & 0 & 0 & 0 & 0 & 0 & b_8 \\ b_8 & 0 & 0 & 0 & 0 & 0 & 0 & -b_8 \\ b_8 & 0 & 0 & 0 & 0 & 0 & 0 & b_8 \\ b_8 & 0 & 0 & 0 & 0 & 0 & 0 & -b_8 \\ b_8 & 0 & 0 & 0 & 0 & 0 & 0 & b_8 \\ a_8 & -b_8 & b_8 & -b_8 & b_8 & -b_8 & b_8 & -a_8 \end{bmatrix},$$
$$A_8^{(2)} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & c_8 & d_8 & e_8 & -e_8 & -d_8 & -c_8 & 0 \\ 0 & d_8 & -e_8 & -c_8 & -c_8 & -e_8 & d_8 & 0 \\ 0 & e_8 & -c_8 & -d_8 & d_8 & c_8 & -e_8 & 0 \\ 0 & -e_8 & -c_8 & d_8 & d_8 & -c_8 & -e_8 & 0 \\ 0 & -d_8 & -e_8 & c_8 & -c_8 & e_8 & d_8 & 0 \\ 0 & -c_8 & d_8 & -e_8 & -e_8 & d_8 & -c_8 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$
Multiplication by $A_8^{(1)}$ can be optimized by separating the additions from the multiplications and performing the additions first. Simple techniques such as factoring out common terms provide good results in this case, and it is possible to reduce multiplication by $A_8^{(1)}$ to only six multiplications.
The A 8 ( 2 ) matrix contains only zeros on its borders and can be reduced to a 6 × 6 matrix:
$$A_8^{(2)} \rightarrow A_6 = \begin{bmatrix} c_8 & d_8 & e_8 & -e_8 & -d_8 & -c_8 \\ d_8 & -e_8 & -c_8 & -c_8 & -e_8 & d_8 \\ e_8 & -c_8 & -d_8 & d_8 & c_8 & -e_8 \\ -e_8 & -c_8 & d_8 & d_8 & -c_8 & -e_8 \\ -d_8 & -e_8 & c_8 & -c_8 & e_8 & d_8 \\ -c_8 & d_8 & -e_8 & -e_8 & d_8 & -c_8 \end{bmatrix}.$$
To reduce the number of multiplications in this matrix, the first step is to change the order of its columns to 1, 5, 3, 6, 2, 4 and the order of its rows to 3, 5, 1, 4, 2, 6, and then to invert the signs of the last three columns of the resulting matrix. After this operation, the matrix looks as follows:
$$A_6^{(2)} = \begin{bmatrix} e_8 & c_8 & -d_8 & e_8 & c_8 & -d_8 \\ -d_8 & e_8 & c_8 & -d_8 & e_8 & c_8 \\ c_8 & -d_8 & e_8 & c_8 & -d_8 & e_8 \\ -e_8 & -c_8 & d_8 & e_8 & c_8 & -d_8 \\ d_8 & -e_8 & -c_8 & -d_8 & e_8 & c_8 \\ -c_8 & d_8 & -e_8 & c_8 & -d_8 & e_8 \end{bmatrix}.$$
The structure of this matrix can be described in the following way:
$$A_6^{(2)} = \begin{bmatrix} A_3^{(2)} & A_3^{(2)} \\ B_3 & -B_3 \end{bmatrix}, \quad A_3^{(2)} = \begin{bmatrix} e_8 & c_8 & -d_8 \\ -d_8 & e_8 & c_8 \\ c_8 & -d_8 & e_8 \end{bmatrix}, \quad B_3 = -A_3^{(2)}.$$
Because of that it is possible to apply the following formula [34]:
$$Y_{6\times 1} = (A_3^{(2)} \oplus B_3)(H_2 \otimes I_3)\, X_{6\times 1}.$$
This already reduces the number of multiplications by a factor of 2: the input vector is multiplied by two 3 × 3 matrices instead of a single 6 × 6 matrix. These smaller matrices share the same pattern, with all signs of one inverted relative to the other, so the same approach applies to both of them.
Because of the characteristic structure of $A_3^{(2)}$, multiplication by this matrix can be calculated using a three-point circular convolution [37], which for $A_3^{(2)}$ has the following form:
$$A_3^{(2)} = A_3^{(4)}\, A_{3\times 4}\, D_4^{(1)}\, A_{4\times 3}^{(2)}\, A_3^{(3)},$$
where
A 3 ( 3 ) = 1 1 1 1 0 1 0 1 1 , A 4 × 3 ( 2 ) = 1 0 0 0 1 0 0 0 1 0 1 1 , D 4 ( 1 ) = diag ( s 0 , s 1 , s 2 , s 3 ) , s 0 = c d + e 3 ,
s 1 = c + e , s 2 = c d , s 3 = 2 c d + e 3 , A 3 × 4 = 1 0 0 0 0 1 0 1 0 0 1 1 , A 3 ( 4 ) = 1 1 0 1 1 1 1 0 1 .
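The connection to circular convolution can be illustrated generically: multiplying a vector by a 3 × 3 circulant matrix is exactly a 3-point cyclic convolution, which is what makes fast convolution factorizations such as the one above applicable. The sketch below verifies this via the FFT (the numerical values of h are illustrative, not the paper's exact coefficients):

```python
import numpy as np

# Circulant built from a generating sequence h: C[i, j] = h[(i - j) mod 3].
h = np.array([0.4816, 0.3333, 0.1189])                 # e.g., magnitudes of c8, d8, e8
C = np.array([np.roll(h, s) for s in range(3)]).T      # columns are cyclic shifts of h

x = np.array([1.0, -2.0, 0.5])
# Cyclic convolution computed in the frequency domain.
y_fft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
assert np.allclose(C @ x, y_fft)
```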
Knowing all of that, the rationalized computational procedure for computing the 8-point DCT-I can be described in the following form:
Y 8 × 1 = A 8 × 10 A 10 A 10 × 14 D 14 M 14 × 10 W 10 W 10 × 14 M 14 × 8 X 8 × 1 ,
where
M 14 × 8 = P 6 × 8 I 8 P 6 × 8 = 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 ,
W 10 × 14 = W 6 ( 2 ) W 4 × 8 , W 4 × 8 = 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 1 0 1 0 1 0 ,
W 6 ( 2 ) = 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 , W 10 = I 2 A 3 ( 3 ) I 4 , A 3 ( 3 ) = 1 1 1 1 0 1 0 1 1 ,
M 14 × 10 = I 2 A 4 × 3 ( 2 ) M 6 × 4 , M 6 × 4 = 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 , A 4 × 3 ( 2 ) = 1 0 0 0 1 0 0 0 1 0 1 1 ,
D 14 = diag ( s 0 ( 8 ) , s 1 ( 8 ) , s 2 ( 8 ) , , s 13 ( 8 ) ) , s 0 ( 8 ) = c d + e 3 , s 1 ( 8 ) = c + e , s 2 ( 8 ) = c d ,
s 3 ( 8 ) = 2 c d + e 3 , s 4 ( 8 ) = s 0 ( 8 ) , s 5 ( 8 ) = s 1 ( 8 ) , s 6 ( 8 ) = s 2 ( 8 ) , s 7 ( 8 ) = s 3 ( 8 ) , s 8 ( 8 ) = s 11 ( 8 ) = a 8 ,
s 9 ( 8 ) = s 10 ( 8 ) = s 12 ( 8 ) = s 13 ( 8 ) = b 8 , A 10 × 14 = I 2 A 3 × 4 ( 2 ) A 4 × 6 , A 3 × 4 ( 2 ) = 1 0 0 0 0 1 0 1 0 0 1 1 ,
A 4 × 6 = 1 0 0 0 1 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 , A 10 = I 2 A 3 ( 4 ) I 4 , A 3 ( 4 ) = 1 1 0 1 1 1 1 0 1
A 8 × 10 = 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 .
As shown in (18), the 8-point DCT-I can be calculated using only 14 multiplications and 43 additions. Figure 7 represents this algorithm in the form of a data flow graph.

4. Computation Complexity

Although we noted the number of arithmetic operations for each algorithm separately, in this section we provide a summary. Table 1 shows estimates of the number of arithmetic operations for the short-length DCT-I algorithms. The penultimate column of Table 1 shows the percentage reduction in the number of multiplications, while the last column shows the percentage reduction in the number of additions.

5. Conclusions

This article presents a set of small-size type-I discrete cosine transform algorithms with a reduced number of multiplications. This suggests that, with a correct hardware implementation of the developed algorithms in the form of full-fledged ASIC modules, these modules will occupy less chip area and consume less energy. As a result, an entire system in which these modules are used as building blocks will have minimal dimensions and low power consumption. This approach is especially important for battery-powered devices. While modern stationary data processing systems have sufficient processing power thanks to parallelized calculations, the design of battery-powered mobile and airborne systems involves many conflicting factors that prevent maximum performance. The parallelization of computing, traditionally used to achieve high data processing speed, leads to an increase in hardware costs and, as a result, to an increase in the size, weight, and power consumption of the entire system. Therefore, solutions are needed that, on the one hand, make maximum use of parallel computing and, on the other hand, minimize hardware implementation costs. With proper implementation, the algorithms proposed in this article can provide high technical characteristics. In the future, we plan to expand the set of proposed algorithmic solutions, as well as to implement and present the algorithms in the form of IP cores. These issues will be consistently reflected in the authors' subsequent publications.

Author Contributions

Conceptualization, M.K. and A.C.; methodology, A.C.; software, M.K.; validation, M.K. and A.C.; formal analysis, M.K. and A.C.; investigation, M.K. and A.C.; resources, A.C.; data curation, M.K.; writing—original draft preparation, M.K.; writing—review and editing, A.C.; visualization, M.K.; supervision, A.C.; project administration, M.K.; funding acquisition, M.K. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This research was supported by the ZUT Highfliers School (Szkoła Orłów ZUT) project, coordinated by Piotr Sulikowski, within the framework of the program of the Minister of Education and Science, Poland (Grant No. MNiSW/2019/391/DIR/KH, POWR.03.01.00-00-P015/18), co-financed by the European Social Fund; the amount of financing was PLN 1,704,201.66.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmed, N.; Natarajan, T.; Rao, K. Discrete Cosine Transform. IEEE Trans. Comput. 1974, C-23, 90–93. [Google Scholar] [CrossRef]
  2. Ahmed, N.; Rao, K.R. Orthogonal Transforms for Digital Signal Processing; Springer: Berlin/Heidelberg, Germany, 1975.
  3. Rao, K.; Yip, P. Discrete Cosine Transform: Algorithms, Advantages, Applications; Academic Press: Cambridge, MA, USA, 1990.
  4. Britanak, V.; Yip, P.; Rao, K. Discrete Cosine and Sine Transforms: General Properties, Fast Algorithms and Integer Approximations; Academic Press: Cambridge, MA, USA, 2007.
  5. Ochoa-Domínguez, H.; Rao, K.R. Discrete Cosine Transform; CRC Press: Boca Raton, FL, USA, 2019.
  6. Elliott, D.F.; Rao, K.R. Fast Transforms: Algorithms, Analyses, Applications; Academic Press: Cambridge, MA, USA, 1983.
  7. Chitprasert, B.; Rao, K.R. Discrete Cosine Transform Filtering. Signal Process. 1990, 19, 235–245.
  8. Armas Vega, E.A.; Sandoval Orozco, A.L.; García Villalba, L.J.; Hernandez-Castro, J. Digital Images Authentication Technique Based on DWT, DCT and Local Binary Patterns. Sensors 2018, 18, 3372.
  9. Krikor, L.; Baba, S.; Alnasiri, T.; Shaaban, Z. Image Encryption Using DCT and Stream Cipher. Eur. J. Sci. Res. 2009, 32, 45–57.
  10. Yang, J.; Jin, T.; Xiao, C.; Huang, X. Compressed Sensing Radar Imaging: Fundamentals, Challenges, and Advances. Sensors 2019, 19, 3100.
  11. Lee, C.F.; Shen, J.J.; Chen, Z.R.; Agrawal, S. Self-Embedding Authentication Watermarking with Effective Tampered Location Detection and High-Quality Image Recovery. Sensors 2019, 19, 2267.
  12. Lu, W.; Chen, Z.; Li, L.; Cao, X.; Wei, J.; Xiong, N.; Li, J.; Dang, J. Watermarking Based on Compressive Sensing for Digital Speech Detection and Recovery. Sensors 2018, 18, 2390.
  13. Boukhechba, K.; Wu, H.; Bazine, R. DCT-Based Preprocessing Approach for ICA in Hyperspectral Data Analysis. Sensors 2018, 18, 1138.
  14. Xu, P.; Chen, B.; Xue, L.; Zhang, J.; Zhu, L. A Prediction-Based Spatial-Spectral Adaptive Hyperspectral Compressive Sensing Algorithm. Sensors 2018, 18, 3289.
  15. Agostini, L.; Silva, I.; Bampi, S. Pipelined fast 2D DCT architecture for JPEG image compression. In Proceedings of the Symposium on Integrated Circuits and Systems Design, Pirenopolis, Brazil, 10–15 September 2001; pp. 226–231.
  16. Dhandapani, V.; Seshasayanan, R. Area and power efficient DCT architecture for image compression. J. Adv. Signal Process. 2014, 2014, 1–9.
  17. Budagavi, M.; Fuldseth, A.; Bjøntegaard, G.; Sze, V.; Sadafale, M. Core Transform Design in the High Efficiency Video Coding (HEVC) Standard. IEEE J. Sel. Top. Signal Process. 2013, 7, 1029–1041.
  18. Yip, P.C.; Rao, K.R. High Efficiency Video Coding. ITU-T Rec. H.265 and ISO/IEC 23008-2 (HEVC). Standard, ITU-T and ISO/IEC, 2013.
  19. Meher, P.; Park, S.Y.; Mohanty, B.; Lim, K.; Yeo, C. Efficient integer DCT architectures for HEVC. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 168–178.
  20. Kalali, E.; Mert, A.C.; Hamzaoglu, I. A computation and energy reduction technique for HEVC Discrete Cosine Transform. IEEE Trans. Consum. Electron. 2016, 62, 166–174.
  21. Pastuszak, G. Hardware architectures for the H.265/HEVC discrete cosine transform. IET Image Process. 2015, 9, 468–477.
  22. Pourazad, M.T.; Doutre, C.; Azimi, M.; Nasiopoulos, P. HEVC: The New Gold Standard for Video Compression: How Does HEVC Compare with H.264/AVC? IEEE Consum. Electron. Mag. 2012, 1, 36–46.
  23. Zhou, M.; Jiang, B.; Li, T.; Zhong, W.; Gao, X. DCT-based channel estimation techniques for LTE uplink. In Proceedings of the 2009 IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications, Tokyo, Japan, 13–16 September 2009; pp. 1034–1038.
  24. Ali, M.; Islam, M.; Memon, M.; Asif, D.M.; Lin, F. Optimum DCT type-I based transceiver model and effective channel estimation for uplink NB-IoT system. Phys. Commun. 2021, 48, 101431.
  25. Domínguez-Jiménez, M.E.; Luengo, D.; Sansigre-Vidal, G. Estimation of Symmetric Channels for Discrete Cosine Transform Type-I Multicarrier Systems: A Compressed Sensing Approach. Sci. World J. 2015, 2015, 1–10.
  26. Domínguez-Jiménez, M.E.; Luengo, D.; Sansigre-Vidal, G.; Cruz-Roldán, F. A novel channel estimation scheme for multicarrier communications with the Type-I even discrete cosine transform. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), Kos Island, Greece, 28 August–2 September 2017; pp. 2239–2243.
  27. Domínguez-Jiménez, M.E.; Luengo, D.; Sansigre-Vidal, G.; Cruz-Roldán, F. A Novel Scheme of Multicarrier Modulation With the Discrete Cosine Transform. IEEE Trans. Wirel. Commun. 2021, 20, 7992–8005.
  28. Cariow, A.; Makowska, M.; Strzelec, P. Small-Size FDCT/IDCT Algorithms with Reduced Multiplicative Complexity. Radioelectron. Commun. Syst. 2019, 62, 559–576.
  29. Cariow, A.; Lesiecki, Ł. Small-Size Algorithms for Type-IV Discrete Cosine Transform with Reduced Multiplicative Complexity. Radioelectron. Commun. Syst. 2020, 63, 465–487.
  30. Cariow, A.; Papliński, J.; Majorkowska-Mech, D. Some Structures of Parallel VLSI-Oriented Processing Units for Implementation of Small Size Discrete Fractional Fourier Transforms. Electronics 2019, 8, 509.
  31. Britanak, V. New universal rotation-based fast computational structures for an efficient implementation of the DCT-IV/DST-IV and analysis/synthesis MDCT/MDST filter banks. Signal Process. 2009, 89, 2213–2232.
  32. Britanak, V. New Recursive Fast Radix-2 Algorithm for the Modulated Complex Lapped Transform. IEEE Trans. Signal Process. 2012, 60, 6703–6708.
  33. Britanak, V.; Rao, K.R. Two-dimensional DCT/DST universal computational structure for 2m × 2n block sizes. IEEE Trans. Signal Process. 2000, 48, 3250–3255.
  34. Cariow, A. Strategies for the Synthesis of Fast Algorithms for the Computation of the Matrix-vector Products. J. Signal Process. Theory Appl. 2014, 3, 1–19.
  35. Regalia, P.A.; Mitra, S.K. Kronecker Products, Unitary Matrices and Signal Processing Applications. SIAM Rev. 1989, 31, 586–613.
  36. Granata, J.; Conner, M.; Tolimieri, R. The tensor product: A mathematical programming language for FFTs and other fast DSP operations. IEEE Signal Process. Mag. 1992, 9, 40–48.
  37. Cariow, A.; Paplinski, J.P. Algorithmic Structures for Realizing Short-Length Circular Convolutions with Reduced Complexity. Electronics 2021, 10, 2800.
Figure 1. Signal flow graph of the algorithm for computing the 2-point DCT-I.
Figure 2. Signal flow graph of the algorithm for computing the 3-point DCT-I.
Figure 3. Signal flow graph of the algorithm for computing the 4-point DCT-I.
Figure 4. Signal flow graph of the algorithm for computing the 5-point DCT-I.
Figure 5. Signal flow graph of the algorithm for computing the 6-point DCT-I.
Figure 6. Signal flow graph of the algorithm for computing the 7-point DCT-I.
Figure 7. Signal flow graph of the algorithm for computing the 8-point DCT-I.
Table 1. Arithmetical complexities of the naïve implementation and the proposed solutions (numbers of arithmetic operations in DCT-I algorithms).

| Length N | Naïve implementation "×" | Naïve implementation "+" | Proposed solution "×" | Proposed solution "+" | Percentage estimate "×" | Percentage estimate "+" |
|---|---|---|---|---|---|---|
| 2 | 4 | 2 | 2 | 2 | 50% | 0% |
| 3 | 9 | 6 | 2 | 4 | 78% | 33% |
| 4 | 16 | 12 | 6 | 12 | 63% | 0% |
| 5 | 25 | 20 | 2 | 10 | 92% | 50% |
| 6 | 36 | 30 | 10 | 22 | 72% | 27% |
| 7 | 49 | 42 | 9 | 21 | 82% | 50% |
| 8 | 64 | 56 | 14 | 43 | 78% | 23% |
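The naïve baseline counted in Table 1 is a direct matrix-vector product with the N × N DCT-I kernel, which costs N² multiplications and N(N − 1) additions. As a minimal sketch only, assuming the unnormalized kernel C[k][n] = cos(πkn/(N − 1)) (scaling and endpoint-weighting conventions for the DCT-I vary between sources, and the function name `dct1_naive` is illustrative, not from the paper):

```python
import math

def dct1_naive(x):
    """Naive type-I DCT as a full matrix-vector product.

    Uses the unnormalized kernel C[k][n] = cos(pi*k*n/(N-1)).
    Each of the N outputs takes N multiplications and N-1 additions,
    giving the N^2 / N(N-1) operation counts of the naive column
    in Table 1; the paper's factorized algorithms reduce these.
    """
    N = len(x)
    return [
        sum(x[n] * math.cos(math.pi * k * n / (N - 1)) for n in range(N))
        for k in range(N)
    ]
```

For N = 2 the kernel degenerates to [[1, 1], [1, −1]], so the transform is just a sum and a difference, which is why the proposed 2-point solution in Table 1 needs no nontrivial multiplications at all.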