namespace Eigen {

/** \page TutorialReductionsVisitorsBroadcasting Tutorial page 7 - Reductions, visitors and broadcasting
    \ingroup Tutorial

\li \b Previous: \ref TutorialLinearAlgebra
\li \b Next: \ref TutorialGeometry

This tutorial explains Eigen's reductions, visitors and broadcasting and how they are used with
\link MatrixBase matrices \endlink and \link ArrayBase arrays \endlink.

\b Table \b of \b contents
  - \ref TutorialReductionsVisitorsBroadcastingReductions
  - \ref TutorialReductionsVisitorsBroadcastingReductionsNorm
  - \ref TutorialReductionsVisitorsBroadcastingReductionsBool
  - \ref TutorialReductionsVisitorsBroadcastingReductionsUserdefined
  - \ref TutorialReductionsVisitorsBroadcastingVisitors
  - \ref TutorialReductionsVisitorsBroadcastingPartialReductions
  - \ref TutorialReductionsVisitorsBroadcastingPartialReductionsCombined
  - \ref TutorialReductionsVisitorsBroadcastingBroadcasting
  - \ref TutorialReductionsVisitorsBroadcastingBroadcastingCombined

\section TutorialReductionsVisitorsBroadcastingReductions Reductions

In Eigen, a reduction is a function taking a matrix or array and returning a single
scalar value. One of the most used reductions is \link DenseBase::sum() .sum() \endlink,
returning the sum of all the coefficients inside a given matrix or array.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_basic.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_basic.out
</td></tr></table>

The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients and can equivalently be computed as <tt>a.diagonal().sum()</tt>.
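For instance, with a small matrix \c a (illustrative values, not one of the included examples), both forms give the same result:

\code
Eigen::Matrix2d a;
a << 1, 2,
     3, 4;
std::cout << a.trace() << std::endl;           // 5
std::cout << a.diagonal().sum() << std::endl;  // 5, same sum computed via the diagonal
\endcode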
\subsection TutorialReductionsVisitorsBroadcastingReductionsNorm Norm computations

The (Euclidean a.k.a. \f$\ell^2\f$) squared norm of a vector can be obtained with \link MatrixBase::squaredNorm() squaredNorm() \endlink. It is equal to the dot product of the vector with itself, and equivalently to the sum of squared absolute values of its coefficients.

Eigen also provides the \link MatrixBase::norm() norm() \endlink method, which returns the square root of \link MatrixBase::squaredNorm() squaredNorm() \endlink.

These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the \link MatrixBase::norm() norm() \endlink method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \f$\ell^2\f$ norm of a matrix because that can mean different things.

If you want other \f$\ell^p\f$ norms, use the \link MatrixBase::lpNorm() lpNorm<p>() \endlink method. The template parameter \a p can take the special value \a Infinity if you want the \f$\ell^\infty\f$ norm, which is the maximum of the absolute values of the coefficients.

The following example demonstrates these methods.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.out
</td></tr></table>
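As an additional sketch (with made-up values), the \f$\ell^1\f$ and \f$\ell^\infty\f$ norms of a small vector could be obtained as follows:

\code
Eigen::VectorXf v(3);
v << 1, -2, 3;
std::cout << v.lpNorm<1>() << std::endl;               // 6  (sum of absolute values)
std::cout << v.lpNorm<Eigen::Infinity>() << std::endl; // 3  (maximum absolute value)
\endcode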
\subsection TutorialReductionsVisitorsBroadcastingReductionsBool Boolean reductions

The following reductions operate on boolean values:
  - \link DenseBase::all() all() \endlink returns \b true if all of the coefficients in a given Matrix or Array evaluate to \b true .
  - \link DenseBase::any() any() \endlink returns \b true if at least one of the coefficients in a given Matrix or Array evaluates to \b true .
  - \link DenseBase::count() count() \endlink returns the number of coefficients in a given Matrix or Array that evaluate to \b true .

These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, <tt>array > 0</tt> is an %Array of the same size as \c array , with \b true at those positions where the corresponding coefficient of \c array is positive. Thus, <tt>(array > 0).all()</tt> tests whether all coefficients of \c array are positive. This can be seen in the following example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.out
</td></tr></table>
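A condensed sketch of the three reductions on a small array (illustrative values):

\code
Eigen::ArrayXXf a(2, 2);
a << 1, 2,
     3, 4;
std::cout << (a > 0).all() << std::endl;   // 1: every coefficient is positive
std::cout << (a > 2).any() << std::endl;   // 1: at least one coefficient exceeds 2
std::cout << (a > 2).count() << std::endl; // 2: the coefficients 3 and 4 exceed 2
\endcode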
\subsection TutorialReductionsVisitorsBroadcastingReductionsUserdefined User defined reductions

TODO

In the meantime you can have a look at the DenseBase::redux() function.
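As a rough sketch of what \c redux() looks like in use (here with a C++11 lambda as the binary functor; any functor taking and returning a pair of scalars works), the following reimplements \c maxCoeff():

\code
#include <algorithm>

Eigen::MatrixXf m(2, 2);
m << 1, 2,
     3, 4;
// redux() folds all coefficients with the given binary functor.
float maxVal = m.redux([](float a, float b) { return std::max(a, b); }); // 4
\endcode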
\section TutorialReductionsVisitorsBroadcastingVisitors Visitors

Visitors are useful when one wants to obtain the location of a coefficient inside
a Matrix or Array. The simplest examples are
\link MatrixBase::maxCoeff() maxCoeff(&x,&y) \endlink and
\link MatrixBase::minCoeff() minCoeff(&x,&y) \endlink, which can be used to find
the location of the greatest or smallest coefficient in a Matrix or
Array.

The arguments passed to a visitor are pointers to the variables where the
row and column position are to be stored. These variables should be of type
\link DenseBase::Index Index \endlink, as shown below:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_visitors.out
</td></tr></table>

Note that both functions also return the value of the minimum or maximum coefficient if needed,
as if it were a typical reduction operation.
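For instance (values made up for illustration), the returned value and the stored location can be used together:

\code
Eigen::MatrixXf m(2, 2);
m << 1, 2,
     3, 4;
Eigen::MatrixXf::Index row, col;
float maxVal = m.maxCoeff(&row, &col); // maxVal == 4, row == 1, col == 1
\endcode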
\section TutorialReductionsVisitorsBroadcastingPartialReductions Partial reductions

Partial reductions are reductions that can operate column- or row-wise on a Matrix or
Array, applying the reduction operation on each column or row and
returning a column or row vector with the corresponding values. Partial reductions are applied
with \link DenseBase::colwise() colwise() \endlink or \link DenseBase::rowwise() rowwise() \endlink.

A simple example is obtaining the maximum of the elements
in each column in a given matrix, storing the result in a row vector:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_colwise.out
</td></tr></table>

The same operation can be performed row-wise:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_rowwise.out
</td></tr></table>

<b>Note that column-wise operations return a row vector, while row-wise operations return a column vector.</b>
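A short sketch of both orientations on the same matrix (illustrative values, the same as in the combined example further below):

\code
Eigen::MatrixXf mat(2, 4);
mat << 1, 2, 6, 9,
       3, 1, 7, 2;
Eigen::RowVectorXf colMax = mat.colwise().maxCoeff(); // 1 x 4 row vector:    3 2 7 9
Eigen::VectorXf    rowMax = mat.rowwise().maxCoeff(); // 2 x 1 column vector: 9 7
\endcode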
\subsection TutorialReductionsVisitorsBroadcastingPartialReductionsCombined Combining partial reductions with other operations

It is also possible to use the result of a partial reduction to do further processing.
Here is another example that finds the column whose sum of elements is the maximum
within a matrix. With column-wise partial reductions this can be coded as:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_maxnorm.out
</td></tr></table>

The previous example applies the \link DenseBase::sum() sum() \endlink reduction on each column
through \link DenseBase::colwise() colwise() \endlink, obtaining a new matrix whose
size is 1x4.

Therefore, if
\f[
\mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\
                           3 & 1 & 7 & 2 \end{bmatrix}
\f]
then
\f[
\mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix}
\f]

The \link DenseBase::maxCoeff() maxCoeff() \endlink reduction is finally applied
to obtain the column index where the maximum sum is found,
which is the column index 2 (third column) in this case.
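Putting the pieces together, a minimal sketch of this pattern (the variable names are illustrative, not necessarily those of the included example):

\code
Eigen::MatrixXf m(2, 4);
m << 1, 2, 6, 9,
     3, 1, 7, 2;
Eigen::MatrixXf::Index maxIndex;
float maxColSum = m.colwise().sum().maxCoeff(&maxIndex); // maxColSum == 13, maxIndex == 2
\endcode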
\section TutorialReductionsVisitorsBroadcastingBroadcasting Broadcasting

The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting
constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in
one direction.

A simple example is to add a certain column-vector to each column in a matrix.
This can be accomplished with:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.out
</td></tr></table>

We can interpret the instruction <tt>mat.colwise() += v</tt> in two equivalent ways. It adds the vector \c v
to every column of the matrix. Alternatively, it can be interpreted as repeating the vector \c v four times to
form a two-by-four matrix which is then added to \c mat:
\f[
\begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix}
+ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}.
\f]
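For illustration only, the replication view can also be spelled out explicitly with \c replicate(); this is just a conceptually equivalent formulation, not how the broadcasting expression needs to be evaluated:

\code
Eigen::MatrixXf mat(2, 4), mat2;
Eigen::VectorXf v(2);
mat << 1, 2, 6, 9,
       3, 1, 7, 2;
v << 0, 1;
mat2 = mat + v.replicate(1, mat.cols()); // explicit replication of v into a 2 x 4 matrix
mat.colwise() += v;                      // broadcasting form; mat now equals mat2
\endcode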
The operators <tt>-=</tt>, <tt>+</tt> and <tt>-</tt> can also be used column-wise and row-wise. On arrays, we
can also use the operators <tt>*=</tt>, <tt>/=</tt>, <tt>*</tt> and <tt>/</tt> to perform coefficient-wise
multiplication and division column-wise or row-wise. These operators are not available on matrices because it
is not clear what they would do. If you want to multiply column 0 of a matrix \c mat with \c v(0), column 1 with
\c v(1), and so on, then use <tt>mat = mat * v.asDiagonal()</tt>.
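A quick sketch of the \c asDiagonal() alternative (illustrative values; here \c v must have as many coefficients as \c mat has columns):

\code
Eigen::MatrixXf mat(2, 4);
Eigen::VectorXf v(4);
mat << 1, 2, 6, 9,
       3, 1, 7, 2;
v << 1, 2, 3, 4;
mat = mat * v.asDiagonal(); // column j of mat is scaled by v(j):
                            // 1 4 18 36
                            // 3 2 21  8
\endcode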
It is important to point out that the vector to be added column-wise or row-wise must be of type Vector,
and cannot be a Matrix. If this is not met, then you will get a compile-time error. This also means that,
when operating with a Matrix, broadcasting can only be applied with an object of type Vector.
The same applies for the Array class, where the equivalent for VectorXf is ArrayXf. As always, you should
not mix arrays and matrices in the same expression.

To perform the same operation row-wise we can do:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.out
</td></tr></table>
\subsection TutorialReductionsVisitorsBroadcastingBroadcastingCombined Combining broadcasting with other operations

Broadcasting can also be combined with other operations, such as Matrix or Array operations,
reductions and partial reductions.

Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds
the nearest neighbour of a vector <tt>v</tt> within the columns of matrix <tt>m</tt>. The Euclidean distance will be used in this example,
computing the squared Euclidean distance with the partial reduction named \link MatrixBase::squaredNorm() squaredNorm() \endlink:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.out
</td></tr></table>

The line that does the job is
\code
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
\endcode

We will go step by step to understand what is happening:

  - <tt>m.colwise() - v</tt> is a broadcasting operation, subtracting <tt>v</tt> from each column in <tt>m</tt>. The result of this operation
is a new matrix whose size is the same as matrix <tt>m</tt>: \f[
  \mbox{m.colwise() - v} =
  \begin{bmatrix}
    -1 & 21 & 4 & 7 \\
     0 & 8  & 4 & -1
  \end{bmatrix}
\f]

  - <tt>(m.colwise() - v).colwise().squaredNorm()</tt> is a partial reduction, computing the squared norm column-wise. The result of
this operation is a row vector where each coefficient is the squared Euclidean distance between each column in <tt>m</tt> and <tt>v</tt>: \f[
  \mbox{(m.colwise() - v).colwise().squaredNorm()} =
  \begin{bmatrix}
    1 & 505 & 32 & 50
  \end{bmatrix}
\f]

  - Finally, <tt>minCoeff(&index)</tt> is used to obtain the index of the column in <tt>m</tt> that is closest to <tt>v</tt> in terms of Euclidean
distance.
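Once the index is known, the nearest column itself can be retrieved with <tt>m.col(index)</tt>; a short sketch, reusing the \c m, \c v and \c index of the example above:

\code
Eigen::VectorXf nearest = m.col(index); // the column of m closest to v (index == 0 here)
\endcode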
\li \b Next: \ref TutorialGeometry

*/

}