Conservative matrix fields are defined through a pair of matrices \(M_X, M_Y\) that satisfy the conservative property:
\(M_X(x,y) M_Y(x+1,y) = M_Y(x,y) M_X(x,y+1)\)
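The conservative property can be checked symbolically. Below is a minimal sketch using SymPy; the diagonal matrices are a deliberately trivial example (all products commute), chosen only to illustrate the check, and `is_conservative` is a hypothetical helper, not part of any existing library:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A trivial example: M_X depends only on x, M_Y only on y,
# and both are diagonal, so all the products commute.
M_X = sp.Matrix([[x + 1, 0], [0, x]])
M_Y = sp.Matrix([[y, 0], [0, y + 2]])

def is_conservative(MX, MY):
    """Check M_X(x,y) M_Y(x+1,y) == M_Y(x,y) M_X(x,y+1) symbolically."""
    lhs = MX * MY.subs(x, x + 1)
    rhs = MY * MX.subs(y, y + 1)
    return sp.simplify(lhs - rhs) == sp.zeros(2, 2)

print(is_conservative(M_X, M_Y))  # True for this commuting example
```

The same function can be used on any candidate pair of symbolic matrices, which is exactly the final verification step described later in this section.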
We have created such matrix fields either by generalizing an infinite family of polynomial continued fraction formulas, or by imposing the \( f, \bar{f} \) representation structure and using symbolic computation software (such as Mathematica or SymPy).
As the degree of a conservative matrix field grows, symbolic computation requires longer runtimes and more resources, making it impractical. Furthermore, any attempt to search for structures more general than the \( f, \bar{f} \) case increases the number of degrees of freedom in the problem, making it even more complex.
One approach we propose for coping with this complexity is to formulate the search for a conservative matrix field as an optimization problem. Optimization algorithms, like gradient descent, handle problems with many degrees of freedom, such as this one, exceptionally well.
First, we denote the degree of the conservative matrix field as \(d\) and define the matrices through the optimization parameters \( \theta \):
\(
M_X(x,y) = \begin{pmatrix}
\sum\limits_{i=0}^{d} \sum\limits_{j=0}^{d-i} \theta_{ij}^{x,1,1} x^i y^j &
\sum\limits_{i=0}^{d} \sum\limits_{j=0}^{d-i} \theta_{ij}^{x,1,2} x^i y^j \\
\sum\limits_{i=0}^{d} \sum\limits_{j=0}^{d-i} \theta_{ij}^{x,2,1} x^i y^j &
\sum\limits_{i=0}^{d} \sum\limits_{j=0}^{d-i} \theta_{ij}^{x,2,2} x^i y^j \\
\end{pmatrix}
\)
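This parameterization is straightforward to realize in code. The sketch below builds such a polynomial matrix with SymPy; the dictionary encoding of the coefficients \(\theta_{ij}\) and the helper names (`poly_entry`, `build_matrix`) are illustrative assumptions, not a fixed convention:

```python
import sympy as sp

x, y = sp.symbols('x y')

def poly_entry(theta, d):
    """Polynomial sum_{i=0}^{d} sum_{j=0}^{d-i} theta[(i,j)] x^i y^j,
    so the total degree of the entry is at most d."""
    return sum(theta[(i, j)] * x**i * y**j
               for i in range(d + 1) for j in range(d - i + 1))

def build_matrix(thetas, d):
    """Assemble a 2x2 matrix from four coefficient dictionaries,
    one per entry, in row-major order."""
    return sp.Matrix(2, 2, [poly_entry(t, d) for t in thetas])

# Example: degree d = 1 with arbitrary illustrative coefficients.
d = 1
t11 = {(0, 0): 1, (1, 0): 2, (0, 1): 3}
t12 = {(0, 0): 0, (1, 0): 1, (0, 1): 0}
t21 = {(0, 0): 1, (1, 0): 0, (0, 1): 1}
t22 = {(0, 0): 2, (1, 0): 1, (0, 1): 1}
M_X = build_matrix([t11, t12, t21, t22], d)
```

Here `t11` corresponds to \(\theta^{x,1,1}\) and so on; the \((1,1)\) entry of the example evaluates to \(1 + 2x + 3y\).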
Similarly, we define \(M_Y\) through \(\theta^y\). This notation ensures that the maximal degree of every element in these matrices is at most \(d\). Then, we use the conservative property to get the parameters \(\vec{\theta}\) through the following optimization problem:
\(
\vec{\theta}^* = \underset{\vec{\theta}, \lambda}{\mathrm{argmin}}
\left\Vert
M_X(x,y) M_Y(x+1,y) - M_Y(x,y) M_X(x,y+1)
\right\Vert _ \mathcal{F}
+ \lambda \left( \left| \vec{\theta} \right|^2 -1 \right)
\)
where \(\left\Vert \cdot \right\Vert _\mathcal{F} \) is the Frobenius norm, i.e. the square root of the sum of squares of the matrix's elements. The term \(\lambda \left( \left| \vec{\theta} \right|^2 -1 \right)\) ensures that the trivial solution \(\vec{\theta} = \vec{0}\) is not obtained. The resulting \(\vec{\theta}^*\) defines a new conservative matrix field.
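A minimal numerical sketch of this minimization is shown below. It is an illustration under stated assumptions, not the actual search code: the coefficient layout, the sample grid, and the use of plain finite-difference gradient descent are all choices made here for brevity, and the Lagrange-multiplier term of the objective is replaced by a simple quadratic penalty \(\lambda(|\vec{\theta}|^2-1)^2\) to keep the descent loop self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 1
# Monomials x^i y^j with i + j <= d; for d = 1 these are 1, y, x.
monomials = [(i, j) for i in range(d + 1) for j in range(d - i + 1)]

def eval_matrix(theta_m, x, y):
    """Evaluate one 2x2 polynomial matrix (coefficient tensor of
    shape (2, 2, len(monomials))) at the point (x, y)."""
    vals = np.array([float(x)**i * float(y)**j for (i, j) in monomials])
    return theta_m @ vals

def loss(theta, points, lam=1.0):
    """Sum of squared Frobenius residuals of the conservative property
    over sample points, plus a penalty keeping |theta| away from 0."""
    tx, ty = theta[0], theta[1]
    total = 0.0
    for (x, y) in points:
        lhs = eval_matrix(tx, x, y) @ eval_matrix(ty, x + 1, y)
        rhs = eval_matrix(ty, x, y) @ eval_matrix(tx, x, y + 1)
        total += np.sum((lhs - rhs) ** 2)
    return total + lam * (np.sum(theta ** 2) - 1.0) ** 2

def grad(theta, points, eps=1e-6):
    """Forward-difference gradient -- fine for this small parameter count."""
    g = np.zeros_like(theta)
    base = loss(theta, points)
    for idx in np.ndindex(*theta.shape):
        theta[idx] += eps
        g[idx] = (loss(theta, points) - base) / eps
        theta[idx] -= eps
    return g

# theta[0] parameterizes M_X, theta[1] parameterizes M_Y.
theta = 0.3 * rng.normal(size=(2, 2, 2, len(monomials)))
points = [(x, y) for x in range(1, 4) for y in range(1, 4)]
lr = 1e-3
initial = loss(theta, points)
for step in range(100):
    candidate = theta - lr * grad(theta, points)
    if loss(candidate, points) < loss(theta, points):
        theta = candidate
    else:
        lr *= 0.5  # backtrack when the step overshoots
final = loss(theta, points)
print(initial, '->', final)
```

In practice a library optimizer with automatic differentiation would replace the hand-rolled loop, but the structure of the objective is the same.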
Solving the optimization problem numerically gives rise to new challenges. Since we do not expect an optimization algorithm to reach the global minimum, the matrices defined using \(\vec{\theta}^*\) do not fully satisfy the conservative property. We can, however, exploit the fact that \(\vec{\theta}^*\) is close to a "true \(\vec{\theta}\)" that satisfies the conservative property exactly, and use \(\vec{\theta}^*\) to "guess" that true \(\vec{\theta}\).
In a conservative matrix field, the coefficients \(\vec{\theta}\) are rational numbers. We can therefore replace each approximated coefficient \(\vec{\theta}^*_i\) with a nearby rational value with a small denominator, which is the most likely true value. Finally, we can verify the proposed \(\vec{\theta}\) against the conservative property and prove that it defines a valid conservative matrix field.
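This rounding step is easy to sketch with Python's standard library: `Fraction.limit_denominator` returns the closest rational whose denominator does not exceed a given bound. The cutoff of 20 and the sample values below are illustrative assumptions:

```python
from fractions import Fraction

def rationalize(values, max_den=20):
    """Replace each approximate coefficient with the nearest rational
    whose denominator is at most max_den."""
    return [Fraction(v).limit_denominator(max_den) for v in values]

# E.g. an optimizer might return values close to 1/3, -2/7, and 1:
approx = [0.3333341, -0.2857138, 0.9999992]
print(rationalize(approx))  # [Fraction(1, 3), Fraction(-2, 7), Fraction(1, 1)]
```

Each candidate produced this way can then be substituted back into the conservative property for exact symbolic verification.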
Join the Ramanujan Machine team and develop such an algorithm! New conservative matrix fields of higher degree (\(\gt 3\)) will have a tremendous impact!