Analysis on generalized Clifford algebras


Abstract

In this article, we study the analysis related to generalized Clifford algebras $\mathcal{C}_n(\underline{a})$, where $\underline{a}$ is a non-zero vector. If $\{e_1,\dots,e_n\}$ is an orthonormal basis, the multiplication is defined by the relations
\[ \begin{align*}
e_j^2&=a_je_j-1,\\
e_ie_j+e_je_i&=a_ie_j+a_je_i\quad (i\neq j),
\end{align*} \]
where $a_j=\underline{a}\cdot e_j$. The case $\underline{a}=\underline{0}$ corresponds to the classical Clifford algebra. We define the Dirac operator as usual by $D=\sum_j e_j\partial_{x_j}$ and define regular functions as its null solutions. We first study the algebraic properties of the algebra. Then we prove the basic formulas for the Dirac operator and study the properties of regular functions.

1. Introduction

Clifford algebras are algebraic structures frequently encountered in both mathematics and its applications. In recent decades, one key application of the field has been the development of higher-dimensional analysis. This branch of mathematics is known as Clifford analysis. Since Clifford algebras have their roots in the complex numbers, complex analysis serves as a starting point and motivation for Clifford analysis.

Both in applications and, perhaps even more so, among mathematicians, there is an effort to generalize mathematical theories. Clifford analysis can be generalized in several ways, and each generalization gives a new perspective on the classical case. One way is to generalize the Clifford algebras themselves, and numerous articles have been published from this point of view. It would be futile to attempt to list them, given their large number.

Let us return to complex analysis. Isaak Moiseevitch Yaglom introduced the following generalization of complex numbers in [1]. His idea was that the imaginary unit $i$ satisfies the quadratic equation
\[ \begin{equation*}
x^2=px+q
\end{equation*} \]
for $p$, $q\in\mathbb{R}$. Different choices of the parameters $p$ and $q$ lead to different generalizations of complex numbers. From the point of view of complex analysis, it is natural to study functions taking values in these generalized complex numbers. For example, with some parameter choices the invertibility of elements is lost, which significantly affects the structure of the theory. In addition, the counterpart of the holomorphic functions naturally becomes different.

Like complex numbers, Clifford algebras are also based on a quadratic form. One way to generalize them is to modify the defining quadratic equation, as Yaglom did. Naturally, this is not quite as straightforward as in the case of complex numbers. This article follows the idea introduced by Teruo Kanzaki in his article [2]. Later, Jacques Helmstetter, Artibano Micali, and Philippe Revoy continued the study of generalized Clifford algebras in [3]. Kanzaki's idea, like Yaglom's, was to extend the quadratic equation by a term determined by a linear form; we will come back to this later. Still later, when modeling boundary value problems, Wolfgang Tutschke and Carmen Judith Vanegas defined generalized Clifford algebras in [4], without mentioning Kanzaki.

This article examines the generalization of Clifford analysis to the special case mentioned above; it should be regarded as a first step in this direction. In classical Clifford analysis, the interplay of vector variables and operators is central, which means that the theory can be developed very far without component representations. In the author's opinion, this is also a good requirement for a generalized Clifford analysis.

The structure of the article is as follows:

  • Section 2 recalls the construction of orthogonal Clifford algebras. The examination is limited to Euclidean spaces $\mathbb{R}^n$.
  • Section 3 defines generalized Clifford algebras as in [3]. After that, algebraic fundamental properties are studied.
  • Section 4 is algebraic and examines the differences related to powers of a vector variable.
  • Section 5 defines the Dirac operator and defines regular functions as its null solutions. The connection with the Laplace operator is studied.
  • Section 6 examines two simple cases as examples. The examples highlight the difference between the generalized and the classical case.
  • Section 7 discusses Cauchy's integral formula.
  • In Section 8, more regular functions are derived using the Cauchy kernel.

2. Praefatio necessaria: Clifford algebras over quadratic spaces

A universal Clifford algebra is an algebra associated with a quadratic space $(\mathbb{R}^n,Q)$, denoted by $\mathcal{C}\ell(\mathbb{R}^n,Q)$ or just $\mathcal{C}\ell(\mathbb{R}^n)$, which satisfies the condition
\[ \begin{equation*}
\underline{x}^2=Q(\underline{x})
\end{equation*} \]
for any $\underline{x}\in\mathbb{R}^n$. Moreover, its dimension is $2^n$. The quadratic form $Q$ is associated with the bilinear form
\[ \begin{equation*}
B(\underline{x},\underline{y})=\frac{1}{2}\big(Q(\underline{x}+\underline{y})-Q(\underline{x})-Q(\underline{y})\big).
\end{equation*} \]
With this, we obtain the product rule between the vectors
\[ \begin{equation*}
\underline{x}\underline{y}+\underline{y}\underline{x}=2B(\underline{x},\underline{y}).
\end{equation*} \]
In Clifford analysis, we usually choose 
\[ \begin{equation*}
Q(\underline{x})=-|\underline{x}|^2,
\end{equation*} \]
and then
\[ \begin{equation*}
B(\underline{x},\underline{y})=-\underline{x}\cdot\underline{y},
\end{equation*} \]
where $|\underline{x}|^2=x_1^2+\cdots+x_n^2$ and $\underline{x}\cdot\underline{y}=x_1y_1+\cdots+x_ny_n$. The corresponding Clifford algebra is denoted by $\mathbb{R}_{0,n}$. By defining an orthonormal basis $\{e_1, \dots ,e_n\}$, we get
\[ \begin{align*}
    e_j^2=-1,& & \text{ for } j=1, \dots ,n,\\
    e_ie_j+e_je_i=0,& & \text{ for } i,j=1, \dots ,n \text{ and }i\neq j.
\end{align*} \]
A complete presentation of the algebraic theory of Clifford algebras can be found, for example, in [5–7].

3. Generalized Clifford algebras

Consider $\mathbb{R}^n$ with a quadratic form $Q:\mathbb{R}^n\to\mathbb{R}$. Let $B:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ be its associated bilinear form and $P:\mathbb{R}^n\to\mathbb{R}$ a linear form. In this case, $\mathbb{R}^n$ is called a generalized quadratic space. Generalized Clifford algebras or Clifford–Kanzaki algebras are generated by the relation
\[ \begin{equation*}
\underline{x}^2=P(\underline{x})\underline{x}+Q(\underline{x})
\end{equation*} \]
for $\underline{x}\in\mathbb{R}^n$. This gives the product rule
\[ \begin{equation*}
\underline{x}\underline{y}+\underline{y}\underline{x}=P(\underline{x})\underline{y}+P(\underline{y})\underline{x}+2B(\underline{x},\underline{y}),
\end{equation*} \]
where $\underline{x},\underline{y}\in\mathbb{R}^n$. The Riesz representation theorem states that a linear form $P$ admits a unique representation by the Euclidean inner product in the form
\[ \begin{equation*}
P(\underline{x})=\underline{a}\cdot\underline{x}
\end{equation*} \]
for some $\underline{a}\in\mathbb{R}^n$. A canonical choice for a quadratic form is $Q(\underline{x})=-|\underline{x}|^2$. The generalized Clifford algebra generated by 
\[ \begin{equation}
  \underline{x}^2=(\underline{a}\cdot\underline{x})\underline{x}-|\underline{x}|^2  
\end{equation} \tag{1} \]
for some $\underline{a}\in\mathbb{R}^n$ is denoted by $\mathcal{C}_n(\underline{a})$. Let $\{e_1, \dots ,e_n\}$ be an orthonormal basis in $\mathbb{R}^n$ and $a_j=\underline{a}\cdot e_j$. Then the multiplication rules are
\[ \begin{equation*}
    e_j^2=a_je_j-1,
\end{equation*} \]
\[ \begin{equation}
    e_ie_j+e_je_i=a_ie_j+a_je_i,
\end{equation} \tag{2} \]
where $i, j=1, \dots ,n$ and $i\neq j$. Defining paravectors $\varepsilon_j=e_j-a_j$, the multiplication rules take the form
\[ \begin{equation}
    e_j\varepsilon_j=\varepsilon_je_j=-1,
\end{equation} \tag{3} \]
\[ \begin{equation*}
    \varepsilon_ie_j+\varepsilon_je_i=0,
\end{equation*} \]
\[ \begin{equation}
    e_i\varepsilon_j+e_j\varepsilon_i=0,
\end{equation} \tag{4} \]
where $i\neq j$ in the last two relations.
We define an algebra endomorphism $\widetilde{\ }:e_j\mapsto\varepsilon_j$. Since $\widetilde{\widetilde{e}}_j=e_j-2a_j$, we observe that it is not an involution. 
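
For concreteness, these relations are straightforward to implement. The following is a minimal computational sketch (not part of the original text; all function names are our own): an element of $\mathcal{C}_n(\underline{a})$ is stored as a dictionary mapping increasing index tuples (basis blades) to coefficients, and products are normalized with the rewriting rules $e_j^2=a_je_j-1$ and $e_ie_j=-e_je_i+a_ie_j+a_je_i$ for $i>j$, which follow from (2).

```python
from collections import defaultdict

def blade_times_gen(blade, k, a):
    """Multiply a basis blade e_{i1}...e_{im} (i1 < ... < im) from the right
    by e_k, normalizing with e_j^2 = a_j e_j - 1 and, for i > k,
    e_i e_k = -e_k e_i + a_i e_k + a_k e_i. Returns {blade: coefficient}."""
    if not blade or blade[-1] < k:
        return {blade + (k,): 1}
    rest, i = blade[:-1], blade[-1]
    out = defaultdict(int)
    if i == k:
        # rest * (e_i e_i) = rest * (a_i e_i - 1)
        for b, c in blade_times_gen(rest, i, a).items():
            out[b] += a[i] * c
        out[rest] -= 1
    else:
        # i > k:  rest * (e_i e_k) = -(rest e_k) e_i + a_i (rest e_k) + a_k (rest e_i)
        tmp = blade_times_gen(rest, k, a)
        for b, c in tmp.items():
            for b2, c2 in blade_times_gen(b, i, a).items():
                out[b2] -= c * c2
            out[b] += a[i] * c
        out[blade] += a[k]
    return {b: c for b, c in out.items() if c != 0}

def gen_product(x, y, a):
    """Product of x and y in C_n(a); both are dicts {blade: coefficient}."""
    out = defaultdict(int)
    for bx, cx in x.items():
        for by, cy in y.items():
            cur = {bx: cx * cy}
            for k in by:
                nxt = defaultdict(int)
                for b, c in cur.items():
                    for b2, c2 in blade_times_gen(b, k, a).items():
                        nxt[b2] += c * c2
                cur = nxt
            for b, c in cur.items():
                out[b] += c
    return {b: c for b, c in out.items() if c != 0}

a = {1: 2, 2: -1, 3: 0}            # ad hoc example: a = 2e1 - e2 in R^3
x = {(1,): 3, (2,): 1, (3,): 2}    # x = 3e1 + e2 + 2e3
print(gen_product(x, x, a))        # x^2 = -14 + 15e1 + 5e2 + 10e3
```

The last line reproduces the defining relation (1): here $|\underline{x}|^2=14$, $\underline{a}\cdot\underline{x}=5$, and indeed $\underline{x}^2=-|\underline{x}|^2+(\underline{a}\cdot\underline{x})\underline{x}$. The choice $\underline{a}=\underline{0}$ recovers the multiplication of $\mathbb{R}_{0,n}$.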

Proposition 3.1. If $\underline{x}\in\mathbb{R}^n$, then
\[ \begin{equation*}
\widetilde{\underline{x}}=\underline{x}-\underline{a}\cdot \underline{x}
\end{equation*} \]
and
\[ \begin{equation*}
\underline{x}\;\widetilde{\underline{x}}=\widetilde{\underline{x}}\;\underline{x}=-|\underline{x}|^2.
\end{equation*} \]

Proof. If 
\[ \begin{equation*}
\underline{x}=\sum_{j=1}^nx_je_j,
\end{equation*} \]
then
\[ \begin{equation*}
    \widetilde{\underline{x}}=\sum_{j=1}^nx_j\varepsilon_j=\sum_{j=1}^n e_jx_j-\sum_{j=1}^na_jx_j=\underline{x}-\underline{a}\cdot \underline{x}.
\end{equation*} \]
From (1), we obtain $\underline{x}(\underline{x}-(\underline{a}\cdot\underline{x}))=(\underline{x}-(\underline{a}\cdot\underline{x}))\underline{x}=-|\underline{x}|^2$. $\square$

Corollary 3.1. If $\underline{x}\neq 0$, then
\[ \begin{equation*}
\underline{x}^{-1}=-\frac{\widetilde{\underline{x}}}{|\underline{x}|^2}.
\end{equation*} \]

Proposition 3.2. Let $x=x_0+\underline{x}$ be a paravector. If $x_0^2+x_0(\underline{a}\cdot \underline{x})+|\underline{x}|^2\neq 0$, then
\[ \begin{equation*}
    x^{-1}=\frac{x_0-\underline{x}+\underline{a}\cdot \underline{x}}{x_0^2+x_0(\underline{a}\cdot \underline{x})+|\underline{x}|^2}.
\end{equation*} \]

Proof. We calculate
\[ \begin{align*}
    x(x_0-\widetilde{\underline{x}})&=(x_0+\underline{x})(x_0-\widetilde{\underline{x}})
    =x_0^2-x_0\widetilde{\underline{x}}+x_0\underline{x}-\underline{x}\widetilde{\underline{x}}=\\
    &=x_0^2-x_0(\underline{x}-\underline{a}\cdot \underline{x})+x_0\underline{x}-\underline{x}\widetilde{\underline{x}}
    =x_0^2+x_0(\underline{a}\cdot \underline{x})+|\underline{x}|^2.
\end{align*} \]
The product in the reverse order is computed similarly. $\square$
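
Continuing the sketch above (gen_product as defined there), the inverse formulas of Corollary 3.1 and Proposition 3.2 can be checked with exact rational arithmetic; the numerical data below is an ad hoc choice.

```python
from fractions import Fraction as Fr

a  = {1: 1, 2: 2, 3: 0}
x  = {(1,): 1, (2,): -2, (3,): 3}                      # x = e1 - 2e2 + 3e3
ax = sum(a[j] * x[(j,)] for j in (1, 2, 3))            # a.x = -3
n2 = sum(x[(j,)] ** 2 for j in (1, 2, 3))              # |x|^2 = 14
# Corollary 3.1: x^{-1} = -tilde(x)/|x|^2 = (a.x - x)/|x|^2
xinv = {(): Fr(ax, n2), **{(j,): Fr(-x[(j,)], n2) for j in (1, 2, 3)}}
print(gen_product(x, xinv, a))                          # {(): Fraction(1, 1)}
# Proposition 3.2: inverse of the paravector p = x0 + x
x0  = 5
den = x0**2 + x0 * ax + n2                              # 24, non-zero
pinv = {(): Fr(x0 + ax, den), **{(j,): Fr(-x[(j,)], den) for j in (1, 2, 3)}}
print(gen_product({(): x0, **x}, pinv, a))              # {(): Fraction(1, 1)}
```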

If $\underline{a}\neq \underline{0}$, the generalized Clifford algebra $\mathcal{C}_n(\underline{a})$ does not admit a direct sum decomposition into subspaces of multivectors. We denote $\mathcal{C}_n^{(0)}(\underline{a})=\mathbb{R}$ and $\mathcal{C}_n^{(1)}(\underline{a})=\mathbb{R}^n$. Consider the subspace 
\[ \begin{equation*}
\mathcal{C}_n^{(2)}(\underline{a})=\text{Span}\{e_ie_j: i,j=1, \dots ,n\ \text{ and } i\neq j\}. 
\end{equation*} \]
Multiplication rule (2) shows that, in addition to bivectors, this subspace contains vectors. 
We can make this explicit by defining
\[ \begin{equation*}
\vec{\mathcal{C}}_n^{(2)}(\underline{a})=\text{Span}\{e_ie_j:  i< j\}
\end{equation*} \]
and then
\[ \begin{equation*}
\mathcal{C}_n^{(2)}(\underline{a})=\vec{\mathcal{C}}_n^{(2)}(\underline{a})\oplus \mathbb{R}^n. 
\end{equation*} \]
Indeed, if $B\in \mathcal{C}_n^{(2)}(\underline{a})$, using (2) we obtain the representation
\[ \begin{equation*}
    B=\sum_{i\neq j} b_{ij}e_ie_j=\sum_{i<j} (b_{ij}-b_{ji})e_ie_j+\sum_{i<j} b_{ji}(a_je_i+a_ie_j).
\end{equation*} \]
Similarly, for any $k=2, \dots ,n$, we can represent
\[ \begin{equation*}
\mathcal{C}_n^{(k)}(\underline{a})=\vec{\mathcal{C}}_n^{(k)}(\underline{a})\oplus\cdots\oplus\vec{\mathcal{C}}_n^{(2)}(\underline{a})\oplus \mathbb{R}^n,
\end{equation*} \]
where $\mathcal{C}_n^{(k)}(\underline{a})$ is spanned by all products of $k$ basis vectors and $\vec{\mathcal{C}}_n^{(j)}(\underline{a})$ is spanned by all products of $j$ distinct basis vectors with indices in increasing order. 

Another observation is that a vector $\underline{a}\neq \underline{0}$ can be used to split the space as
\[ \begin{equation*}
\mathbb{R}^n=V(\underline{a})\oplus \text{Span}\{\underline{a}\},
\end{equation*} \]
where
\[ \begin{equation*}
V(\underline{a})=\text{Span}\{\underline{a}\}^{\perp}=\{\underline{x}\in\mathbb{R}^n : \underline{a}\cdot \underline{x}=0\}.
\end{equation*} \]
If $\underline{x}\in V(\underline{a}) $, then $\widetilde{\underline{x}}=\underline{x}$ and $\underline{x}^2=-|\underline{x}|^2$. We have
\[ \begin{equation*}
\mathcal{C}\ell(V(\underline{a}))\cong \mathbb{R}_{0,n-1}.
\end{equation*} \]

4. Powers of vectors

Let us look at the algebraic differences a bit more. In a Clifford algebra, the powers $\underline{x}^k$, $k\in\mathbb{N}$, are easily calculated, and they are always either scalars or vectors. In the generalized case, the situation is very different.

From the definition of multiplication, we have
\[ \begin{equation*}
    \underline{x}^2=-|\underline{x}|^2+(\underline{a}\cdot\underline{x})\underline{x}.
\end{equation*} \]

Proposition 4.1. Let $A$, $B\in\mathbb{R}$ and $\underline{x}\in\mathbb{R}^n$. Then
\[ \begin{equation*}
(A+B\underline{x})\underline{x}=-B|\underline{x}|^2+\big(A+B(\underline{a}\cdot \underline{x})\big)\underline{x},
\end{equation*} \]
that is, the powers $\underline{x}^k$ are in general proper paravectors, i.e., they have non-zero scalar and vector parts.

Proof. We calculate
\[ \begin{align*}
    (A+B\underline{x})\underline{x}&=A\underline{x}+B\underline{x}^2
    =A\underline{x}+B(-|\underline{x}|^2+(\underline{a}\cdot \underline{x})\underline{x})=\\
    &=A\underline{x}-B|\underline{x}|^2+B(\underline{a}\cdot \underline{x})\underline{x}
    =-B|\underline{x}|^2+\big(A+B(\underline{a}\cdot \underline{x})\big)\underline{x}.
\end{align*} \] $\square$

We get the following recursive representation for the powers.

Proposition 4.2. If $\underline{x}\in\mathbb{R}^n$, then
\[ \begin{equation*}
\underline{x}^k=P_k(\underline{x})+Q_k(\underline{x})\underline{x},
\end{equation*} \]
where
\[ \begin{align*}
    P_j(\underline{x})&=-Q_{j-1}(\underline{x})|\underline{x}|^2, \\
    Q_j(\underline{x})&=P_{j-1}(\underline{x})+Q_{j-1}(\underline{x})(\underline{a}\cdot \underline{x}),
\end{align*} \]
starting from $P_1(\underline{x})=0$ and $Q_1(\underline{x})=1$.

Proof. The first step is 
\[ \begin{align*}
    P_2(\underline{x})&=-Q_{1}(\underline{x})|\underline{x}|^2=-|\underline{x}|^2, \\
    Q_2(\underline{x})&=P_{1}(\underline{x})+Q_{1}(\underline{x})(\underline{a}\cdot \underline{x})=\underline{a}\cdot \underline{x},
\end{align*} \]
and we obtain
\[ \begin{align*}
    \underline{x}^2=P_2(\underline{x})+Q_2(\underline{x})\underline{x}=-|\underline{x}|^2+(\underline{a}\cdot \underline{x})\underline{x}.
\end{align*} \]
Assume
\[ \begin{equation*}
\underline{x}^k=P_k(\underline{x})+Q_k(\underline{x})\underline{x}.
\end{equation*} \]
Using the preceding proposition, we calculate
\[ \begin{align*}
  \underline{x}^{k+1}&=(P_k(\underline{x}) +Q_k(\underline{x})\underline{x})\underline{x}=\\
  &=-Q_k(\underline{x})|\underline{x}|^2+\big(P_k(\underline{x})+Q_k(\underline{x})(\underline{a}\cdot \underline{x})\big)\underline{x},
\end{align*} \]
that is
\[ \begin{align*}
    P_{k+1}(\underline{x})&=-Q_k(\underline{x})|\underline{x}|^2,\\
    Q_{k+1}(\underline{x})&=P_k(\underline{x})+Q_k(\underline{x})(\underline{a}\cdot \underline{x}).
\end{align*} \] $\square$

We observe that the homogeneous polynomials $P_k$ and $Q_k$ are generated by $|\underline{x}|^2$ and $\underline{a}\cdot \underline{x}$. For example,
\[ \begin{align*}
    P_2(\underline{x})&=-|\underline{x}|^2, \\
    P_3(\underline{x})&=-(\underline{a}\cdot \underline{x})|\underline{x}|^2, \\
    P_4(\underline{x})&=|\underline{x}|^4-(\underline{a}\cdot \underline{x})^2|\underline{x}|^2, \\
    Q_2(\underline{x})&=\underline{a}\cdot \underline{x}, \\
    Q_3(\underline{x})&=-|\underline{x}|^2+(\underline{a}\cdot \underline{x})^2, \\
    Q_4(\underline{x})&=-2(\underline{a}\cdot \underline{x})|\underline{x}|^2+(\underline{a}\cdot \underline{x})^3. \\
\end{align*} \]
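
As an illustration (a small sketch, not part of the original text), the recursion is easy to run in a computer algebra system; writing $t=\underline{a}\cdot\underline{x}$ and $r_2=|\underline{x}|^2$, it reproduces the polynomials listed above.

```python
import sympy as sp

t, r2 = sp.symbols('t r2')      # t = a.x, r2 = |x|^2
P = {1: sp.Integer(0)}
Q = {1: sp.Integer(1)}
for k in range(2, 5):
    P[k] = sp.expand(-Q[k-1] * r2)          # P_k = -Q_{k-1} |x|^2
    Q[k] = sp.expand(P[k-1] + Q[k-1] * t)   # Q_k = P_{k-1} + Q_{k-1} (a.x)
print(P[4])   # |x|^4 - (a.x)^2 |x|^2
print(Q[4])   # (a.x)^3 - 2 (a.x) |x|^2
```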

5. Dirac operators and regular functions

We define the Dirac operator by
\[ \begin{equation*}
D=\sum_{j=1}^n e_j\partial_{x_j}.
\end{equation*} \]
Let $\Omega\subset\mathbb{R}^n$ be an open subset and $f:\Omega\to\mathcal{C}_n(\underline{a})$ a differentiable function. If $Df=0$ in $\Omega$, the function $f$ is called (left) regular; if $fD=0$, it is called right regular. We define
\[ \begin{equation*}
 \widetilde{D}=\sum_{j=1}^n \varepsilon_j\partial_{x_j}=D- \underline{a}\cdot D,
\end{equation*} \]
where $\underline{a}\cdot D$ is the directional derivative along $\underline{a}$.
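
Continuing the computational sketch from Section 3 (gen_product as defined there; the helper apply_D below is our own construction), the operators $D$ and $\widetilde{D}$ can be applied symbolically to functions stored as blade-coefficient dictionaries.

```python
import sympy as sp

def apply_D(f, xs, a, tilde=False):
    """Apply D (or D~ when tilde=True) from the left to a function f stored
    as {blade: sympy expression} over coordinates xs = (x_1, ..., x_n).
    Reuses gen_product from the Section 3 sketch."""
    out = {}
    for i, xi in enumerate(xs, start=1):
        ei = {(i,): sp.Integer(1)}
        if tilde:
            ei[()] = -a[i]                          # eps_i = e_i - a_i
        df = {b: sp.diff(c, xi) for b, c in f.items()}
        for b, c in gen_product(ei, df, a).items():
            out[b] = sp.expand(out.get(b, 0) + c)
    return {b: c for b, c in out.items() if c != 0}

xs = sp.symbols('x1:4')                              # n = 3
a  = dict(enumerate(sp.symbols('a1:4'), start=1))    # a generic vector a
x  = {(j,): xs[j - 1] for j in (1, 2, 3)}            # the vector variable x
print(apply_D(x, xs, a))   # scalar part -3, vector part a1 e1 + a2 e2 + a3 e3
```

The output is $-n+\underline{a}$, in accordance with Proposition 5.1 below.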

Remark 5.1 (Monogenic functions). If $\underline{a}=\underline{0}$, we consider functions $f:\Omega\to\mathbb{R}_{0,n}$. This is the Clifford analysis case. Then the solutions of $Df=0$ (or $fD=0$) are called left (or right) monogenic functions.

Proposition 5.1. If $\underline{x}\in\mathbb{R}^n$, then
\[ \begin{equation}
    D\widetilde{\underline{x}}=\widetilde{D}\underline{x}=-n,
\end{equation} \tag{5} \]
\[ \begin{equation*}
    D\underline{x}=-n+\underline{a},
\end{equation*} \]
and if $\underline{x}\neq \underline{0}$, then
\[ \begin{equation*}
    D\underline{x}^{-1}=\frac{n-2}{|\underline{x}|^2}.
\end{equation*} \]

Proof. Using (3), we calculate
\[ \begin{equation*}
    D\widetilde{\underline{x}}=\sum_{i,j=1}^n e_i\varepsilon_j \partial_{x_i}x_j=\sum_{j=1}^n e_j\varepsilon_j=-n.
\end{equation*} \]
Similarly, we have $\widetilde{D}\underline{x}=-n$. Since $\underline{x}=\widetilde{\underline{x}}+\underline{a}\cdot \underline{x}$, we have
\[ \begin{equation*}
    D\underline{x}=D\widetilde{\underline{x}}+D(\underline{a}\cdot \underline{x})=-n+\underline{a}.
\end{equation*} \]
If $\underline{x}\neq \underline{0}$, then we have
\[ \begin{equation*}
    D\underline{x}^{-1}=-D\frac{\widetilde{\underline{x}}}{|\underline{x}|^2}
    =-\frac{D\widetilde{\underline{x}}}{|\underline{x}|^2}-D\frac{1}{|\underline{x}|^2}\widetilde{\underline{x}}
    =\frac{n}{|\underline{x}|^2}+2\frac{\underline{x}\widetilde{\underline{x}}}{|\underline{x}|^4}
    =\frac{n-2}{|\underline{x}|^2}.
\end{equation*} \] $\square$

We call the constant $-n+\underline{a}$ the abstract dimension of the generalized quadratic space $\mathbb{R}^n$. 

Proposition 5.2. $D\underline{x}^2=(-n+\underline{a})(\underline{a}\cdot\underline{x})+(-2+\underline{a})\underline{x}$.

Proof. Since $\underline{x}^2=(\underline{a}\cdot\underline{x})\underline{x}-|\underline{x}|^2$ and $D\underline{x}=-n+\underline{a}$, we calculate
\[ \begin{equation*}
    D\underline{x}^2=-D|\underline{x}|^2+\big(D(\underline{a}\cdot\underline{x})\big)\underline{x}+(\underline{a}\cdot\underline{x})D\underline{x}
    =-2\underline{x}+\underline{a}\underline{x}+(\underline{a}\cdot\underline{x})(-n+\underline{a})
    =(-2+\underline{a})\underline{x}+(\underline{a}\cdot\underline{x})(-n+\underline{a}).
\end{equation*} \] $\square$
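
Continuing the sketch, Proposition 5.2 can be verified symbolically for $n=3$ with a generic $\underline{a}$; both sides are assembled with gen_product and compared blade by blade.

```python
# Symbolic check of Proposition 5.2 for n = 3.
ax   = sum(a[j] * xs[j - 1] for j in (1, 2, 3))            # a.x
avec = {(j,): a[j] for j in (1, 2, 3)}                      # the vector a
lhs  = apply_D(gen_product(x, x, a), xs, a)                 # D x^2
t1   = gen_product({(): -3, **avec}, {(): ax}, a)           # (-n + a)(a.x)
t2   = gen_product({(): -2, **avec}, x, a)                  # (-2 + a) x
rhs  = {}
for term in (t1, t2):
    for b, c in term.items():
        rhs[b] = sp.expand(rhs.get(b, 0) + c)
print(all(sp.expand(lhs.get(b, 0) - rhs.get(b, 0)) == 0
          for b in set(lhs) | set(rhs)))                    # True
```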

Recall that the Euler operator is defined by
\[ \begin{equation*}
E=\sum_{j=1}^nx_j\partial_{x_j}.
\end{equation*} \]
Then we can prove the following product rule for the Dirac operator.

Proposition 5.3. If $f$ is a differentiable function taking values in $\mathcal{C}_n(\underline{a})$, then
\[ \begin{align*}
    D(\underline{x}f)&=(-n+\underline{a})f-\widetilde{\underline{x}}Df-2Ef+\underline{x}(\underline{a}\cdot D)f, \\
    \widetilde{D}(\underline{x}f)&=-nf-\widetilde{\underline{x}}Df-2Ef , \\
    D(\widetilde{\underline{x}}f)&=-nf-\underline{x}\widetilde{D}f-2Ef.
\end{align*} \]

Proof. We calculate
\[ \begin{equation*}
    D(\widetilde{\underline{x}}f)=(D\widetilde{\underline{x}})f+\sum_{i,j=1}^n e_i\varepsilon_jx_j\partial_{x_i}f.
\end{equation*} \]
Using (3), (4) and (5), we obtain
\[ \begin{equation*}
D(\widetilde{\underline{x}}f)=-nf-\sum_{i,j=1}^n e_j\varepsilon_ix_j\partial_{x_i}f+2\sum_{j=1}^n e_j\varepsilon_jx_j\partial_{x_j}f =-nf-\underline{x}\widetilde{D}f-2Ef.
\end{equation*} \]
Since $\underline{x}=\widetilde{\underline{x}}+\underline{a}\cdot \underline{x}$, we have
\[ \begin{equation*}
    D(\underline{x}f)=D((\widetilde{\underline{x}}+\underline{a}\cdot \underline{x})f)
    =D(\widetilde{\underline{x}}f)+D((\underline{a}\cdot \underline{x})f)
    =(-n+\underline{a})f-\underline{x}\widetilde{D}f-2Ef+(\underline{a}\cdot \underline{x})Df
    =(-n+\underline{a})f-\widetilde{\underline{x}}Df-2Ef+\underline{x}(\underline{a}\cdot D)f.
\end{equation*} \]
Moreover,
\[ \begin{equation*}
    \widetilde{D}(\underline{x}f)=D(\underline{x}f)- \underline{a}f - \underline{x}(\underline{a}\cdot D)f=-nf-\widetilde{\underline{x}}Df-2Ef.
\end{equation*} \] $\square$

Using the preceding operators, we can factorize the Laplacian
\[ \begin{equation*}
\Delta=\sum_{j=1}^n\partial_{x_j}^2
\end{equation*} \]
as usual.

Proposition 5.4. $D\widetilde{D}=\widetilde{D}D=-\Delta$.

Proof. Let $f$ be a twice differentiable function. We calculate
\[ \begin{align*}
    D\widetilde{D}f&=\sum_{i,j=1}^n e_i\varepsilon_j\partial_{x_i}\partial_{x_j}f
    =\sum_{i<j} e_i\varepsilon_j\partial_{x_i}\partial_{x_j}f+\sum_{j=1}^n e_j\varepsilon_j\partial_{x_j}^2f+\sum_{i>j} e_i\varepsilon_j\partial_{x_i}\partial_{x_j}f=\\
    &=\sum_{i<j} e_i\varepsilon_j\partial_{x_i}\partial_{x_j}f-\sum_{j=1}^n \partial_{x_j}^2f+\sum_{i<j} e_j\varepsilon_i\partial_{x_i}\partial_{x_j}f
    =\sum_{i<j} (e_i\varepsilon_j+e_j\varepsilon_i)\partial_{x_i}\partial_{x_j}f-\Delta f=-\Delta f,
\end{align*} \]
where we use (3) and (4). Similarly, we calculate $\widetilde{D}D=-\Delta$. $\square$
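
Continuing the sketch, the factorization of Proposition 5.4 can be tested on a sample scalar-valued polynomial (for a $\mathcal{C}_n(\underline{a})$-valued function the computation applies componentwise).

```python
# Check D(D~ f) = -Laplace(f) for an ad hoc scalar polynomial f.
f   = {(): xs[0]**3 * xs[1] + xs[1] * xs[2]**2}
lhs = apply_D(apply_D(f, xs, a, tilde=True), xs, a)
lap = sum(sp.diff(f[()], xi, 2) for xi in xs)
print(lhs)                     # {(): -6*x1*x2 - 2*x2}
print({(): sp.expand(-lap)})   # the same dictionary, i.e. -Laplace(f)
```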

This property allows us to prove the following classical results.

Proposition 5.5. If $f:\Omega\to\mathcal{C}_n(\underline{a})$ is regular, its component functions are harmonic.

Proposition 5.6. If $f:\Omega\to\mathcal{C}_n(\underline{a})$ is harmonic, then
\[ \begin{equation*}
Df- (\underline{a}\cdot D)f
\end{equation*} \]
is regular in $\Omega$.

From Proposition 5.3, we obtain the following results.

Proposition 5.7. If $f:\Omega\to\mathcal{C}_n(\underline{a})$ is regular, then

  1. $\widetilde{D}(\underline{x}f)=-nf-2Ef$,
  2. $\Delta(\underline{x}f)=0$, that is, $\underline{x}f$ is harmonic.

Proposition 5.8. If $f:\Omega\to\mathcal{C}_n(\underline{a})$ satisfies $\widetilde{D}f=0$, then

  1. $D(\widetilde{\underline{x}}f)=-nf-2Ef$,
  2. $\Delta(\widetilde{\underline{x}}f)=0$, that is, $\widetilde{\underline{x}}f$ is harmonic.

6. Vector and paravector-valued solutions

Let us look at two examples in this section. The examples illustrate the role of the vector $\underline{a}$ among the regular functions.

Proposition 6.1. Consider a vector valued differentiable function
\[ \begin{equation*}
f(\underline{x})=\sum_{j=1}^n e_j f_j(\underline{x}).
\end{equation*} \]
Then
\[ \begin{equation*}
    Df=\sum_{i<j} e_i e_j(\partial_{x_i}f_j-\partial_{x_j}f_i)+(\underline{a}\cdot D)f-D\cdot f.
\end{equation*} \]
Hence, $f$ is regular if and only if
\[ \begin{equation*}
    \partial_{x_i}f_j=\partial_{x_j}f_i,\quad (\underline{a}\cdot D)f=0,\quad D\cdot f=0.
\end{equation*} \]

Proof. We substitute $e_j=\varepsilon_j+a_j$ and we have
\[ \begin{equation*}
f=\sum_{j=1}^n (\varepsilon_j+a_j) f_j=\sum_{j=1}^n 
\varepsilon_jf_j(\underline{x})+\underline{a}\cdot f=\widetilde{f}+\underline{a}\cdot f.
\end{equation*} \]
Hence, using (4),
\[ \begin{align*}
    D\widetilde{f} &=\sum_{i,j=1}^n e_i \varepsilon_j\partial_{x_i}f_j
    =\sum_{i<j} e_i \varepsilon_j\partial_{x_i}f_j+\sum_{j=1}^n e_j \varepsilon_j\partial_{x_j}f_j+\sum_{i>j} e_i \varepsilon_j\partial_{x_i}f_j =\\
    &=\sum_{i<j} e_i \varepsilon_j\partial_{x_i}f_j+\sum_{i<j} e_j \varepsilon_i\partial_{x_j}f_i-\sum_{j=1}^n \partial_{x_j}f_j
    =\sum_{i<j} e_i \varepsilon_j\partial_{x_i}f_j-\sum_{i<j} e_i \varepsilon_j\partial_{x_j}f_i-D\cdot f
    =\sum_{i<j} e_i \varepsilon_j(\partial_{x_i}f_j-\partial_{x_j}f_i)-D\cdot f =\\
    &=\sum_{i<j} e_i (e_j-a_j)(\partial_{x_i}f_j-\partial_{x_j}f_i)-D\cdot f
    =\sum_{i<j} e_i e_j(\partial_{x_i}f_j-\partial_{x_j}f_i)-\sum_{i<j} e_i a_j(\partial_{x_i}f_j-\partial_{x_j}f_i)-D\cdot f.
\end{align*} \]
On the other hand,
\[ \begin{equation*}
    D(\underline{a}\cdot f)=\sum_{i=1}^n a_iDf_i =\sum_{i,j=1}^n e_j a_i\partial_{x_j}f_i
    =\sum_{i<j} e_j a_i\partial_{x_j}f_i+\sum_{i>j} e_j a_i\partial_{x_j}f_i+\sum_{j=1}^n e_j a_j\partial_{x_j}f_j
    =\sum_{i<j} e_j a_i\partial_{x_j}f_i+\sum_{i<j} e_i a_j\partial_{x_i}f_j+\sum_{j=1}^n e_j a_j\partial_{x_j}f_j,
\end{equation*} \]
and we obtain
\[ \begin{multline*}
    Df=\sum_{i<j} e_i e_j(\partial_{x_i}f_j-\partial_{x_j}f_i)-\sum_{i<j} e_i a_j\partial_{x_i}f_j+\sum_{i<j} e_i a_j\partial_{x_j}f_i-D\cdot f
    +\sum_{i<j} e_j a_i\partial_{x_j}f_i+\sum_{i<j} e_i a_j\partial_{x_i}f_j+\sum_{j=1}^n e_j a_j\partial_{x_j}f_j = {}\\
    {}=\sum_{i<j} e_i e_j(\partial_{x_i}f_j-\partial_{x_j}f_i)+\sum_{i<j} e_i a_j\partial_{x_j}f_i
    +\sum_{i=1}^n e_i a_i\partial_{x_i}f_i+\sum_{i>j} e_i a_j\partial_{x_i}f_j-D\cdot f.
\end{multline*} \]
The three middle sums combine into
\[ \begin{equation*}
    \sum_{i<j} e_i a_j\partial_{x_j}f_i
    +\sum_{i=1}^n e_i a_i\partial_{x_i}f_i+\sum_{i>j} e_i a_j\partial_{x_i}f_j
    =\sum_{i,j=1}^ne_i a_j\partial_{x_j}f_i =\Big(\sum_{j=1}^n a_j\partial_{x_j}\Big)\Big(\sum_{i=1}^ne_i f_i\Big) =(\underline{a}\cdot D)f.
\end{equation*} \]
We conclude
\[ \begin{equation*}
    Df=\sum_{i<j} e_i e_j(\partial_{x_i}f_j-\partial_{x_j}f_i)+(\underline{a}\cdot D)f-D\cdot f.
\end{equation*} \] $\square$

When $\underline{a}=\underline{0}$, the solutions are vector-valued monogenic functions. Therefore, a regular vector-valued function is a monogenic function whose directional derivative in the direction $\underline{a}$ vanishes, i.e., the function is constant in this direction.
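
As a concrete instance (a sketch with ad hoc data, continuing the code above), take $n=3$ and $\underline{a}=c\,e_3$; the field $f=x_1e_1-x_2e_2$ is curl-free, divergence-free, and constant in the direction of $\underline{a}$, so Proposition 6.1 gives $Df=0$.

```python
c  = sp.symbols('c')
aa = {1: sp.Integer(0), 2: sp.Integer(0), 3: c}   # a = c e3
f  = {(1,): xs[0], (2,): -xs[1]}                  # f = x1 e1 - x2 e2
print(apply_D(f, xs, aa))                         # {}, i.e. f is regular
```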

Corollary 6.1. A paravector-valued differentiable function
\[ \begin{equation*}
f(\underline{x})=f_0(\underline{x})+\underline{f}(\underline{x}),
\end{equation*} \]
where
\[ \begin{equation*}
\underline{f}(\underline{x})=\sum_{j=1}^n e_j f_j(\underline{x}),
\end{equation*} \]
is regular if and only if
\[\begin{align*}
    &\partial_{x_i}f_j=\partial_{x_j}f_i, \text{  for  } i, j=1, \dots , n, \\
    &(\underline{a}\cdot D)\underline{f}+Df_0=0,\\
    &D\cdot \underline{f}=0.
\end{align*} \]

Thus, for a regular paravector-valued function $f=f_0+\underline{f}$, the vector part $\underline{f}$ is a monogenic vector-valued function whose directional derivative in the direction $\underline{a}$ equals $-Df_0$. 

7. Cauchy's integral formula

In some situations, the generalized theory and classical Clifford analysis are exactly the same, both in form and in proof. One such example is presented next. It is assumed that the reader knows the structure of the proof of Cauchy's formula in the Clifford analysis case (see, e.g., [7, 8]). We calculate the Cauchy kernel as usual.

Proposition 7.1 (Cauchy kernel). The Cauchy kernel is of the form
\[ \begin{equation*}
    E(\underline{x})=\frac{1}{\omega_{n-1}}\frac{\underline{x}^{-1}}{|\underline{x}|^{n-2}}
\end{equation*} \]
and it is left and right regular for $\underline{x}\neq 0$. In the kernel, $\omega_{n-1}$ is the surface area of the unit sphere in $\mathbb{R}^n$.

Proof. We start from the Newton potential
\[ \begin{equation*}
N(\underline{x})=\frac{1}{(2-n)\omega_{n-1} |\underline{x}|^{n-2}},
\end{equation*} \]
which defines the fundamental solution for the Laplace equation, that is, $\Delta N=\delta$. We calculate
\[ \begin{equation*}
\partial_{x_j}N(\underline{x})=\frac{1}{\omega_{n-1}}\frac{x_j}{|\underline{x}|^n}
\end{equation*} \]
and 
\[ \begin{equation*}
DN(\underline{x})=\frac{1}{\omega_{n-1}}\frac{\underline{x}}{|\underline{x}|^n}.
\end{equation*} \]
We define the Cauchy kernel by
\[ \begin{equation*}
    E(\underline{x})=-\widetilde{D}N(\underline{x}) =-(D- \underline{a}\cdot D)N(\underline{x})
    =-DN(\underline{x})+ \underline{a}\cdot DN(\underline{x}) =\frac{1}{\omega_{n-1}}\frac{\underline{x}}{|\underline{x}|^n}-\frac{1}{\omega_{n-1}}\frac{\underline{a}\cdot\underline{x}}{|\underline{x}|^n}.
\end{equation*} \]
Since
\[ \begin{equation*}
    \underline{x}-\underline{a}\cdot\underline{x}=\widetilde{\underline{x}},
\end{equation*} \]
we have
\[ \begin{equation*}
    E(\underline{x})
    =\frac{1}{\omega_{n-1}}\frac{\widetilde{\underline{x}}}{|\underline{x}|^n}=-\frac{1}{\omega_{n-1}}\frac{\underline{x}^{-1}}{|\underline{x}|^{n-2}}.
\end{equation*} \] $\square$
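
Continuing the computational sketch, the left regularity of the kernel can be confirmed exactly for $n=4$, where $E$ is a constant multiple of the rational function $\widetilde{\underline{x}}/|\underline{x}|^4$.

```python
ys = sp.symbols('y1:5')                                # n = 4 coordinates
b  = dict(enumerate(sp.symbols('b1:5'), start=1))      # a generic vector a
r2 = sum(y**2 for y in ys)                             # |x|^2
ay = sum(b[j] * ys[j - 1] for j in (1, 2, 3, 4))       # a.x
E  = {(j,): ys[j - 1] / r2**2 for j in (1, 2, 3, 4)}   # vector part of tilde(x)/|x|^4
E[()] = -ay / r2**2                                    # scalar part
DE = apply_D(E, ys, b)
print(all(sp.simplify(c) == 0 for c in DE.values()))   # True: D E = 0 for x != 0
```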

Although the Cauchy kernel looks formally the same as in the classical case, it is nevertheless paravector-valued.

The proof of the Clifford–Stokes formula is identical:
\[ \begin{equation*}
\int_{\partial\Omega}f d\sigma g=\int_\Omega((fD)g+f(Dg))dV,
\end{equation*} \]
since in
\[ \begin{equation*}
d(f d\sigma g)=((fD)g+f(Dg))dV
\end{equation*} \]
we use only the product rule of the exterior derivative $d$. 

In the proof of the Cauchy formula, it is important to evaluate the integral 
\[ \begin{equation*}
\int_{\partial B_r(\underline{x})}E(\underline{y}-\underline{x})n(\underline{y})f(\underline{y})dS(\underline{y}),
\end{equation*} \]
where $B_r(\underline{x})$ is the ball of radius $r$ centered at $\underline{x}$ and $n(\underline{y})$ is the outward-pointing unit normal on the boundary. The unit normal is, as usual,
\[ \begin{equation*}
n(\underline{y})=\frac{\underline{y}-\underline{x}}{r}
\end{equation*} \]
and hence
\[ \begin{equation*}
   \int_{\partial B_r(\underline{x})}E(\underline{y}-\underline{x})n(\underline{y})f(\underline{y})dS(\underline{y})=\frac{1}{\omega_{n-1}}\int_{\partial B_r(\underline{x})}\frac{(\underline{y}-\underline{x})^{-1}}{|\underline{y}-\underline{x}|^{n-2}}\frac{\underline{y}-\underline{x}}{r}f(\underline{y})dS(\underline{y})
   =\frac{1}{\omega_{n-1}r^{n-1}}\int_{\partial B_r(\underline{x})}f(\underline{y})dS(\underline{y})\to f(\underline{x})
\end{equation*} \]
as $r\to 0$.
Thus, this part of the proof is exactly the same as in the classical case.

Theorem 7.1 (Cauchy integral formula). Let $\Omega\subset\mathbb{R}^{n}$ be a bounded open set with a smooth boundary, let $U$ be an open set with $\bar{\Omega}\subset U$, and let $f:U\to\mathcal{C}_n(\underline{a})$ be a regular function. Then
\[ \begin{equation*}
f(\underline{x})=\int_{\partial \Omega} E(\underline{y}-\underline{x})d\sigma(\underline{y})f(\underline{y})
\end{equation*} \]
for any $\underline{x}\in\Omega$.

We conclude that the only difference from the classical case is the interpretation of the Cauchy kernel; the proof itself is identical, so a more detailed treatment is unnecessary.

8. Regular functions generated by the Cauchy kernel

Let us use the classic multi-index notation, i.e., let $\alpha=(\alpha_1, \dots ,\alpha_n)$, $\alpha_j\in\mathbb{N}\cup\{0\}$ for all $j=1, \dots ,n$, $|\alpha|:=\alpha_1+\cdots+\alpha_n$, $\alpha!:=\alpha_1!\cdots \alpha_n!$, $\underline{x}^\alpha:=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ and $\partial^\alpha_{\underline{x}}:=\partial^{\alpha_1}_{x_1}\cdots \partial^{\alpha_n}_{x_n}$. We define paravector-valued regular functions
\[ \begin{equation*}
U_\alpha(\underline{x})=\partial^\alpha_{\underline{x}}\frac{\widetilde{\underline{x}}}{|\underline{x}|^{n}}.
\end{equation*} \]
Indeed, if $U_\alpha=U_{0}^{(\alpha)}+\underline{U}^{(\alpha)}$, we have
\[ \begin{equation*}
    U_{0}^{(\alpha)}(\underline{x})=-\partial^\alpha_{\underline{x}}\frac{\underline{a}\cdot\underline{x}}{|\underline{x}|^n},\quad 
    \underline{U}^{(\alpha)}(\underline{x}) =\partial^\alpha_{\underline{x}}\frac{\underline{x}}{|\underline{x}|^n}.
\end{equation*} \]

Remark 8.1. These functions are useful when we want to find Taylor series, using the expansion
\[ \begin{equation*}
\frac{1}{|\underline{y}-\underline{x}|^{n-2}}=\sum_{k=0}^\infty \frac{(-1)^k}{k!}(\underline{x}\cdot D_{\underline{y}})^k\frac{1}{|\underline{y}|^{n-2}}
\end{equation*} \]
(see, e.g., [9, p. 34]) and the Cauchy formula above.

The multi-index Leibniz rule is
\[ \begin{equation*}
\partial_{\underline{x}}^\alpha(fg)=\sum_{\beta\leqslant \alpha} {\alpha\choose\beta}(\partial_{\underline{x}}^\beta f)(\partial_{\underline{x}}^{\alpha-\beta}g).
\end{equation*} \]
Since
\[ \begin{equation*}
\partial_{\underline{x}}^\beta(\underline{x}-\underline{a}\cdot\underline{x})=0
\end{equation*} \]
for $|\beta|\geqslant 2$, we obtain
\[ \begin{multline*}
U_\alpha(\underline{x}) =\partial^\alpha_{\underline{x}}\frac{\underline{x}-\underline{a}\cdot\underline{x}}{|\underline{x}|^n} =\sum_{\beta\leqslant \alpha} {\alpha\choose\beta}\partial_{\underline{x}}^\beta (\underline{x}-\underline{a}\cdot\underline{x})\,\partial_{\underline{x}}^{\alpha-\beta}\frac{1}{|\underline{x}|^n}
= (\underline{x}-\underline{a}\cdot\underline{x})\partial_{\underline{x}}^{\alpha}\frac{1}{|\underline{x}|^n}+\sum_{j=1}^n{\alpha\choose\epsilon_j}\partial_{x_j} (\underline{x}-\underline{a}\cdot\underline{x})\,\partial_{\underline{x}}^{\alpha-\epsilon_j}\frac{1}{|\underline{x}|^n} =\\
= (\underline{x}-\underline{a}\cdot\underline{x})\partial_{\underline{x}}^{\alpha}\frac{1}{|\underline{x}|^n}+\sum_{j=1}^n{\alpha\choose\epsilon_j} (e_j-a_j)\partial_{\underline{x}}^{\alpha-\epsilon_j}\frac{1}{|\underline{x}|^n},
\end{multline*} \]
where $\epsilon_j=(0, \dots ,1, \dots ,0)$ is a unit multi-index. We define polynomials $p_\alpha$ by
\[ \begin{equation*}
\partial_{\underline{x}}^{\alpha}\frac{1}{|\underline{x}|^n}=\frac{p_\alpha(\underline{x})}{|\underline{x}|^{n+2|\alpha|}}.
\end{equation*} \]
Hence, the regular functions are of the form
\[ \begin{equation*}
U_{\alpha}(\underline{x})
=(\underline{x}-\underline{a}\cdot\underline{x})\frac{p_\alpha(\underline{x})}{|\underline{x}|^{n+2|\alpha|}}+\sum_{j=1}^n\alpha_j (e_j-a_j) \frac{p_{\alpha-\epsilon_j}(\underline{x})}{|\underline{x}|^{n+2|\alpha|-2}}.
\end{equation*} \]
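
As a consistency check (a sketch with the ad hoc choices $n=4$ and $\alpha=(2,1,0,0)$), the $e_1$-component of this closed form can be compared with direct differentiation, computing $p_\alpha$ straight from its definition.

```python
import sympy as sp

xs = sp.symbols('x1:5')                      # coordinates for n = 4
n  = 4
r2 = sum(x**2 for x in xs)                   # |x|^2

def dalpha(f, al):
    """Apply the multi-index derivative d^alpha."""
    for x, m in zip(xs, al):
        if m:
            f = sp.diff(f, x, m)
    return f

alpha = (2, 1, 0, 0)
k = sum(alpha)
# p_alpha from the definition d^alpha |x|^{-n} = p_alpha / |x|^{n+2|alpha|}
p_a  = sp.expand(dalpha(r2**(-n//2), alpha) * r2**((n + 2*k)//2))
p_a1 = sp.expand(dalpha(r2**(-n//2), (1, 1, 0, 0)) * r2**((n + 2*(k-1))//2))
# e_1-component of U_alpha: direct differentiation vs. the closed form
direct = dalpha(xs[0] * r2**(-n//2), alpha)
closed = xs[0] * p_a / r2**((n + 2*k)//2) + alpha[0] * p_a1 / r2**((n + 2*k - 2)//2)
print(sp.simplify(direct - closed))          # 0
```
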
Let us take a closer look at these polynomials. We have
\[ \begin{equation*}
\partial_{\underline{x}}^{\alpha+\epsilon_j}\frac{1}{|\underline{x}|^n}=\frac{p_{\alpha+\epsilon_j}(\underline{x})}{|\underline{x}|^{n+2|\alpha+\epsilon_j|}}
\end{equation*} \]
and 
\[ \begin{equation*}
  \partial_{\underline{x}}^{\alpha+\epsilon_j}\frac{1}{|\underline{x}|^n}=\partial_{x_j}\frac{p_\alpha(\underline{x})}{|\underline{x}|^{n+2|\alpha|}} =-(n+2|\alpha|)x_j\frac{p_\alpha(\underline{x})}{|\underline{x}|^{n+2|\alpha|+2}}+\frac{\partial_{x_j}p_\alpha(\underline{x})}{|\underline{x}|^{n+2|\alpha|}}
  =\frac{-(n+2|\alpha|)x_jp_\alpha(\underline{x})+|\underline{x}|^2\partial_{x_j}p_\alpha(\underline{x})}{|\underline{x}|^{n+2|\alpha+\epsilon_j|}},  
\end{equation*} \] 
and by comparing these, we get the differential-recurrence relations
\[ \begin{equation*}
p_{\alpha+\epsilon_j}(\underline{x})=-(n+2|\alpha|)x_jp_\alpha(\underline{x})+|\underline{x}|^2\partial_{x_j}p_\alpha(\underline{x})
\end{equation*} \] 
and $p_{0}(\underline{x})=1$. Multiplying both sides of the recursion by $|\underline{x}|^{-2|\alpha|-n-2}$, we get
\[ \begin{equation*}
|\underline{x}|^{-2|\alpha|-n-2}p_{\alpha+\epsilon_j}(\underline{x})=-(n+2|\alpha|)x_j|\underline{x}|^{-2|\alpha|-n-2}p_\alpha(\underline{x})+|\underline{x}|^{-2|\alpha|-n}\partial_{x_j}p_\alpha(\underline{x}),
\end{equation*} \] 
that is,
\[ \begin{equation*}
|\underline{x}|^{-2|\alpha|-n-2}p_{\alpha+\epsilon_j}(\underline{x})=\partial_{x_j}\big(|\underline{x}|^{-2|\alpha|-n}p_\alpha(\underline{x})\big)
\end{equation*} \] 
or
\[ \begin{equation*}
p_{\alpha+\epsilon_j}(\underline{x})=|\underline{x}|^{2|\alpha|+n+2}\partial_{x_j}\big(|\underline{x}|^{-2|\alpha|-n}p_\alpha(\underline{x})\big). 
\end{equation*} \] 
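
Continuing the sketch, the recursion generates the same polynomial as the direct computation above.

```python
def p(al):
    """p_alpha via p_{alpha+eps_j} = -(n+2|alpha|) x_j p_alpha + |x|^2 d_j p_alpha."""
    if sum(al) == 0:
        return sp.Integer(1)
    j = next(i for i, m in enumerate(al) if m > 0)
    prev = list(al); prev[j] -= 1
    q = p(tuple(prev))
    return sp.expand(-(n + 2 * sum(prev)) * xs[j] * q + r2 * sp.diff(q, xs[j]))

print(sp.expand(p(alpha) - p_a))             # 0, i.e. the same polynomial
```
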
Let us consider the linear operators (depending on the level $|\alpha|$)
\[ \begin{equation*}
L_jf(\underline{x})=|\underline{x}|^{2|\alpha|+n+2}\partial_{x_j}\big(|\underline{x}|^{-2|\alpha|-n}f(\underline{x})\big), 
\end{equation*} \] 
satisfying
\[ \begin{equation*}
L_j(1)=-(2|\alpha|+n)x_j
\end{equation*} \] 
and
\[ \begin{equation*}
L_j(x_j^m)=x_j^{m-1}\big(m|\underline{x}|^2-(2|\alpha|+n)x_j^2\big)
=mx_1^2x_j^{m-1}+\cdots+(m-2|\alpha|-n)x_j^{m+1}+\cdots+mx_n^2x_j^{m-1}.   
\end{equation*} \] 
Similarly,
\[ \begin{multline*}
    L_j(\underline{x}^\alpha)=x_1^{\alpha_1}\cdots x_{j-1}^{\alpha_{j-1}} L_j(x_j^{\alpha_j})\,x_{j+1}^{\alpha_{j+1}}\cdots x_n^{\alpha_{n}}  =
    x_1^{\alpha_1}\cdots x_{j-1}^{\alpha_{j-1}}(\alpha_jx_1^2x_j^{\alpha_j-1}+\cdots
    +(\alpha_j-2|\alpha|-n)x_j^{\alpha_j+1}+\cdots+\alpha_jx_n^2x_j^{\alpha_j-1})\,x_{j+1}^{\alpha_{j+1}}\cdots x_n^{\alpha_{n}}= {}\\
{}    =\alpha_j\underline{x}^{\alpha+2\epsilon_1-\epsilon_j}+\cdots+\alpha_j\underline{x}^{\alpha+2\epsilon_{j-1}-\epsilon_j}+(\alpha_j-2|\alpha|-n)\underline{x}^{\alpha+\epsilon_{j}} +\alpha_j\underline{x}^{\alpha+2\epsilon_{j+1}-\epsilon_j}+\cdots+\alpha_j\underline{x}^{\alpha+2\epsilon_{n}-\epsilon_j}
\end{multline*} \]
and
\[ \begin{equation*}
p_{\alpha+\epsilon_j}(\underline{x})=L_j\big(p_\alpha(\underline{x})\big).
\end{equation*} \] 
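
Continuing the sketch, the operator form of the recursion step can be checked as well; here $L_1$ acts on $p_{\epsilon_2}$ at level $|\alpha|=1$.

```python
def L(j, f, k):
    """L_j f = |x|^(2k+n+2) d_j(|x|^(-2k-n) f) at level |alpha| = k."""
    return sp.expand(r2**(k + n//2 + 1) * sp.diff(r2**(-k - n//2) * f, xs[j]))

print(sp.expand(L(0, p((0, 1, 0, 0)), 1) - p((1, 1, 0, 0))))   # 0
```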

Conclusion

This paper considers analysis on generalized Clifford algebras. The central point of the analysis is the effect of the direction vector $\underline{a}$ on the theory. Most of the results of classical Clifford analysis can be carried over almost as such to the generalized case. The biggest differences arise in situations where powers of a vector variable are needed. The effect of the vector $\underline{a}$ on the class of regular functions still needs to be examined further.

Competing interests. I declare that I have no competing interests.
Author’s Responsibilities. I take full responsibility for submitting the final manuscript for publication. I approved the final version of the manuscript.
Acknowledgments. The author is grateful to his family, whose understanding and patience have helped in writing this article.

About the authors

Heikki Orelma

Tampere University

Author for correspondence.
Email: Heikki.Orelma@tuni.fi
ORCID iD: 0000-0002-8251-4333

D.Sc. (Tech.), Adjunct Professor; Researcher; Dept of Mechanics and Mathematics

Finland, 33100, Tampere, Kalevantie 4

References

  1. Yaglom I. M. Complex Numbers and Their Application in Geometry. Moscow, Fizmatgiz, 1963, 192 pp. (In Russian)
  2. Kanzaki T. On the quadratic extensions and the extended Witt ring of a commutative ring, Nagoya Math. J., 1973, vol. 49, pp. 127–141. DOI: https://doi.org/10.1017/S0027763000015348.
  3. Helmstetter J., Micali A., Revoy P. Generalized quadratic modules, Afr. Mat., 2012, vol. 23, no. 1, pp. 53–84. DOI: https://doi.org/10.1007/s13370-011-0018-x.
  4. Tutschke W., Vanegas C. J. Clifford algebras depending on parameters and their applications to partial differential equations, In: Some Topics on Value Distribution and Differentiability in Complex and p-Adic Analysis, Mathematics Monograph Series, 11; eds. A. Escassut, W. Tutschke, C. C. Yang. Beijing, Science Press, 2008, pp. 430–450.
  5. Bourbaki N. Éléments de mathématique. Algèbre. Chapitre 9. Berlin, Springer, 2007, 211 pp.
  6. Chevalley C. Collected Works, vol. 2, The algebraic theory of spinors and Clifford algebras, eds. P. Cartier, C. Chevalley. Berlin, Springer, 1997, xiv+214 pp.
  7. Delanghe R., Sommen F., Souček V. Clifford Algebra and Spinor-Valued Functions. A Function Theory for the Dirac Operator, Mathematics and its Applications, vol. 53. Dordrecht, Kluwer Academic Publ., 1992, xvii+485 pp.
  8. Gürlebeck K., Habetha K., Sprößig W. Funktionentheorie in der Ebene und im Raum, Grundstudium Mathematik. Basel, Birkhäuser, 2006, xiii+406 pp.
  9. Müller C. Properties of the Legendre functions, In: Spherical Harmonics, Lecture Notes in Mathematics, 17. Berlin, Springer, 1966, pp. 29–37. DOI: https://doi.org/10.1007/BFb0094786.

