Sunday, 5 February 2017

Synthesis of port-driven impedances, Part I: Positive-realness

Introduction: my personal experience with the synthesis problem


During my Ph.D. I started to study some old books on Circuit Theory by authors such as Guillemin, Newcomb, Belevitch and Anderson. I was truly amazed since, surprisingly for an Electrical Engineer, many of the principles I found in those texts were completely new to me. Moreover, the books were written in such an extraordinarily compelling way that they became a genuine source of inspiration during my days in Southampton.

This time I will elaborate on a particular topic of Circuit Theory that challenged me in many ways: the synthesis of port-driven impedances. In fact I must admit that, despite my best attempts, I was not able to make a significant contribution to the topic, so I had to move on and work in other research directions. Although I am of course very far from considering myself a brilliant student, there is some justification for my failed attempts: many issues in the synthesis of port-driven impedances have remained open problems since the 1930s. This naturally means that solving them is not trivial at all, considering that many minds, sharper than mine, have obtained only moderate achievements after significant effort. In order to understand the challenges, let us first recall what the problem is about. We can describe it in the following way:

 "Given a transfer function $Z(s):=N(s)D(s)^{-1}$ that satisfies certain physical constraints, obtain a port-driven electrical circuit consisting of inductors, capacitors and resistors; whose impedance function coincides with the given transfer function."

First off, I am aware that the term "certain physical constraints" is rather vague, but I wrote it on purpose in order to devote this post to the explanation of such constraints. Once these physical restrictions are introduced unambiguously, i.e. in mathematical terms, we will obtain a test that permits us to check whether a given transfer function qualifies as an impedance. Later on, we have to find a corresponding electrical circuit that satisfies such an impedance specification (though this will be the matter of a second post).

Developing a mathematical condition for passive circuits


Some transfer functions violate the laws of nature and consequently cannot be realized as an electrical circuit. In order to prove this statement it is enough to find a counterexample. For instance, consider the impulse response of the transfer function $\frac{1}{s-1}$: the output follows the trajectory $y(t)=e^{t}$ for $t>0$. However, since no energy is coming into the circuit for $t>0$, there is simply no way to generate the corresponding electric/magnetic field that would lead to such an unbounded output.
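
As a quick numerical illustration, here is a minimal sketch in Python (numpy/scipy assumed available; the time grid is arbitrary) that computes this impulse response and shows the output growing without bound even though the input is zero for $t>0$:

import numpy as np
from scipy import signal

# Impulse response of the counterexample 1/(s - 1):
sys = signal.TransferFunction([1.0], [1.0, -1.0])
t, y = signal.impulse(sys, T=np.linspace(0.0, 5.0, 501))
print(y[-1])   # ~ e^5 (about 148): unbounded growth with no energy supplied for t > 0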

The example above may seem exaggerated; however, we can still use it to make a simple but crucial observation about the physics of port-driven electrical circuits constructed from inductors, capacitors and resistors: such circuits do not generate energy by themselves, i.e. they are passive.

We will now introduce a time-domain condition that is equivalent to this passivity property. Recall the typical convention of power flow: positive power, with respect to external sources, corresponds to power oriented into the electrical circuit, which can be either stored as energy in inductors/capacitors or dissipated as heat by resistors; negative power refers to power that flows from the electrical circuit back to the external source. In Electrical Engineering the power that is dissipated by resistors is called active power.

Let us consider for example the case of (co)sinusoidal one-port voltages and currents defined as $v(t):=V\cos(\omega t+\phi_v)$ and $i(t):=I\cos(\omega t+\phi_i)$. The variables $v(t)$ and $i(t)$ are conjugate, which means that their product has the dimension of power, i.e.

$p(t):=v(t) i(t)=V I \cos(\omega t+\phi_v) \cos(\omega t+\phi_i)$ .

After some straightforward manipulations, using the product-to-sum identity $\cos A\cos B=\frac{1}{2}\cos(A-B)+\frac{1}{2}\cos(A+B)$, we obtain

$p(t)=\frac{1}{2}VI \cos(\phi_v-\phi_i) +\frac{1}{2}VI \cos(2\omega t+\phi_v+\phi_i)$ .

From these example trajectories we can easily notice that the power oriented into an electrical circuit may be positive or negative at certain instants of time. This is not surprising, since it is well known that while resistors (instantaneously) dissipate power, inductors and capacitors can only store energy and release it later. This effect of releasing energy that was stored in the past may closely imitate the "generation" of energy over a limited interval of time. Consequently, since we are determined to find a condition that captures the passivity of an electrical circuit, notice that imposing $p(t)\ge 0$ would be rather conservative, precisely because passive circuits may momentarily "return" power to their external source due to capacitors and inductors.

On the other hand, since such negative power corresponds to a temporary effect and, moreover, comes from the external source in the first place, it is enough to require that the total net energy delivered to the circuit be nonnegative, i.e.

$P:=\int_{-\infty}^{+\infty} p(t) dt \ge 0$.         (1)

Of course we run into a slight problem here, since we must assume that this integral converges, which requires trajectories of compact support. This issue is easily resolved if we assume that any circuit under consideration starts at rest and ends at rest.
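
To make condition (1) concrete, here is a minimal numerical sketch in Python; the series RC one-port and its element values are made up for illustration:

import numpy as np

# Numerical sanity check of (1) for a hypothetical series RC one-port.
R, C = 1.0, 0.5
t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]

# A smooth current pulse of (numerically) compact support:
# the circuit starts at rest and ends at rest.
i = np.exp(-t**2) * np.sin(3.0 * t)
q = np.cumsum(i) * dt            # capacitor charge, starting from rest
v = R * i + q / C                # port voltage of the series RC branch

P = np.sum(v * i) * dt           # net energy delivered to the circuit
print(P)                         # nonnegative, as passivity demands

Any smooth pulse of compact support gives the same verdict, since this energy equals $R\int i^2\, dt$ plus the energy left in the capacitor, both nonnegative.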

Positive-realness


There exist conditions equivalent to equation (1), as summarized in the following proposition.

Proposition 1. Consider an n-port driven circuit with vector of port voltages $V$ and the corresponding conjugate vector of port currents $I$. Consider an input-output representation with input $I$ and output $V$ defined by $Z(s)=N(s)D(s)^{-1}$. The following statements are equivalent.
  1. The n-port driven circuit is passive.
  2. $\int_{-\infty}^{+\infty} V^\top I dt \ge 0$ for all $V$ and $I$ of compact support that satisfy the laws of the circuit.
  3. $\Re\{Z(j\omega)\}\ge0$ for all $\omega\in\mathbb{R}$ (in the n-port case the inequality is understood in the sense of positive semidefiniteness).
  4. $N(-j\omega)^\top D(j\omega)+D(-j\omega)^\top N(j\omega)\ge 0$ for all $\omega\in\mathbb{R}$.
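
As a quick sanity check of statement 3, the following minimal Python sketch samples $\Re\{Z(j\omega)\}$ on a frequency grid; the scalar impedance is a made-up example:

import numpy as np

# Hypothetical scalar impedance Z(s) = (s^2 + s + 1) / (s^2 + 2s + 1):
num = np.array([1.0, 1.0, 1.0])   # N(s) coefficients, highest degree first
den = np.array([1.0, 2.0, 1.0])   # D(s) coefficients

omega = np.linspace(-100.0, 100.0, 200001)
Z = np.polyval(num, 1j * omega) / np.polyval(den, 1j * omega)

# Statement 3 (scalar case): Re Z(jw) >= 0 for all w.
print(Z.real.min())   # nonnegative on the grid, consistent with positive-realness

Of course, a finite grid can only falsify positive-realness, never certify it; an exact test checks statement 4 symbolically on the polynomials $N$ and $D$.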

We have already elaborated on the equivalence between statements 1 and 2. In order to prove the rest of the proposition we can, for example, notice that the integral in statement 2 corresponds to the total energy absorbed by the circuit, and that the power dissipated by resistors due to the Joule effect is a quadratic function of the port variables. Consequently, by applying Parseval's theorem we obtain the equivalence with the frequency-domain inequalities in statements 3 and 4. Readers particularly interested in the technical details may want to have a look at Prop. 5.2 in

Willems, J. C., & Trentelman, H. L. (1998). On quadratic differential forms. SIAM Journal on Control and Optimization, 36(5), 1703-1749.

Statement 4 is known as the positive-real condition. Moreover, note that statement 3 corresponds to the well-known definition of the real component of an impedance function evaluated at $s=j\omega$, i.e. the resistance. By requiring it to be nonnegative, we are restricting the type of impedance function that can possibly be found in real life, since a negative resistance would violate the second law of thermodynamics, implying that it could reverse the process of dissipation. It is true that in real life we can find some circuits that can, for practical purposes, be regarded as negative resistors (e.g. the local approximation of a constant power load); however, such circuits are non-passive and need some "active component" or power source that permits them to behave in that way.

Sinusoidal case: phasor analysis


Curiously, we Electrical Engineers routinely use a special case of passivity, in which we particularly focus on:
  1. sinusoidal trajectories for currents and voltages;
  2. the frequency domain (phasor analysis);
  3. a very particular angular frequency $\overline{\omega}:=2\pi\overline{f}$, where $\overline{f}$ is a nominal frequency, e.g. the utility frequency;
  4. steady-state operation, i.e. a particular solution of the linear differential equations consisting of sinusoidal trajectories of constant phase, frequency and magnitude;
  5. (in many undergraduate courses, in transient analysis and in the study of radial lines, though not always) the 1-port (scalar) case of an impedance.
Let us now relate points 1-5 to the previous analysis.

In the case of (co)sinusoidal trajectories we can state the passivity condition by averaging over one period (this is called cyclo-passivity), i.e.

$\frac{\omega}{2\pi} \int_{0}^{2\pi/\omega} p(t)\, dt = \frac{1}{2}VI \cos(\phi_v-\phi_i) \ge 0$.         (2)

By Euler's identity, the right-hand side of the equation can be recognized as the real part of

$\frac{1}{2}VI e^{j\phi_v}e^{-j\phi_i}= \underbrace{\frac{1}{2}VI \cos(\phi_v-\phi_i)}_{\ge 0}+j\frac{1}{2}VI \sin(\phi_v-\phi_i)$.         (3)

Following some algebraic manipulations, it can be shown that the real and imaginary parts can also be separated as

$VI e^{j\phi_v}e^{-j\phi_i}= \underbrace{\frac{1}{2}VI e^{j\phi_v}e^{-j\phi_i}+ \frac{1}{2}VI e^{-j\phi_v}e^{j\phi_i}}_{\mbox{Real part}} + \underbrace{\frac{1}{2}VI e^{j\phi_v}e^{-j\phi_i}-\frac{1}{2}VI e^{-j\phi_v}e^{j\phi_i}}_{j\,\times\,\mbox{Imaginary part}}$.         (4)

Moreover, note that this expression actually involves the Fourier transforms (phasors) of the port voltage and current, evaluated at the particular frequency $\omega:=\overline{\omega}$. Therefore, for a given impedance $Z(j\omega):=\frac{V(j\omega)}{I(j\omega)}=\frac{n(j\omega)}{d(j\omega)}$, we can take the real part of equation (4) and impose the condition:

$\frac{1}{2}n(-j\omega)d(j\omega)+\frac{1}{2}n(j\omega)d(-j\omega)\ge 0$ with $\omega=\overline{\omega}$.

This corresponds to condition 4 in Proposition 1, evaluated at the particular frequency $\overline{\omega}$. Notice that the real and imaginary parts of (4) in fact correspond to the classical definitions of active and reactive power, respectively. Consequently, it is now more evident that requiring the active power to be nonnegative is equivalent to requiring the net average power flowing into the circuit to be nonnegative, which corresponds almost obviously to condition (1).
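
The following minimal Python sketch illustrates this correspondence for a made-up series RL branch at an assumed nominal frequency; with peak phasors, the complex power is $S=\frac{1}{2}VI^{*}$:

import numpy as np

# Hypothetical series RL branch at an assumed nominal frequency:
R, L = 2.0, 10e-3                 # ohms, henries
f_bar = 50.0                      # nominal (utility) frequency, Hz
w_bar = 2 * np.pi * f_bar

Z = R + 1j * w_bar * L            # impedance at s = j*w_bar
I_ph = 5.0 * np.exp(1j * 0.0)     # peak current phasor (assumed)
V_ph = Z * I_ph

S = 0.5 * V_ph * np.conj(I_ph)    # complex power, S = P + jQ
print(S.real)                     # active power  P = (1/2) V I cos(phi_v - phi_i) >= 0
print(S.imag)                     # reactive power Q = (1/2) V I sin(phi_v - phi_i)
print(Z.real)                     # Re Z(j w_bar) >= 0: positive-realness at w_bar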

A final remark:

It is true that we always pursue practical knowledge, but the mathematical rigor of the old circuit theory certainly helps to open our eyes to the whole picture surrounding our applications, considering that plenty of relevant issues are still open problems. Here I am not only talking about the synthesis problem that will be introduced in the second part of this post, but also about other questions, such as the characterization of reactive power for non-sinusoidal trajectories, which remains an open problem and a lively debate among Electrical Engineers nowadays.



Tuesday, 7 April 2015

Few remarks about behaviors


A brief introduction about myself: I am an electrical engineer with a background in power electronics (a topic that I will elaborate on later). Thanks to a fortunate decision, I ended up working on a Ph.D. thesis that relies on behavioral system theory. Although in this blog I am mostly interested in discussing engineering applications, I will do my best to maintain the rigorous mathematical exposition and principles of the behavioral setting.

The justification for the development of the concepts in behavioral system theory has been extensively argued before in books, articles, magazines, etc., and certainly in a better way than I could possibly attempt here. Hence, what is written here must be considered only as my humble and personal point of view.

* My opinion and ideas are of course open to debate, and in fact I will be quite happy to hear any critique or rebuttal of my arguments, so please feel free to drop a comment or contact me with your remarks.

About behaviors


The main idea in the behavioral setting (see my previous post) is to focus on the study of dynamics at the level of trajectories rather than representations. Then the behavior of the system enters into the picture not only as a figure of speech, but as a mathematical object, e.g. for a set of linear differential equations

$R\left(\frac{d}{dt}\right)w=0$,

with  $R\in\mathbb{R}^{\bullet \times \tt w}[s]$, we can define the behavior $\mathfrak{B}:=\ker~R\left(\frac{d}{dt}\right)$.

It may seem a bit confusing at this point to talk about a representation $R\left(\frac{d}{dt}\right)w=0$ as a sort of starting point, but let us remember that so far we have discussed systems whose physical laws are described by a set of linear differential equations. Taking this point into account, let us then make the following important remarks.

First off, note that a kernel representation is very general: it admits zeroth-order equations as well as higher-order ones. Moreover, many other representations can adopt such a structure in a straightforward manner. Consider for instance the traditional state-space representation:

$\frac{d}{dt}x=Ax+Bu$.

We can define $w:=\mbox{col}(x,u)$ and $R(s):=\begin{bmatrix} sI-A & -B \end{bmatrix}$.
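
As a small numerical illustration (a Python sketch with made-up matrices), note that a pure exponential $w(t)=w_0e^{\lambda t}$ belongs to $\ker~R\left(\frac{d}{dt}\right)$ exactly when $R(\lambda)w_0=0$:

import numpy as np

# Hypothetical 2-state, 1-input example:
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# Kernel representation R(s) = [sI - A, -B], evaluated at a point s0:
def R(s0):
    return np.hstack([s0 * np.eye(2) - A, -B])

# w(t) = w0 * exp(l*t) lies in the behavior iff R(l) w0 = 0:
l = -1.0
w0 = np.array([1.0, -1.0, 0.0])   # col(x0, u0) chosen in the null space of R(l)
print(R(l) @ w0)                  # [0, 0]: this trajectory satisfies dx/dt = Ax + Bu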

However, it must be clear that we are not forced to use a kernel representation to define a behavior. For instance in the last example the behavior can be simply defined as

$\mathfrak{B}:=\left\{ \mbox{col}(x,u)\in\mathfrak{C}^{\infty}(\mathbb{R},\mathbb{R}^{\bullet})~\mid~ \frac{d}{dt}x=Ax+Bu \right\}$.

There are plenty more representations that can be used to describe the laws of physical systems; for instance, the impedance of an n-port driven circuit is modeled as a matrix of rational functions, i.e. $Z(s):=P(s)^{-1}Q(s)$, with $P,Q\in\mathbb{R}^{n\times n}[s]$. In the time domain such an impedance corresponds to the input-output representation $Q\left(\frac{d}{dt}\right)I=P\left(\frac{d}{dt}\right) V$, where $I,V$ are the port currents and voltages of the circuit. Then

$\mathfrak{B}:=\left\{\mbox{col}(I,V)\in\mathfrak{C}^{\infty}(\mathbb{R},\mathbb{R}^{2n})~\mid~ Q\left(\frac{d}{dt}\right)I=P\left(\frac{d}{dt}\right) V  \right\}$.
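
For a concrete scalar ($n=1$) sketch, assume a made-up series RC branch with $Z(s)=R+\frac{1}{Cs}=P(s)^{-1}Q(s)$, where $P(s)=Cs$ and $Q(s)=RCs+1$; membership of a trajectory in the behavior can then be verified symbolically in Python (sympy):

import sympy as sp

t = sp.symbols('t')

# Hypothetical series RC one-port with R = 2, C = 1:
R, C = 2, 1
I = sp.exp(-t)                         # candidate port current
V = R * I + sp.integrate(I, t) / C     # v = R i + (1/C) * integral of i

# Check the circuit law Q(d/dt) I = P(d/dt) V:
lhs = R * C * sp.diff(I, t) + I        # Q(d/dt) I
rhs = C * sp.diff(V, t)                # P(d/dt) V
print(sp.simplify(lhs - rhs))          # 0  =>  col(I, V) belongs to the behavior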

As a preliminary conclusion, we can say that the behavior can be defined on the basis of the type of model that is most natural for each application. Moreover, there is no compelling reason to force the use of a particular representation (e.g. input-output descriptions, state space) if we can study the overall properties of the system directly in terms of trajectories.

There are other sensible reasons why we should consider trajectories, rather than representations, as the central object of study. For example, the physical laws of a system may be captured by several different representations, as we shall discuss now.

Equivalence of representations

"Appearances can be deceiving."

Let us consider two behaviors: $\mathfrak{B}_1:=\ker~R_1\left(\frac{d}{dt}\right)$ and $\mathfrak{B}_2:=\ker~R_2\left(\frac{d}{dt}\right)$, where $R_1,R_2\in\mathbb{R}^{q\times\tt w}[s]$ correspond to two different kernel representations. We are interested in knowing under which circumstances $\mathfrak{B}_1=\mathfrak{B}_2$. 

Consider $V\in\mathbb{R}^{q\times q}[s]$ and define $R_1(s):=V(s)R_2(s)$. It is then easy to see that all trajectories in the kernel of $R_2\left(\frac{d}{dt}\right)$ are also trajectories in the kernel of $V\left(\frac{d}{dt}\right)R_2\left(\frac{d}{dt}\right)$, and we conclude that $\mathfrak{B}_2\subseteq\mathfrak{B}_1$.

Now note that if $V$ is unimodular, i.e. $V^{-1}\in\mathbb{R}^{q\times q}[s]$, then the same argument applied to $R_2(s)=V(s)^{-1}R_1(s)$ yields $\mathfrak{B}_1\subseteq\mathfrak{B}_2$, and consequently $\mathfrak{B}_1=\mathfrak{B}_2$.

We conclude that a kernel representation $R\left(\frac{d}{dt}\right)w=0$ is equivalent to $VR\left(\frac{d}{dt}\right)w=0$ when $V$ is unimodular.
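
Here is a minimal symbolic sketch in Python (sympy) with a made-up $R_2$ and a unimodular $V$; since $\det V$ is a nonzero constant, $V^{-1}$ is again a polynomial matrix:

import sympy as sp

s = sp.symbols('s')

# Hypothetical kernel representation R2(s) and a unimodular V(s):
R2 = sp.Matrix([[s + 1, 0],
                [1,     s]])
V = sp.Matrix([[1, s],
               [0, 1]])

R1 = (V * R2).expand()   # a different-looking representation of the same behavior
print(R1)
print(V.det())           # 1 (a nonzero constant)  =>  V is unimodular
print(V.inv())           # polynomial inverse  =>  ker R1(d/dt) = ker R2(d/dt)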

* For an electrical engineer like me this result is striking. Personally, I was accustomed to associating the laws of a given system with one particular set of equations derived from physical principles. However, any set of equations that we write down to describe the laws of a system is only one of the many mathematical models that can be used. In other words, the result recalled here suggests that trajectories are indeed more fundamental than representations.

For further elaboration please refer to:

[1] Polderman, J. W., Willems, J. C., Introduction to mathematical systems theory: a behavioral approach, Springer, 1998.

Thursday, 2 April 2015

Behavioral system theory

In this post I introduce the mathematical language that will be frequently used to develop further concepts and topics in this blog.

Here I basically adopt the notation, ideas and principles of behavioral system theory. At the end of this post I include some references to more complete material (including proofs) regarding this theory.


Notation

$\mathbb{R}^{n}$ denotes the space of $n$ dimensional real vectors.

$\mathbb{R}^{m\times n}$ denotes the space of $m\times n$ real matrices.

$\mathbb{R}^{\bullet\times m}$ denotes the space of real matrices with $m$ columns and an unspecified finite number of rows. 

Given matrices $A,B\in\mathbb{R}^{\bullet\times m}$, $\mbox{col}(A,B)$ denotes the matrix obtained by stacking $A$ over $B$.

$\mathbb{R}[s]$ denotes the ring of polynomials with real coefficients in the indeterminate $s$.

$\mathbb{R}^{m\times n}[s]$ denotes the set of $m\times n$ matrices with entries in $\mathbb{R}[s]$.

$\mathbb{R}^{m\times n}(s)$ denotes the set of rational $m\times n$ matrices.

$\mathfrak{C}^{\infty}(\mathbb{R},\mathbb{R}^{\tt w})$ denotes the set of infinitely differentiable functions from $\mathbb{R}$ to $\mathbb{R}^{{\tt w}}$.


Linear differential behaviors



Consider a linear time-invariant dynamical system whose physical laws are described by the following set of linear differential equations:

$R_0 w + R_1 \frac{d}{dt} w+ \cdots + R_L \frac{d^L}{dt^L} w =0$,

where $R_i\in\mathbb{R}^{\bullet\times \tt w}$, $i=0,\ldots,L$, and $w=\mbox{col}(w_1,\ldots,w_{\tt w})$ is the vector of external variables.

Such equations can be expressed in a compact way as 

$R\left(\frac{d}{dt}\right) w =0$,

where $R\in\mathbb{R}^{\bullet\times \tt w}[s]$ is given by $R(s)=R_0 + R_1 s + \cdots + R_L s^L$ and represents the set of differential equations.

If we adopt $\mathfrak{C}^{\infty}$ as the solution space, we can define a linear differential behavior as

$\mathfrak{B}:=\left\{ w\in \mathfrak{C}^{\infty}(\mathbb{R},\mathbb{R}^{\tt w}) ~\mid~ R\left(\frac{d}{dt}\right) w =0 \right\}$.

In other words, the behavior is defined as the set of trajectories in the kernel of the differential operator $R\left(\frac{d}{dt}\right)$. For simplicity, we often refer to this set of trajectories as $\ker~R\left(\frac{d}{dt}\right)$.

The set of linear differential behaviors taking their values in the signal space $\mathbb{R}^{\tt w}$ is denoted by $\mathfrak{L}^{\tt w}$.

Controllability and Observability


Definition 1. A behavior $\mathfrak{B}\in \mathfrak{L}^{\tt w}$ is controllable if for all $w_1, w_2 \in \mathfrak{B}$ there exist $t' \ge 0$ and $w \in \mathfrak{B}$ such that $w(t)=w_1(t)$ for $t < 0$ and $w(t + t')=w_2(t)$ for $t \ge 0$.

Controllability can be characterized algebraically as follows.

Proposition 1. Let $\mathfrak{B}:=\ker~R\left( \frac{d}{dt} \right)$, with $R\in\mathbb{R}^{\bullet \times \tt w}[s]$ of full row rank as a polynomial matrix. $\mathfrak{B}$ is controllable iff $R(\lambda)$ is of full row rank for all $\lambda \in \mathbb{C}$.
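
For a $1\times 2$ example $R(s)=\begin{bmatrix}p(s) & q(s)\end{bmatrix}$, $R(\lambda)$ loses rank exactly at the common roots of $p$ and $q$, so the test reduces to a coprimeness check. A minimal symbolic sketch in Python (sympy) with made-up polynomials:

import sympy as sp

s = sp.symbols('s')

# Controllable: the entries of R(s) = [s + 1, s + 2] have no common root:
print(sp.gcd(s + 1, s + 2))                # 1  =>  R(l) full row rank for all l

# Not controllable: a common factor makes the rank drop at l = -1:
print(sp.gcd((s + 1) * (s + 2), s + 1))    # s + 1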

Controllable behaviors admit a special representation called image representation.

Proposition 2. Let $\mathfrak{B}\in \mathfrak{L}^{\tt w}$. There exist ${\tt z}\in\mathbb{N}$ and $M \in  \mathbb{R}^{\tt w \times \tt z}[s]$ such that $\mathfrak{B}=\left\{ w ~\mid ~ \exists \,z \in\mathfrak{C}^{\infty}(\mathbb{R},\mathbb{R}^{\tt z})\, \, s.t. \, \, w=M \left( \frac{d}{dt} \right) z \right\}$, iff $\mathfrak{B}$ is controllable.

A behavior described by an image representation $w=M \left( \frac{d}{dt} \right) z$ as in Prop. 2 is denoted by $\mbox{im}~M \left( \frac{d}{dt} \right)$. The auxiliary variable $z$ is called a latent variable, whose solution space is $\mathfrak{C}^{\infty}(\mathbb{R},\mathbb{R}^{\tt z})$.

Definition 2. Let $\mathfrak{B}\in\mathfrak{L}^{\tt w}$. Partition the external variable as $w=\mbox{col}(w_1,w_2)$. The variable $w_2$ is observable from $w_1$ if for all $\mbox{col}(w_1,w_2),\mbox{col}(w_1,w_2')\in\mathfrak{B}$, it follows that $w_2=w_2'$.

Proposition 3. Let $\mathfrak{B}\in\mathfrak{L}^{\tt w}$ be a controllable behavior described by $w=M \left( \frac{d}{dt} \right) z$. The latent variable $z$ is observable from $w$ iff $M(\lambda)$ is of full column rank for all $\lambda \in \mathbb{C}$.
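
As a small sketch (Python/sympy, with a made-up $M$): take the image representation $w=\mbox{col}\left(\frac{d}{dt}z,\ \frac{d}{dt}z+z\right)$, i.e. $M(s)=\mbox{col}(s,\ s+1)$. The entries share no root, so $M(\lambda)$ has full column rank for all $\lambda$ and $z$ is observable from $w$; indeed $z=w_2-w_1$.

import sympy as sp

s = sp.symbols('s')

# Hypothetical image representation w = M(d/dt) z with M(s) = col(s, s + 1):
M = sp.Matrix([[s], [s + 1]])

# Full column rank for all l in C  <=>  the entries have no common root:
print(sp.gcd(M[0], M[1]))   # 1  =>  z observable from w (here z = w2 - w1)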

References:


[1] Polderman, J. W., Willems, J. C., Introduction to mathematical systems theory: a behavioral approach, Springer, 1998.

[2] Willems, J. C., The behavioral approach to open and interconnected systems, IEEE Control Systems, no. 6, pp. 46-99, 2007.