\documentclass[reqno]{amsart}
\usepackage{hyperref}
\usepackage{multirow}

\AtBeginDocument{{\noindent\small
\emph{Electronic Journal of Differential Equations},
Vol. 2017 (2017), No. 68, pp. 1--25.\newline
ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu}
\thanks{\copyright 2017 Texas State University.}
\vspace{7mm}}

\begin{document}
\title[\hfilneg EJDE-2017/68\hfil Nonlinear differential equations]
{Nonlinear differential equations with deviating arguments and approximations
via a Parker-Sochacki approach}

\author[V. M. Isaia \hfil EJDE-2017/68\hfilneg]
{Vincenzo M. Isaia}

\address{Vincenzo Michael Isaia \newline
Department of Mathematics,
Rose-Hulman Institute of Technology,
Terre Haute, IN 47803, USA}
\email{isaia@rose-hulman.edu}

\thanks{Submitted April 1, 2016. Published March 8, 2017.}
\subjclass[2010]{34K07, 34K40, 65L03}
\keywords{Delay differential equations; lag; PSM method; method of steps;
\hfill\break\indent  method of successive approximation; deviating argument}

\begin{abstract}
 The Parker-Sochacki method has been successful in generating approximations
 for a wide variety of ODEs, and even PDEs of evolution type, by achieving an
 autonomous polynomial vector field and implementing the Picard iteration.
 The intent of this article is to extend PSM to a large family of
 differential equations with deviating arguments.  Results will be given for
 problems with delays which are linear in time and state independent and which
 have constant initial data, and for nonlinear differential equations which are
 retarded, neutral or advanced.  The goal of the proofs is to motivate a numerically
 efficient DDE solver.  In addition, an explicit a priori error estimate that does
 not require derivatives of the vector field is presented. The non-constant initial
 data cases and the state dependent delay cases are discussed formally.
\end{abstract}

\maketitle
\numberwithin{equation}{section}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{example}[theorem]{Example}
\allowdisplaybreaks


\section{Introduction}

In 1964, Fehlberg \cite{F} produced a technical report for NASA that touched upon
the benefits of auxiliary variables in solving ODEs.  This idea appears to have
gone unnoticed until 1989, when Parker and Sochacki recognized the benefit of
a polynomial environment for the Picard iteration, whose integral form both
preserves the polynomial environment and computes the appropriate
Taylor polynomial for the analytic solution on its domain.

The method of Parker and Sochacki was introduced in \cite{PS} and the structure
of the class of ODEs was studied in \cite{CPSW}.
The method was extended to PDEs in \cite{PSPDE} and an explicit a priori error
estimate, which does not rely on derivatives of the
vector field in the ODE case, was given in \cite{WWSPC}; see also \cite{ST}.
The method was dubbed the PS method in \cite{SB};
here the acronym PSM is used.  This method as applied to ODEs will be reviewed
in Section \ref{sec:recap}, along with the
presentation of an array interpretation that will be applicable in the deviating
argument case and an example which converts a
non-polynomial and non-autonomous vector field to an autonomous, polynomial one.

After establishing an existence and uniqueness result via the method of successive
approximations in Section \ref{sec:eut},
the central intent of this article is to adapt PSM to certain nonlinear
differential equations with deviating arguments, referred
to as DDEs, whose initial data is polynomial.  This approach, dubbed dPSM,
is presented in Section \ref{sec:psmdde}.
In addition, two examples are presented, one that converts a DDE's vector field
to an autonomous, polynomial one and
another that highlights a special case of how information can propagate.

In the case of linear in time and state independent delays, structures for the
approximation to the solution across time can be established, and this is done in
Section \ref{sec:res}.  The purpose of highlighting these structures is so they
can be leveraged computationally.  To understand the structure across iterations,
which can also provide computational benefit, the array approach from
Section \ref{sec:recap} for PSM is developed in Section \ref{sec:FIS} for dPSM.

Two more examples are also given in Section \ref{sec:FIS}: one that
looks more carefully at converting vector fields, since this process is not unique,
and one that shows how `lag' dependence appears and propagates in
the coefficients of the polynomial series approximation.  Section \ref{sec:FIS}
also contains the proof of the structure across iterations, as well as explicit
a priori error bounds that do not rely on higher order derivatives of the
vector field.  A formal discussion occurs in
Section \ref{sec:extra} concerning the use of dPSM for state dependent delays.

\section{PSM overview} \label{sec:recap}

Given an initial value problem or IVP, based on a scalar ODE, the introduction of
a polynomial vector field to the method of successive
approximation allows the Picard iteration to be viewed as more than just a
theoretical tool.  If the Picard iteration can be shown to
converge, establishing existence and uniqueness of a solution to the IVP, then
a polynomial vector field will preserve the polynomial
form of the initial data, which is constant, for all iterations, and a structure
arises that allows the coefficients of the polynomial to be
computed efficiently.  In addition to requiring a polynomial form, the vector
field also needs to be autonomous, and the problem must begin
at time $t_0 = 0$, which can be achieved when $t_0 \neq 0$ via the change
$\underline{t} = t - t_0$, since $\frac{\mathrm{d}}{\mathrm{d}t} =
\frac{\mathrm{d}}{\mathrm{d}{\underline{t}}}$.  Picard's iteration then generates
a Maclaurin polynomial, and shifting the time variable recovers the
required Taylor polynomial of the solution.

An autonomous polynomial vector field requires the introduction of auxiliary
variables that replace non-polynomial terms with polynomial terms, where
nonlinearity and non-autonomous terms are exchanged for a larger system size.
It can be shown that the analytic functions that arise as (a component of the)
solution to an ODE with a polynomial vector field occupy a large portion of
the class of all analytic functions, although the two classes are not equal,
see \cite{CPSW}.
The solution of an ODE whose vector field can be transformed into a polynomial one,
is dubbed \emph{projectively polynomial}.
The vector fields under consideration in this article are those which have
projectively polynomial components.

The following example takes a non-polynomial vector field, which is non-autonomous,
begins at an arbitrary starting time $t_0 \neq 0$,
and converts the vector field to one suitable for \emph{PSM}, using auxiliary
variables to achieve the proper form for the vector
field and initial time; Picard's iteration and the inherent structure then
compute coefficients efficiently.

\begin{example} \label{examp1} \rm
Suppose $u' = \cos(u) + \cos(t)$ and $u(t_0) = u_0$ with $t_0 \neq 0$.
Then introduce $V = \cos(u)$ and $T = \cos(t)$ so that $u' = V + T$ and also
$W = \sin(u)$, so that $V' = -W u' = -W (V + T)$ and $W' = Vu' = V(V + T)$.
To get $T'$, introduce $S = \sin t$ so that $T'=-S$ and $S'=T$.

For data given at $t_0 \neq 0$, Picard's iteration produces polynomials in powers
of $t$.  This does not align with the Taylor polynomial, which would be in powers
of $t-t_0$.  To remedy this, introduce $\underline{t} = t - t_0$ and
$U(\underline{t}) = u(t)$; the
method of successive approximations will now generate a Maclaurin polynomial
for each component, and the Taylor polynomial
for the original problem's solution is obtained by substituting $t-t_0$ for
$\underline{t}$ in the first component $U$.  Then
$u' = \cos u + \cos t$, with $u(t_0) = u_0$, is equivalent to the system
\begin{gather*} %\label{ex}
\begin{cases}
U'= V + T, \quad U(0) = u_0 \\
V'=-W(V+T), \quad  V(0) = \cos(u_0) \\
W'= V(V+T), \quad W(0) = \sin(u_0) \\
T'=-S, \quad  T(0) = \cos(t_0) \\
S'= T,\quad S(0) = \sin(t_0)
\end{cases}
  \end{gather*}
which has an autonomous polynomial vector field, indeed in this case, quadratic.
 It is possible to reduce any polynomial vector field of arbitrary degree to one
that is at most degree two \cite{CPSW}.  In addition, this reference contains a
construction of an analytic function that is not projectively polynomial,
i.e., its vector field cannot be transformed as per this
example.
\end{example}
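To make the procedure concrete, here is a minimal sketch (an illustration, not part of the original development; the helper names \texttt{pmul}, \texttt{pint} and \texttt{psm} are this sketch's own) that evolves the system above with Picard iteration using only polynomial coefficient arithmetic.  The first two Maclaurin coefficients can be checked against $u'(t_0) = \cos u_0 + \cos t_0$ and $u''(t_0) = -\sin(u_0)\,u'(t_0) - \sin t_0$.

```python
import math

def pmul(p, q, deg):
    """Product of coefficient lists, truncated at degree `deg`."""
    r = [0.0] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                r[i + j] += a * b
    return r

def padd(p, q):
    """Sum of coefficient lists (possibly of different lengths)."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def pint(p, c0):
    """Picard step: antiderivative of p with constant term c0."""
    return [c0] + [a / (i + 1) for i, a in enumerate(p)]

def psm(u0, t0, K):
    """K Picard iterations on the auxiliary system of the example."""
    U, V, W = [u0], [math.cos(u0)], [math.sin(u0)]
    T, S = [math.cos(t0)], [math.sin(t0)]
    for k in range(K):
        VT = padd(V, T)
        U, V, W, T, S = (
            pint(VT, U[0]),                            # U' = V + T
            pint([-c for c in pmul(W, VT, k)], V[0]),  # V' = -W(V+T)
            pint(pmul(V, VT, k), W[0]),                # W' = V(V+T)
            pint([-c for c in S], T[0]),               # T' = -S
            pint(T, S[0]),                             # S' = T
        )
    return U  # Maclaurin coefficients of U, i.e. of u(t0 + t)
```

Truncating each product at degree $k$ is harmless, since only the coefficient of $t^{k+1}$ is new at iteration $k$, as discussed below.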

 Note that if the original ODE were higher order, then in addition,
 the standard change of variables would have been applied
to convert it to a first order system, and this change produces polynomial
terms as well.  In addition, there exists an explicit a priori
error estimate that does not involve any higher order derivatives of the
 original vector field, see \cite{WWSPC}.

A subtle and relevant point is that the initial data poses no issue since $u_0$
and $t_0$ are constants, which is always the case for
ODEs.  So, $\cos(u_0)$, $\cos(t_0)$ etc. are polynomials, in fact constants.
This would not be the case, for example, if $u_0$ were not constant, but rather
a polynomial of degree greater than zero, as is the case with some DDE problems.
This will be addressed again in the example at the beginning of
Section \ref{sec:psmdde}.

Let $\mathbf{u}_k(t) = \boldsymbol{\Psi} + \sum_{i=1}^{d_k} \mathbf{a}_{ki} t^i$ be
PSM's approximation to an IVP with initial data $\boldsymbol{\Psi}$.  It
was shown in \cite{PS} that PSM leaves invariant, in all subsequent iterations,
coefficients for powers of the argument $t$ smaller
than the current number of iterations, i.e. $\mathbf{a}_{ki} = \mathbf{a}_{k+n,i}$
for any $n \geq 0$ if $i \leq k$.    Then only $t^{k+1}$
needs to have its coefficient computed; prior powers of $t$ will have the
same coefficient, while the coefficients on larger powers
of $t$ change in later iterations.  A consequence discussed below is that these
terms will not be computed until a later iteration.
Note that only powers of $t$ less than or equal to $k$ are capable of
producing $t^{k+1}$ after integration.
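A minimal sketch of this invariance (an illustration, not from the article): for $u' = u^2$, $u(0) = 1$, the exact solution $1/(1-t)$ has Maclaurin coefficients all equal to $1$; truncating the integrand of iteration $k$ at degree $k$ still produces the correct coefficient of $t^{k+1}$, and coefficients computed in earlier iterations never change.

```python
def picard_u_squared(K):
    """Picard iteration for u' = u^2, u(0) = 1, truncating the
    integrand of iteration k at degree k (only t^{k+1} is new)."""
    u = [1.0]
    for k in range(K):
        sq = [0.0] * (k + 1)          # (u_k)^2, truncated at degree k
        for i, a in enumerate(u):
            for j, b in enumerate(u):
                if i + j <= k:
                    sq[i + j] += a * b
        u = [1.0] + [c / (i + 1) for i, c in enumerate(sq)]
    return u
```

After $K$ iterations the sketch returns the coefficients of $\sum_{i=0}^{K} t^i$, the degree-$K$ Taylor polynomial of $1/(1-t)$, and `picard_u_squared(3)[:4] == picard_u_squared(6)[:4]` exhibits $\mathbf{a}_{ki} = \mathbf{a}_{k+n,i}$.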

For example, the array in Figure \ref{PA} would represent the case of a vector
field consisting of a multiplication of two components.  A general vector field
would be a linear combination of terms, each of which could be represented
by such an array.  The linear combination will not disturb what is found in
this particular example.  Assume the current
approximations are the quadratics $a_1 + b_1t + c_1 t^2$ and $a_2 + b_2t + c_2 t^2$.
The empty row and column are included for visual purposes to accommodate the
next power.  The underlined terms are not computed by this integration.

\begin{figure}[ht]
\begin{center}
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{c|c|c|c|}
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{$a_1$}
&\multicolumn{1}{c}{$b_1t$} 
&\multicolumn{1}{c}{$c_1t^2$} \\
 \cline{2-4}
$a_2$ & $a_1a_2t$ & $\frac{b_1a_2}{2}t^2$ & $\frac{c_1a_2}{3}t^3$ \\
  \cline{2-4}
 $b_2t$ & $\frac{a_1b_2}{2}t^2$ & $\frac{b_1b_2}{3}t^3$ & $\underline{t^4}$   \\
\cline{2-4}
 $c_2t^2$ & $\frac{a_1c_2}{3}t^3$ & $\underline{t^4}$ & $\underline{t^5}$   \\
 \cline{2-4}
\end{tabular}
\end{center}
\caption{Integration of vector field array - ODE} \label{PA}
\end{figure} 


To see why powers are invariant with subsequent iteration, note that the constant
terms $a_1$ and $a_2$ are fixed by the initial data for all iterations, so all
entries involving only these two coefficients are fixed for all iterations.
In addition, the ordering implies each off diagonal corresponds to a unique power
of $t$.  Hence, these two points imply that the $t^1$ terms, namely $b_1$ and $b_2$,
are fixed. This idea is extended via induction to each component \cite{PS}.

To see why the underlined terms aren't computed, note that only the off diagonals
above and including the main off diagonal are complete: in the example above,
 the $t^4$ off diagonal still needs the $t^3$ times the constant terms integrated
before all the $t^4$ terms have been produced in the array.
So the underlined $t^4$ terms are not used in the vector field during the next
integration.  But they are included in integrations after that, when $t^4$ becomes
the main off diagonal for the array.

Computationally, one can specialize the integration to only compute terms with
powers one larger than the current degree.
The approximation's degree then only increases by one per iteration.
This power's coefficient is written as a scalar multiple
of the convolution $\sum a_j b_{d-j}$ for an approximation of degree $d$.
For convenience, the discussion here assumes only the relevant powers are computed,
and hence, the presentation uses the language that powers greater than the current
iteration will not be computed.
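A sketch of this specialization (illustrative, assuming both factors are degree-$d$ coefficient lists): the only new coefficient after integrating a product term is that of $t^{d+1}$, a convolution scaled by $1/(d+1)$.

```python
def next_coeff(a, b):
    """Coefficient of t^{d+1} in the integral of a(t)*b(t), where a and b
    are degree-d coefficient lists: the convolution divided by d+1."""
    d = len(a) - 1
    return sum(a[j] * b[d - j] for j in range(d + 1)) / (d + 1)
```

For example, with $a(t) = 1 + 2t + 3t^2$ and $b(t) = 4 + 5t + 6t^2$, the $t^2$ coefficient of the product is $1\cdot 6 + 2\cdot 5 + 3\cdot 4 = 28$, so the new $t^3$ coefficient of the integral is $28/3$, computed without forming the full product.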

\section{DDEs and convergence of Picard iteration} \label{sec:eut}

The family of differential equations with deviating arguments that are considered is
\begin{equation} \label{DDE}
 \begin{gathered}
D^{L_1} u(t) = f\big(t,D_{\mathbf{L}_1}u(t),D_{\mathbf{L}_2}u( \Delta(t))
\big), \quad t > t_0 \\
u(t) = \Psi(t), \quad  t \in [a,t_0]
\end{gathered}
\end{equation}
with $L_1$, $L_2 \in \mathbb{N}$, $t_0 \in \mathbb{R}$ and $D$ being the differential operator
with the subscripted version indicating
an ordered list of derivatives of $u$.  The entries of the subscript specify which
derivatives appear; for example, $D_{[0,2,4]} u(t)$
would indicate that $f$ has a $u(t)$, $u''(t)$ and $u^{(4)}(t)$ dependence.
For convenience, take $\mathbf{L}_1 = [0,\ldots,L_1]$ and
let $f$ absorb unused derivatives.  Similarly, take $\mathbf{L}_2 = [0,\ldots,L_2]$,
with $L_2 \in \mathbb{N}$ independent of $L_1$.

Following \cite{ELS} and \cite{NOR}, if $L_1 > L_2$ the equation is considered
 to be \emph{retarded}, if $L_1 = L_2$, then
the equation is considered to be \emph{neutral}, while if $L_1 < L_2$
the equation is considered to be \emph{advanced}.  In
practice, one would specify a particular $\mathbf{L}_1 \subseteq [0,\ldots,L_1]$
and $\mathbf{L}_2 \subseteq [0,\ldots,L_2]$
to use.  It is interesting to note that the approach developed here does not
need to be altered if the equation is retarded, neutral or advanced.

The delay structure $\Delta(t)$ as a function of $t$ is assumed to be continuous
and strictly increasing with $\Delta(t) < t$.
Otherwise, the delay structure is left as general as possible until it becomes
necessary to keep track of powers of
the delay argument, in which case only the linear structure
$\Delta(t) = \sigma t - \tau$ with some $0 < \sigma \leq 1$ and
$\tau \in \mathbb{R}^+$ is addressed.

A state dependent delay structure is also examined, once the role of the delay
structure in the solution is realized, but this will be formal.
Only one delay structure is addressed in this article, although an extension of
this approach to a single problem with several delay structures is possible.
In general, the delay structure $\Delta(t)$ determines the value of $a$.
The specifics of this will appear later in this section.

The initial data needs to be polynomial to adapt PSM to the DDE case.
If a problem has analytic initial data, for example
see \cite{NDE}, one could consider approximating via a Taylor polynomial.
However, two major proofs here will be restricted
to constant initial data.  The difficulties encountered for polynomial initial
data are pointed out after the result for the constant
case is established.  The polynomial initial data case will be addressed in \cite{ISQ}.

The standard change of variables converts \eqref{DDE} to a first order system,
so that the Picard iteration is applied to
\begin{equation} \label{DDEs}
\begin{gathered}
D\mathbf{u}(t)=\mathbf{f}\big(t,\mathbf{u}(t),\mathbf{u}(\Delta(t))\big),
\quad t > t_0 \\
\mathbf{u}(t)= \boldsymbol{\Psi}(t), \quad t \in [a,t_0]
\end{gathered}
\end{equation}
where $L \equiv \max \{L_1,L_2\}$,
 $\mathbf{u} = (u^l)^{L}_{l=0}$ is such that $u^l = D^l u$
and $\boldsymbol{\Psi} = (\Psi^l)^{L}_{l=0}$, hence, $\Psi^l = D^l \Psi$.
For constant $\Psi$, note that $D^l \Psi = 0$
for $l > 0$.  Thus, the approximation method to come may also be applied to
systems of higher order DDEs of the form \eqref{DDE}.

It can be shown that if the method of successive approximations is applied
to this first order system, then the sequence of approximations is uniformly
convergent and its limit solves \eqref{DDEs} uniquely, even if the vector
field is not polynomial or autonomous.  This proves the existence, uniqueness
and continuity of a solution to \eqref{DDE}.  The proof from \cite{CL}
is now adapted to the deviating argument case, and a general, but state
independent, delay structure is assumed.

Let $\mathbf{f}$ in \eqref{DDEs} be continuous with respect to $t$ and
Lipschitz continuous in the remaining variables on $[t_0,T^*]
\times [-U,U]^{2L}$, with $U$, $T^* < \infty$.  Such vector fields are called
\emph{admissible}.  Denote by $C_1^l$ the Lipschitz
constant of admissible $\mathbf{f}$ with respect to component $u^l(t)$ of
 $\mathbf{u}(t)$ and denote by $C_2^{l}$ the Lipschitz
constant of admissible $\mathbf{f}$ with respect to component $u^l(\Delta(t))$ of
 $\mathbf{u}(\Delta (t))$.  Define $C_f \equiv \sum_{l=0}^{L}
\big(C_1^l + C_2^l\big)$ and $M_f \equiv \max_l \max_{[t_0,T^*]} |f^l|$.
 General initial data is assumed in the following proposition.

\begin{proposition} \label{bc}
For admissible vector fields $\mathbf{f}$, let
$T \equiv \min \{T^*,UM_f^{-1}\}$ and let
$\boldsymbol{\Psi}$ in \eqref{DDEs} be analytic and such that
$ \max_l \max_{[a,T^*]} |\Psi^l(t) - \Psi^l(t_0)| < \infty$
and for $t> t_0$, let $\mathbf{u}_0(t) = \boldsymbol{\Psi}(t)$.
For each $k \geq 0$ define
\begin{equation} \label{PI}
\mathbf{u}_{k+1}(t)  = \mathbf{u}_k(t_0)
+ \int^t_{t_0} \mathbf{f}\big(s,\mathbf{u}_k(s),
\mathbf{u}_k ( \Delta(s))\big) \,\mathrm{d}{s}
\end{equation}
then $\mathbf{u}_k(t)$ converges uniformly to some $\mathbf{u}_*(t)$,
which solves \eqref{DDEs} uniquely over $[t_0,T]$.
\end{proposition}


\begin{proof}
 Analyticity of $\boldsymbol{\Psi}$ implies $\mathbf{u}_0 \in C^1([t_0,T])$.
Applying induction, let $\mathbf{u}_k \in C^1([t_0,T])$.
Since each component of $\mathbf{f}$ is continuous, the Fundamental Theorem
of Calculus implies $\mathbf{u}_{k+1} \in C^1([t_0,T])$, so
$\mathbf{u}_k \in C^1([t_0,T])$ for $k \in \mathbb{Z}^+$.

Denote $u^l_k$ as a component of $\mathbf{u}_k(t)$ and denote $\Psi^l$ and
$f^l$ as components of $\boldsymbol{\Psi}$ and $\mathbf{f}$,
respectively.  Denote $f_k^l(t) = f^l(t,\mathbf{u}_k(t),\mathbf{u}_k (\Delta(t)))$.
 One has for every $1 \leq l \leq L$ that
\[
|u^l_1 - u^l_0|(t) = \big| \Psi^l(t_0) + \int^t_{t_0} f^l_0(s) \,\mathrm{d}{s}
 - \Psi^l(t) \big|
\leq M_{\Psi} + M_f (t - t_0)
\]
which follows since $|\Psi^l(t)-\Psi^l(t_0)|
\leq M_{\Psi} \equiv \max_l \max_{[a,T]} |\Psi^l(t) - \Psi^l(t_0)| < \infty$
by hypothesis.

Consider $|u^l_1 - u^l_0|(\Delta(t))$.  If $\Delta(t) \leq t_0$, then
\[
|u^l_1 - u^l_0|(\Delta(t)) = |\Psi^l(\Delta(t))-\Psi^l(\Delta(t))| = 0
\]
otherwise, if $\Delta(t) > t_0$, then
\[
|u^l_1 - u^l_0|(\Delta(t))
\leq \big| \Psi^l(t_0) + \int^{\Delta(t)}_{t_0} f^l_0(s) \,\mathrm{d}{s}
- \Psi^l(\Delta(t)) \big|
\leq M_{\Psi} + M_f(t-t_0)
\]
since $\Delta(t) < t$ by hypothesis.  So both $|u^l_1 - u^l_0|(t)$
and $ |u^l_1 - u^l_0|(\Delta(t))$ are bounded by
$M_{\Psi} + M_f(t-t_0)$ for every $1 \leq l \leq L$.

From the Lipschitz continuity of $\mathbf{f}$ and the previous bound,
one has
\begin{equation} \label{E1}
| u^l_2-u^l_1|(t) \leq \int^t_{t_0} |f^l_1 - f^l_0|(s) \,\mathrm{d}{s}
\leq \int^t_{t_0} C_f \big(M_{\Psi} + M_f (s - t_0)\big) \,\mathrm{d}{s}
\end{equation}
so $|u^l_2-u^l_1|(t) \leq C_f \left(\frac{1}{2} M_f (t-t_0)^2 + M_{\Psi}(t-t_0)\right)$
 for every $1 \leq l \leq L$.  Moving to the
delayed terms, if $\Delta(t)<t_0$ then $|u^l_2-u^l_1|(\Delta(t)) = 0$ as before,
otherwise
\[
|u^l_2-u^l_1|(\Delta(t)) \leq \int^{\Delta(t)}_{t_0}
|f^l_1-f^l_0|(s) \,\mathrm{d}{s}
\leq \int^{t}_{t_0} |f^l_1-f^l_0|(s) \,\mathrm{d}{s}
\]
and so $|u^l_2-u^l_1|(\Delta(t))$ along with $|u^l_2-u^l_1|(t)$ are both bounded by
$C_f \big(\frac{1}{2} M_f (t-t_0)^2 + M_{\Psi}(t-t_0)\big)$.
Continuing in this fashion shows that
\begin{equation} \label{E2}
\begin{aligned}
|u^l_{k+1}-u^l_k |(t)
& \leq \int^t_{t_0} |f^l_k - f^l_{k-1}|(s) \,\mathrm{d}{s} \\
& \leq \int^t_{t_0} C_f^k \Big(M_{\Psi} \frac{(s-t_0)^{k-1}}{(k-1)!}
+ M_f \frac{(s-t_0)^k}{k!} \Big) \,\mathrm{d}{s}
\end{aligned}
\end{equation}
subsequently, one has
\[
|u^l_{k+1} - u^l_k|(t)
\leq C_f^k \Big(M_{\Psi} \frac{(t-t_0)^k}{k!} + M_f \frac{(t-t_0)^{k+1}}{(k+1)!}\Big)
\]
which implies that for every component $1 \leq l \leq L$,
\[
\sum^{\infty}_{k=0}|u^l_{k+1} - u^l_k|(t)
\leq \left(M_{\Psi} + M_f (t-t_0) \right)
\sum^{\infty}_{k=0} C_f^k \frac{(t-t_0)^k}{k!}
\]
and one can conclude, from the telescoping sum
$u^l_{k+1} = u^l_0 + \sum^k_{j=0} \big(u^l_{j+1}-u^l_j\big)$, that
\[
|u^l_{k+1} - u^l_0|(t) \leq \sum^{k}_{j=0} |u^l_{j+1}-u^l_j|(t)
\leq \left(M_{\Psi} + M_f (t-t_0) \right) e^{C_f (t-t_0)}
\]
and so $u^l_k \to u_*^l$ uniformly by the Weierstrass M-test, with $u_*^l$
continuous.  Hence \eqref{PI} holds in the limit,
which is equivalent to $\mathbf{u}_* = (u_*^l)^L_{l=0}$ solving \eqref{DDEs}
since the uniform convergence allows
the limit to be exchanged with the integral.  Using the bounds developed above
and adapting the argument from \cite{NOR}, it is straightforward to show that
$\mathbf{u}_*$ is unique.
\end{proof}

 The method of successive approximation is a global approximation, in that it
updates $\mathbf{u}_k$ over $[t_0,T]$.  However,
such a global update is computationally difficult with a DDE.  On the other hand,
the method of steps, see \cite{CTH} for example, is
computationally friendly but at the expense of being a local approximation.
For the method of steps, the approach is to reduce \eqref{DDE}
to a sequence of ODEs: given $t_0$, then prior to a certain point in time,
to be determined, terms like $\mathbf{u}(\Delta(t))$ are found in
terms of $\Psi$, which is known, so that \eqref{DDEs} defaults to an ODE,
although it may have non-constant coefficients and/or
non-homogeneous terms.

Define $\tau_0 \equiv t_0$ and denote by $\tau_1$ the point in time prior to
which $\mathbf{u}(\Delta(t))$ can be found in terms of $\Psi$.
By standard theory, if the vector field is Lipschitz continuous, then the
unique solution that ensues would be valid over $[\tau_0, \tau_1]$.
Denote this solution by $\mathbf{u}_0(t)$.  In order for the delay term
$\mathbf{u}(\Delta(t))$ in the vector field to be known, $\Delta(t)$ must be
less than or equal to $\tau_0$ over $[\tau_0,\tau_1]$.
Using the fact that $\Delta$ is strictly increasing and solving $\Delta(t) = \tau_0$
for $t$ would determine $\tau_1$.  In addition, $\Delta(\tau_0)$ represents
the farthest back in time information is needed for the vector field,
and so $\Delta(\tau_0)  = \tau_{-1} = a$.

With the problem solved over $[\tau_0, \tau_1]$, consider $\tau_m$ defined
as the solution to $\Delta(t) = \tau_{m-1}$.  Define
$\mathcal{T} \equiv \{\tau_m : \Delta(\tau_m) =\tau_{m-1}, m \in \mathbb{Z}^+_0 \}$
and let $\mathbf{u}_{m}(t)$ be
the unique solution to \eqref{DDEs} over $[\tau_{m},\tau_{m+1}]$.
Consider solving \eqref{DDEs} over $[\tau_{m+1},\tau_{m+2}]$ to
get $\mathbf{u}_{m+1}$.  Then the delay terms
$D_{\mathbf{L}_2} \mathbf{u}_{m+1}(\Delta(t))$ for $t > \tau_{m+1}$ would involve
$\mathbf{u}_{m}(t)$, which is known.  Denote the unique solution by
$\mathbf{u}_{m+1}(t)$.  Hence
\begin{equation}
\mathbf{u}(t) = \{\mathbf{u}_{m}(t)  \text{ over $[\tau_m,\tau_{m+1}]$ for }
 m \in \mathbb{Z}^+_0\}
\label{MOS}
\end{equation}
solves \eqref{DDEs}.  The overlap in endpoints for the $\tau$-steps is not
an issue due to the continuity of the solution
to \eqref{DDEs} as per Proposition \ref{bc}.

The method of steps is represented by the intervals $[\tau_m,\tau_{m+1}]$,
or \emph{$\tau$-steps}, and computationally this approach
is restrictive in the sense that it updates the solution one $\tau$-step at
a time: the solution needs to be known in $[\tau_{m-1},\tau_m]$
before it can be determined in $[\tau_m,\tau_{m+1}]$.
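A concrete sketch of the method of steps (the test problem $u'(t) = -u(t-1)$ with $\Psi \equiv 1$ on $[-1,0]$, and the trapezoidal grid, are illustrative choices, not from the article): with constant lag $\tau = 1$ one has $\tau_m = m$, and each $\tau$-step integrates an ODE whose delayed term is taken from the previous step.

```python
def method_of_steps(M, n=1000):
    """Solve u'(t) = -u(t-1), u = 1 on [-1,0], over M unit tau-steps
    with trapezoidal quadrature; returns u at tau_0, ..., tau_M."""
    h = 1.0 / n
    hist = [1.0] * (n + 1)  # u on [tau_{m-1}, tau_m]; initially Psi on [-1,0]
    ends = [1.0]            # u(tau_m) values, starting from u(0) = Psi(0) = 1
    for _ in range(M):
        seg = [ends[-1]]
        for i in range(n):
            # u' = -u(t-1): the delayed value lies one full tau-step back,
            # i.e. in `hist`, which is already known
            seg.append(seg[-1] - 0.5 * h * (hist[i] + hist[i + 1]))
        ends.append(seg[-1])
        hist = seg          # this segment is the delayed data for the next step
    return ends
```

Since the integrands here are piecewise linear, the trapezoidal rule is exact up to rounding: the sketch reproduces $u(1) = 0$ and $u(2) = -1/2$, and it also exhibits the restriction of the method: step $m$ cannot start until step $m-1$ is finished.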

Anticipating the Picard iteration, which will cause the delay structure to
be composed with itself, notice that $\Delta: [\tau_m,\tau_{m+1}] \to
[\tau_{m-1},\tau_m]$, and define the $m$-fold composition
\begin{equation} \label{DC}
\Delta_m(t) \equiv  \Delta \circ \cdots \circ \Delta (t)
\end{equation}
 with $\Delta_1(t) = \Delta(t)$ and $m > 0$.
Define $\Delta_0(t) \equiv t = t- \tau_0$ and $\Delta_{-1}(t) \equiv t-\tau_{-1}$,
where this last definition is for notational convenience.
Then $\Delta_m(\tau_m) = 0$ for all $m \geq -1$.  Note that the linear
in time and state independent delay
\begin{equation} \label{LDS}
\Delta(t)= \sigma t + \tau_{-1}
\end{equation}
with $0 < \sigma \leq 1$ and \emph{lag} $\tau_{-1} < 0$, includes the constant
lag case: $\Delta(t) = t - \tau$ for some fixed $\tau>0$.
In this case, one has $\tau_m = m\tau$ for all $m \geq 0$,
while $\Delta_m(t)$ reduces to $t - m\tau$.
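A small sketch of these quantities for the linear delay \eqref{LDS} (the function names are this sketch's own): $\tau_m$ solves $\Delta(\tau_m) = \tau_{m-1}$, so $\tau_m = (\tau_{m-1} - \tau_{-1})/\sigma$, and $\Delta_m$ is the $m$-fold composition.

```python
def tau_steps(sigma, tau_minus1, M):
    """tau_0 = 0, and tau_m solves sigma*t + tau_minus1 = tau_{m-1}."""
    taus = [0.0]
    for _ in range(M):
        taus.append((taus[-1] - tau_minus1) / sigma)
    return taus

def delta_m(t, m, sigma, tau_minus1):
    """m-fold composition of Delta(t) = sigma*t + tau_minus1."""
    for _ in range(m):
        t = sigma * t + tau_minus1
    return t
```

For $\sigma = 1$ and $\tau_{-1} = -\tau$ this recovers $\tau_m = m\tau$ and $\Delta_m(t) = t - m\tau$; in general $\Delta_m(\tau_m) = 0$, and for $\sigma < 1$ the $\tau$-steps lengthen as $m$ grows.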

\section{dPSM and examples} \label{sec:psmdde}

The PSM philosophy of creating a polynomial environment on which Picard
iteration thrives is now adapted to \eqref{DDEs}.
It is important to note that a change of variable for the delay terms can
come for free, because they may not require an
evolution line in the system.  Once $u(t)$ evolves, one additional functional
evaluation will give the evolution of $u(\Delta(t))$,
and taking derivatives will also evolve $u'(\Delta(t))$, $u''(\Delta(t))$  etc.
as needed because the Picard iteration explicitly
depends on previously computed information.  This contributes to making the
distinction between retarded, neutral and
advanced unnecessary as far as implementing this approach is concerned.

\begin{example} \label{examp2} \rm
 Ignoring the initial information, suppose
$u'(t) = \cos(u(t)) + u(t-\tau) + u'(t-\tau)$ with $t \geq t_0 \neq 0$.
Following Example \ref{examp1}, introduce $V = \cos(U)$ and $W = \sin(U)$
along with $\underline{t} = t - t_0$, $u(t) = U(\underline{t})$,
$R = u(t-\tau)$ and $S = u'(t-\tau)$, so that $U' = V + R + S$,
$V' = -W U' = -W(V + R + S)$ and $W' = VU' = V(V + R + S)$.
Then $u' = \cos u + u(t-\tau) + u'(t-\tau)$ is equivalent to the autonomous
polynomial system
\begin{equation} \label{exd}
 \begin{cases}
U'= V + R + S \\
V'= -W(V + R + S) \\
W'= V(V + R + S)
\end{cases}
\end{equation}

The explicit nature of successive approximations allows the update of $R$
via $R' = S$ to be replaced with a function evaluation from $U$'s evolution,
$R(\underline{t}) = u(t - \tau) = U(\underline{t}-\tau)$.
The system may be handled in its current form, since the vector fields are
evaluated at the previous approximation, so only $R$'s (and $S$'s)
initial information is needed.

There may be a price to pay for the $V$ and $W$ change of variable:
if the initial data is not constant then $\cos(U)$ and $\sin(U)$
will not be polynomials.   This could be handled by using an appropriate
Maclaurin polynomial, where the error could possibly be
controlled through the choice of the polynomial's degree.
Computationally, there is also a price to pay for non-constant initial data
in general.
\end{example}

 Parallel to \cite{PS}, the vector field needs to be autonomous, and evolution
needs to begin at $t_0=0$.  A change of variable as per the previous example will
 account for this, and the explicit $t$ dependence can be dropped from $\mathbf{f}$'s
argument with no other change in notation.  The variable $\underline{t}$, as per
the last example, is relabeled as $t$ and the set $\mathcal{T}$ is then determined
using this new time variable.  Invoking the notation for the method of steps,
the IVP \eqref{DDEs} becomes, with $t_0 = \tau_0 = 0$,
\begin{gather*} %\label{DDEs2}
\mathbf{u}'(t)= \mathbf{f}(\mathbf{u}(t),\mathbf{u}( \Delta(t))), \quad t > \tau_0 \\
\mathbf{u}(t)=\boldsymbol{\Psi}(t), \quad  t \in [\tau_{-1},\tau_0]
\end{gather*}
 where $\mathbf{u}$ now includes not only derivatives of $u$ but the auxiliary
variables necessary to make the vector field be
polynomial, autonomous and have $t_0 = 0$.  Although an ordering for the vector
is important in practice, there is no reliance on
ordering in the proofs, hence one is not specified.

To compute the Picard iteration over $[\tau_0,T]$, the method of steps is used
and a very nice computational  structure
appears once the initial data $\boldsymbol{\Psi}$ is extended over the entire
interval $[\tau_0,T]$.   One has
\begin{equation} \label{PIs}
 \mathbf{u}_{k+1}(t)  = \mathbf{u}_k(\tau_0)
+ \int^t_{\tau_0} \mathbf{f}(\mathbf{u}_k(s),
\mathbf{u}_k( \Delta(s))) \,\mathrm{d}s
\end{equation}
 for $t \in [\tau_0,T]$ and $k \geq 0$.  Combining with the method of steps notation,
let $\mathbf{u}_{k}(t) = \mathbf{u}_{km}(t)$
over $[\tau_m,\tau_{m+1}]$ with $m \in \mathbb{Z}^+_0$.  Then \eqref{PIs} becomes
\emph{dPSM}, the \emph{delayed Parker-Sochacki} method:
\begin{equation} \label{MP}
{\mathbf{u}_{k+1,\underline{m}}(t)
= \boldsymbol{\Psi}(\tau_0)
+ \sum^{\underline{m}-1}_{m=0} \int^{\tau_{m+1}}_{\tau_m}
\mathbf{f}_{km}(s) \,\mathrm{d}{s}
+ \int^t_{\tau_{\underline{m}}} \mathbf{f}_{k,\underline{m}}(s) \,\mathrm{d}{s}}
 \end{equation}
 The iterative step of \eqref{MP} uses
$\mathbf{f}_{k,\underline{m}}(s) \equiv \mathbf{f}(\mathbf{u}_{k,\underline{m}}(s),
\mathbf{u}_{k,\underline{m}-1}(\Delta(s)))$.
In the next section, it is shown that this calculation can be modified and performed
for only $\underline{m}=k+1$, and it will `contain'
$\mathbf{u}_{k,\underline{m}}(t)$ for each $0 \leq \underline{m} < k+1$.
A few comments about notation: some indices are comma separated for clarity,
and to streamline notation in later sections, $m$ is changed to $\underline{m}$
so that $m$ can be used as a local index for $\underline{m}$.

If an admissible vector field has at least one delay term in each component,
i.e., for each component $f^l$ of $\mathbf{f}$, the vector of partial
derivatives with respect to the delay terms satisfies
$f^l_{\mathbf{u}(\Delta(t))} \neq {\bf 0}$, then the vector field $\mathbf{f}$
is called \emph{fully delayed}.
For the case of a fully delayed vector field and the linear delay in \eqref{LDS},
it can be established that the following piecewise structure
represents how information propagates for any component $l$ of $\mathbf{u}_k$,
which happens to be the fastest possible propagation,
\begin{equation}
u^l_k(t) =  \begin{cases}
\Psi(\tau_0) + p_{k0}(\Delta_0(t)) & t \in [\tau_0,\tau_1] \\
\Psi(\tau_0) + p_{k0}(\Delta_0(t)) + p_{k1}(\Delta_1(t))  & t \in [\tau_1,\tau_2] \\
\Psi(\tau_0) + p_{k0}(\Delta_0(t)) + p_{k1}(\Delta_1(t)) + p_{k2}(\Delta_2(t)) &
t \in [\tau_2,\tau_3] \\
\dots\\
\Psi(\tau_0) + p_{k0}(\Delta_0(t)) + \cdots + p_{k,k-1}(\Delta_{k-1}(t)) &
 t \in [\tau_{k-1},\tau_k] \\
 \Psi(\tau_0) + p_{k0}(\Delta_0(t)) + \cdots + p_{k,k-1}(\Delta_{k-1}(t)) &
t \in [\tau_k,T]
\end{cases}  \label{IT}
\end{equation}
where $p_{km}$ is a polynomial whose coefficients depend on the iteration
$k$ and on which delay structure $\Delta_{m}(t)$ appears in the
argument, but not on the $\tau$-step that contains $t$.  In particular, in each
$\tau$-step a new delay structure appears, and earlier delay
structures have invariant coefficients: the polynomial $\mathbf{p}_{km}$
has coefficients that depend on the index $m$ of its argument $\Delta_m$,
but not on the index $\underline{m}$ of the $\tau$-step
$[\tau_{\underline{m}},\tau_{\underline{m}+1}]$ containing $t$.   However,  the $\tau$-step
does affect when a delay structure appears, in that
$\Delta_m$ first appears when $t \in [\tau_m,\tau_{m+1}]$.  In compact notation,
\[
 u^l_{k,\underline{m}}(t) = \sum^{\underline{m}_k}_{m=-1} p^l_{km}(\Delta_m(t)),
 \quad t \in [\tau_{\underline{m}},\tau_{\underline{m}+1}],
 \quad \text{with }  p^l_{km}(z) = \sum^d_{i=m+1} a^i_{km} z^i
\]
for $\underline{m}_k \equiv \min \{\underline{m},k-1\}$ where the constant term
$\Psi(\tau_0)$ is hidden in the $\Delta_{-1}(t)$ terms, with $d=0$
if $\underline{m}=-1$.  Theorem \ref{FT} in the next section will establish this
structure.  In addition, the polynomials also exhibit an invariance
across iterations, but this is a little more complicated than the PSM version
and will be handled in Theorem \ref{IAI}.

In \eqref{IT}, note that the functional form in $[\tau_k,T]$ is the same as the
functional form in $[\tau_{k-1},\tau_k]$.  All the currently
computed information has propagated to $[\tau_{k-1},\tau_k]$, and its replication
in $[\tau_k,T]$ simply mimics extending the initial
data to $[\tau_0,T]$.  This is helpful because the invariance across
$\tau$-steps and iterations will show that these terms belong
in the solution eventually, and thus they provide a good forward approximation
in regions where information has yet to propagate,
thereby extending the `local' method of steps.

For general delay structures, when \eqref{MP} is enacted on a fully delayed
vector field, there is a nonzero finite speed of propagation
of the initial information $\boldsymbol{\Psi}$ into $[\tau_0,T]$ with each iteration,
in particular, one $\tau$-step per iteration.  This will manifest
in an element $\tau^k_* \in \mathcal{T}$ with $m^*$ such that $\tau^k_* = \tau_{m^*}$,
for which the solution will have a different
functional dependence in the $\tau$-step $[\tau_{m^*},\tau_{m^*+1}]$
than it did in $[\tau_{m^*-1},\tau_{m^*}]$, and $m^*$ will
be the largest such $m$ for which this is true.

Call this largest element of $\mathcal{T}$ for which a change in functional form
occurs across that element the \emph{extension boundary}.
It will also imply that it is the smallest $\tau_m$ for which $\mathbf{u}_{km}$
has the same functional dependence as $\mathbf{u}_{k,m+1}$.
Here is an example that shows a glimpse of extension boundary behavior when
the vector field is not fully delayed.

\begin{example} \label{examp3} \rm
 Consider $x' = y$, $y' = z$ and $z' = x(t-\tau)$, each with constant initial
data of unity, and note that the right hand sides for
$x$ and $y$ lack delay terms, so the vector field for this problem
is not fully delayed.  Using \eqref{MP} one can compute $x_5(t)$,
$y_5(t)$, and $z_5(t)$.
\[
x_5(t) =  \begin{cases}
x_{50}(t) = 1+t+\frac{t^2}{2!}+\frac{t^3}{3!} & t \in [\tau_0,\tau_1] \\
x_{51}(t) = 1+t+\frac{t^2}{2!}+\frac{t^3}{3!} + \frac{(t-\tau)^4}{4!}
 +\frac{(t-\tau)^5}{5!} & t \in [\tau_1,\tau_2] \\
x_{52}(t) = 1+t+\frac{t^2}{2!}+\frac{t^3}{3!} + \frac{(t-\tau)^4}{4!}
 +\frac{(t-\tau)^5}{5!} &  t \in [\tau_2,T]
 \end{cases}
\]

\[
 y_5(t) = \begin{cases}
y_{50}(t) = 1+t+\frac{t^2}{2!} & t \in [\tau_0,\tau_1] \\
y_{51}(t) = 1+t+\frac{t^2}{2!}+ \frac{(t-\tau)^3}{3!}
 + \frac{(t-\tau)^4}{4!}+\frac{(t-\tau)^5}{5!} & t \in [\tau_1,\tau_2] \\
y_{52}(t) = 1+t+\frac{t^2}{2!}+\frac{(t-\tau)^3}{3!}
+ \frac{(t-\tau)^4}{4!}+\frac{(t-\tau)^5}{5!} & t \in [\tau_2,T]
\end{cases}
\]

\[
 z_5(t) =  \begin{cases}
z_{50}(t) = 1+t & t \in [\tau_0,\tau_1] \\
z_{51}(t) = 1+t+ \frac{(t-\tau)^2}{2!} + \frac{(t-\tau)^3}{3!}
 + \frac{(t-\tau)^4}{4!} & t \in [\tau_1,\tau_2] \\
z_{52}(t) = 1+t+\frac{(t-\tau)^2}{2!}+\frac{(t-\tau)^3}{3!}
 + \frac{(t-\tau)^4}{4!}+\frac{(t-2\tau)^5}{5!} & t \in [\tau_2,T]
\end{cases}
\]
 A partial calculation of $z_5(t)$ will be given, based on
\[
 x_4(t) =  \begin{cases}
x_{40}(t) = 1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} & t \in [\tau_0,\tau_1] \\
x_{41}(t) = 1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} + \frac{(t-\tau)^4}{4!}
 & t \in [\tau_1,T]
\end{cases}
\]
which will be computed over $[\tau_2,\tau_3] = [2\tau,3\tau]$, i.e.
$z_{52}(t)$,
\begin{align*}
z_{52}(t)
&= 1 + \int^t_0 x_4(s-\tau) \,\mathrm{d}{s} \\
& = 1 + \int^{\tau}_0 \Psi(s-\tau) \,\mathrm{d}{s}
 + \int^{2\tau}_{\tau} x_{40}(s-\tau) \,\mathrm{d}{s}
+ \int^t_{2\tau} x_{41}(s-\tau) \,\mathrm{d}{s} \\
&=1 + \int^{\tau}_0 1 \,\mathrm{d}{s} + \int^{2\tau}_{\tau} 1
 + (s-\tau) + \frac{(s-\tau)^2}{2!} + \frac{(s-\tau)^3}{3!} \,\mathrm{d}{s} \\
& \quad + \int^t_{2\tau} 1 + (s-\tau) + \frac{(s-\tau)^2}{2!}
 + \frac{(s-\tau)^3}{3!} + \frac{(s-2\tau)^4}{4!} \,\mathrm{d}{s} \\
&\stackrel{*}{=} 1 + \int^t_0 1 \,\mathrm{d}{s} + \int^t_{\tau} (s-\tau)
 + \frac{(s-\tau)^2}{2!} + \frac{(s-\tau)^3}{3!} \,\mathrm{d}{s}
 + \int^t_{2\tau}  \frac{(s-2\tau)^4}{4!} \,\mathrm{d}{s}  \\
& = 1 + t + \frac{(t-\tau)^2}{2!} + \frac{(t-\tau)^3}{3!}
 + \frac{(t-\tau)^4}{4!} + \frac{(t-2\tau)^5}{5!}\,.
\end{align*}
 It will be shown in the next section that the regrouping of integrals that
occurs in the fourth equality above ($\stackrel{*}{=}$)
will occur in general.  After proving the speed of the extension boundary with
 respect to $k$ for a fully delayed vector field, a formal
discussion of this example's  extension boundary propagation will be given
in the next section.
\end{example}
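The piecewise formulas above can be checked numerically.  The following sketch
(illustrative only, not part of the paper's dPSM implementation) runs five plain
Picard iterations of the system in Example \ref{examp3} on a uniform grid, with
the assumed values $\tau = 1$, $T = 3$ and step $h = 10^{-3}$, and compares
$z_5$ against the displayed closed form $z_{52}$ at an interior point of
$[\tau_2,T]$; for this example the plain iteration with extended history agrees
with \eqref{MP}.

```python
import numpy as np

# Plain Picard iteration for Example 3 on a uniform grid:
#   x' = y,  y' = z,  z' = x(t - tau),  x = y = z = 1 on [-tau, 0].
# tau, T and h are illustrative choices; tau is an exact multiple of h.
tau, T, h = 1.0, 3.0, 1.0e-3
n = int(round(T / h))
t = np.linspace(0.0, T, n + 1)
shift = int(round(tau / h))

def delayed(u):
    """u evaluated at t - tau, using the constant history u = 1 for t <= 0."""
    v = np.ones_like(u)
    v[shift:] = u[:u.size - shift]
    return v

def cumtrap(g):
    """Cumulative trapezoid integral of g from 0 to each grid point."""
    out = np.zeros_like(g)
    out[1:] = np.cumsum(0.5 * h * (g[1:] + g[:-1]))
    return out

x = y = z = np.ones(n + 1)                    # u_0: extended initial data
for _ in range(5):                            # five iterations -> x_5, y_5, z_5
    x, y, z = 1 + cumtrap(y), 1 + cumtrap(z), 1 + cumtrap(delayed(x))

# Closed form of z_52 on [2*tau, T] from the example, evaluated at t = 2.5:
s = 2.5
exact = (1 + s + (s - tau)**2/2 + (s - tau)**3/6
           + (s - tau)**4/24 + (s - 2*tau)**5/120)
print(abs(z[int(round(s / h))] - exact) < 1e-4)   # True
```

The only error is the $O(h^2)$ trapezoid error accumulated over the five
iterations, which is well below the stated tolerance.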

\section{dPSM Results - I} \label{sec:res}
 The speed of the extension boundary with respect to iterations $k$ will be
proved for a general state independent delay and for a fully delayed vector field.
Note that the following result is independent of the initial data, constant
or non-constant.

\begin{theorem}  \label{extbndy}
Assuming $\mathbf{u}_k$ does not solve \eqref{DDEs} and $\mathbf{f}$ is a fully
delayed vector field, then the extension boundary for $\mathbf{u}_k$ is
$\tau^k_* = \tau_{k-1}$, $k \geq 1$, i.e. $\mathbf{u}_{k,k-1}(t)$ has the same
functional form as $\mathbf{u}_{k,m}(t)$ for every $m > k-1$, but not the same
functional form as $\mathbf{u}_{k,k-2}(t)$.
\end{theorem}

\begin{proof} Since the data begins at $\tau_{-1}$, there is a change in
functional form from non-existent to $\Psi$ as time
crosses over $\tau_{-1}$ for $\mathbf{u}_0$, and this is the largest element
of $\mathcal{T}$ for which this is true.  If $\boldsymbol{\Psi}$ solves
\eqref{DDEs}, then iterating further is not necessary, hence the assumption
that $\mathbf{u}_k$ doesn't solve \eqref{DDEs}.   Otherwise,
$\tau^1_* = \tau_0$ for $\mathbf{u}_1$ since
\[
\mathbf{u}_{10}(t) = \boldsymbol{\Psi}(\tau_0) + \int^t_{\tau_0}
\mathbf{f}(\boldsymbol{\Psi}(s),\boldsymbol{\Psi}(\Delta(s))) \,\mathrm{d}{s}
\neq \boldsymbol{\Psi}(t)
\]
and there is a change in functional form from $\Psi$ to $\mathbf{u}_{10}$
across $t = \tau_0$.

By extending the given data in $[\tau_{-1},\tau_0]$ to each $\tau$-step
$[\tau_m,\tau_{m+1}]$ with $m \in \mathbb{Z}^+_0$,  the
integral in \eqref{PIs} for $\mathbf{u}_1$ can be determined in each
subsequent $[\tau_{m+1},\tau_{m+2}]$ as well via \eqref{MP}
\begin{equation} \label{FI}
\mathbf{u}_{1\underline{m}}(t)
= \mathbf{u}_{1,\underline{m}-1}(\tau_{\underline{m}})
+ \int^t_{\tau_{\underline{m}}} \mathbf{f}
\big(\mathbf{u}_{0,\underline{m}}(s),\mathbf{u}_{0,\underline{m}-1}(\Delta(s) )
\big) \,\mathrm{d}{s}
\end{equation}

Since $\mathbf{u}_{0,\underline{m}} = \boldsymbol{\Psi}(t)$ for each
$\underline{m}\geq0$, then \eqref{FI} has the same integrand for
$\underline{m} \geq 0$, hence $\mathbf{u}_{1,\underline{m}}$ has the same
functional form as $\mathbf{u}_{10}$ for any $\underline{m} \geq 0$.  Technically,
$\mathbf{u}_{1,\underline{m}}$ and $\mathbf{u}_{10}$ cannot be equated when
$\underline{m} \geq 1$, even though they have the same
functional form, because of the differing domains:
$[\tau_{\underline{m}},\tau_{\underline{m}+1}]$ versus $[\tau_0,\tau_1]$.

For the induction step, suppose $\tau^k_* = \tau_{k-1}$ for $\mathbf{u}_k$,
and consider \eqref{MP} with $\underline{m} =k-1$
\begin{equation}
\mathbf{u}_{k+1,k-1}(t)
= \mathbf{u}_{k+1,k-2}(\tau_{k-1}) + \int^t_{\tau_{k-1}}
\mathbf{f}\left(\mathbf{u}_{k,k-1}(s),\mathbf{u}_{k,k-2}
(\Delta(s))\right) \,\mathrm{d}{s}\,. \label{FIh}
\end{equation}
In contrast, $\mathbf{u}_{k+1,k}$ does not have the same integrand
as $\mathbf{u}_{k+1, k-1}$,
\begin{equation}
\mathbf{u}_{k+1,k}(t)
= \mathbf{u}_{k+1,k-1}(\tau_k) + \int^t_{\tau_k} \mathbf{f}
\left(\mathbf{u}_{k,k}(s),\mathbf{u}_{k, k-1}
(\Delta(s))\right) \,\mathrm{d}{s} \label{SI}
\end{equation}
since the extension boundary is at $\tau_{k-1}$ for $\mathbf{u}_k$, implying
that $\mathbf{u}_{k,k-1}$ and $\mathbf{u}_{k,k}$ have the
same functional form, but that $\mathbf{u}_{k,k-1}$ and $\mathbf{u}_{k,k-2}$ do not.
  Since the partials of $\mathbf{f}$ with respect to the
delay terms are not all zero by hypothesis, and $\mathbf{u}_k$ does not solve
 \eqref{DDEs}, then \eqref{FIh} has a different integrand, and
hence a different functional form, than \eqref{SI}.
The integral for $\mathbf{u}_{k+1,k+1}$ looks like
\[
\mathbf{u}_{k+1,k+1}(t) =  \mathbf{u}_{k+1,k}(\tau_{k+1})
+ \int^{t}_{\tau_{k+1}} \mathbf{f} \left(\mathbf{u}_{k,k+1}(s),\mathbf{u}_{k,k}
(\Delta(s))\right) \,\mathrm{d}{s}
\]
and since the extension boundary for $\mathbf{u}_k$ is at $\tau_{k-1}$,
this is the same integrand as in \eqref{SI}.
Hence there is a change in functional form across $t=\tau_k$ but not across
$t=\tau_{k+1}$ and so the extension boundary has
moved to $\tau_k$ for $\mathbf{u}_{k+1}$.
\end{proof}
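Theorem \ref{extbndy} can be illustrated numerically.  The sketch below (not
from the paper; the scalar test equation $y' = y(t-\tau)$ with constant history
$1$, $\tau = 1$ and $T = 4$ are illustrative choices) computes the Picard
iterate $u_3$ on a grid and checks that a new functional form appears across
$\tau_2 = 2\tau$ while no further change occurs across $\tau_3 = 3\tau$, i.e.
$\tau^3_* = \tau_2$.

```python
import numpy as np

# Picard iterates for the fully delayed scalar test problem
#   y' = y(t - tau),  y = 1 on [-tau, 0]    (illustrative choices below).
tau, T, h = 1.0, 4.0, 1.0e-3
n = int(round(T / h))
t = np.linspace(0.0, T, n + 1)
shift = int(round(tau / h))

def delayed(u):
    """u evaluated at t - tau, with constant history 1 for t <= 0."""
    v = np.ones_like(u)
    v[shift:] = u[:u.size - shift]
    return v

def cumtrap(g):
    """Cumulative trapezoid integral of g from 0 to each grid point."""
    out = np.zeros_like(g)
    out[1:] = np.cumsum(0.5 * h * (g[1:] + g[:-1]))
    return out

u = np.ones(n + 1)
for _ in range(3):                        # compute the third iterate u_3
    u = 1 + cumtrap(delayed(u))

pos = lambda w: np.maximum(w, 0.0)        # (t - m*tau), switched on at tau_m
form2 = 1 + t + pos(t - tau)**2 / 2                 # form on [tau_1, tau_2]
form3 = form2 + pos(t - 2 * tau)**3 / 6             # form on [tau_2, T]

# u_3 picks up a new term across tau_2 = 2*tau, and form3 then persists on
# all of [2*tau, T]: no further change of form across tau_3 = 3*tau.
print(np.max(np.abs(u - form3)) < 1e-4)             # True
print(np.max(np.abs(u - form2)[t > 2.5]) > 1e-3)    # True: form changed at 2*tau
```

That `form3` matches $u_3$ on all of $[0,T]$, including $[3\tau,T]$, is exactly
the statement that the extension boundary for $u_3$ sits at $\tau_2$.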

\noindent\textbf{Example \ref{examp3} (cont.)}
 Because of the lack of a delay term in each component's right hand side,
it can be seen easily that the information has not reached $[4\tau,5\tau]$
as would have been predicted by Theorem \ref{extbndy}.
Suppress iteration dependence and label the
extension boundary for component $x_k$ by $\tau^x_*$ and similarly for components
$y_k$ and $z_k$.  The movement of the extension
boundary can be described heuristically as follows:  assuming $\Psi$ does
not solve the DDE, upon computing the first iteration,
$\tau^x_* = \tau^y_* = \tau^z_* = \tau_0$ and all three move as predicted
by Theorem \ref{extbndy}.

However, computing the second iteration, both $x$ and $y$ are governed by ODEs
rather than DDEs, and so the information in $[0,\tau]$
is updated, but lack of a delay term in the vector field does not create new
delay terms in $[\tau,2\tau]$, hence $\tau^x_* = \tau^y_* =\tau_0$.
In contrast, $z$ does have a delay term in its vector field and so, the
second iteration does produce new delay terms
in $[\tau,2\tau]$ and so $\tau^z_* = \tau_1$ as Theorem \ref{extbndy}
would indicate for a fully delayed vector field.

Now, these new delay terms in $z$ during the computation of the third
iteration produce new delay terms for $y$ in $[\tau,2\tau]$ since
the vector field for $y$ is $z$ dependent, and so $\tau^y_*=\tau_1$.
However, $x$, whose vector field is $y$ dependent, will have to
wait one more iteration to see a change due to the new delay terms created
in $z$ during the second iteration, and only its info in
$[0,\tau]$ updates.  Because $x$ has not changed, one finds that $z$ will not
see new delay terms and $\tau^z_*=\tau_1$.

During the fourth iteration, the new delay terms in $z$ from the second
iteration finally reach $x$ and $\tau^x_* = \tau_1$,
while both $y$ and $z$ only update their previous information and
$\tau^y_* = \tau^z_* = \tau_1$.  Hence, after the initial
movement in the first iteration, it requires three iterations to have all
components' extension boundaries move again. \hfill\qed
\smallskip

 The speed of the extension boundary with respect to $k$ is independent
of the form of $\Delta(t)$.  The choice of $\Delta(t)$
of the form given in \eqref{LDS} is only necessary to make sorting terms arising
from the integration manageable in a general setting.
Given a specific polynomial form for $\Delta(t)$, one may be able to extend
the proofs here to that specific case, upon knowing which
terms to track.  Conversely, with $\Delta(t)$ given by \eqref{LDS},
this will show that the only special treatment time dependent delay
structures require is how to sort their terms.

The following lemma's result is not surprising, but it helps track arbitrary
nonlinear interaction terms in the vector field.  Beyond
that, the ramifications of the result are important from a computational aspect.
Integration in the following form, which occurs eventually
in all nonlinear problems, implies that a specific power of a specific delay
structure $\Delta_m(t)$ contributes a range of powers to the result.


\begin{lemma}  \label{FL}
Given $\alpha$, $\beta \in \mathbb{Z}^+$ and $\Delta(t)$ as in \eqref{LDS} and
$\Delta_m(t)$ in \eqref{DC}, let
\begin{equation} \label{NLI}
I(t) = \int^{t}_{\tau_m} \Delta^{\alpha}_m(s) \Delta^{\beta}_{\underline{m}}(s)
\,\mathrm{d}{s}
\end{equation}
 with $m > \underline{m}$ so that $\tau_m > \tau_{\underline{m}}$.
Then $I(t)$ is a polynomial with argument $\Delta_m(t)$ which can be written
in the form
\begin{equation} \label{NLP}
\sum^{\alpha+\beta+1}_{i=\alpha+1} c_i \Delta^i_m(t)
\end{equation}
with $c_i$ constant with respect to $t$.
\end{lemma}

\begin{proof}
 Let $\tau \equiv -\tau_{-1}$; a straightforward computation shows
that $\tau_m = \tau \sum^m_{q=1} \sigma^{-q}$ and
$\Delta_m(t) = \sigma^m(t - \tau_m)$. Rather than Taylor expand the integrand,
consider the substitution given by
$S = \Delta_m(s)$, $\,\mathrm{d}{S} = \sigma^m \,\mathrm{d}{s}$ in \eqref{NLI}.
Rewriting
\[
\tau_{m} - \tau_{\underline{m}} = \tau \sum^m_{q=\underline{m}+1} \sigma^{-q}
\equiv \gamma_{m,\underline{m}} \tau
\]
and using the binomial expansion on
$(\sigma^{\underline{m}-m} S + \sigma^{\underline{m}}
\gamma_{m,\underline{m}} \tau)^{\beta}$, we have
\[
I(t) = \sigma^{\beta(\underline{m}-m)-m}\sum^{\beta}_{i=0}
\binom{\beta}{i} \frac{(\sigma^m\gamma_{m,\underline{m}} \tau)^{\beta - i}}
{i+\alpha+1} (\sigma^m(t-\tau_m))^{i+\alpha+1}
\]
and note that the powers of $\tau$ are independent of $\alpha$.
Upon shifting the index, letting
\[
c_i = \sigma^{\beta(\underline{m}-m)-m}
\binom{\beta}{i-\alpha-1} \frac{(\sigma^m\gamma_{m,\underline{m}}
\tau)^{\alpha+\beta+1 - i}}{i}, \quad \alpha+1 \leq i \leq \alpha+\beta+1,
 \]
and exchanging $\sigma^m (t-\tau_m)$ with $\Delta_m(t)$, one recovers
\eqref{NLP}.
\end{proof}

  For the constant lag case, $\Delta_m(t) = t - m\tau$ and
$\gamma_{m,\underline{m}} = m - \underline{m}$.  Nonlinear interaction in the
vector field will produce integrals of the form \eqref{NLI}, and so the
coefficients in the power series for the approximation will be lag dependent.
These lag dependent coefficients will also occur when the initial data in
\eqref{DDEs} is not constant.
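Lemma \ref{FL} in the constant lag case can be verified directly.  The sketch
below (illustrative; the values $\alpha = 2$, $\beta = 3$, $m = 2$,
$\underline{m} = 0$, $\tau = 1/2$ are arbitrary) compares a Gauss-Legendre
evaluation of \eqref{NLI} with the closed form \eqref{NLP}, using
$c_i = \binom{\beta}{i-\alpha-1}(\gamma_{m,\underline{m}}\tau)^{\alpha+\beta+1-i}/i$
for $\sigma = 1$.

```python
import numpy as np
from math import comb

# Check of Lemma FL in the constant lag case (sigma = 1), where
# Delta_m(t) = t - m*tau and gamma_{m,mbar} = m - mbar.  The parameter
# values alpha, beta, m, mbar, tau, t_end are arbitrary illustrative choices.
alpha, beta, m, mbar, tau, t_end = 2, 3, 2, 0, 0.5, 1.7
gamma = m - mbar

# Left side (NLI): 10-node Gauss-Legendre quadrature on [m*tau, t_end],
# exact here since the integrand is a polynomial of degree alpha + beta = 5.
nodes, weights = np.polynomial.legendre.leggauss(10)
a, b = m * tau, t_end
s = 0.5 * (b - a) * nodes + 0.5 * (b + a)
I = 0.5 * (b - a) * np.sum(
    weights * (s - m * tau)**alpha * (s - mbar * tau)**beta)

# Right side (NLP): sum over i = alpha+1, ..., alpha+beta+1 with
#   c_i = binom(beta, i-alpha-1) * (gamma*tau)^(alpha+beta+1-i) / i.
P = sum(comb(beta, i - alpha - 1) * (gamma * tau)**(alpha + beta + 1 - i) / i
        * (t_end - m * tau)**i
        for i in range(alpha + 1, alpha + beta + 2))

print(abs(I - P) < 1e-10)   # True
```

Note how a single power $\Delta_m^{\alpha}$ in the integrand produces the
full range of powers $\alpha+1$ through $\alpha+\beta+1$ in the result.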

The next theorem will show that there is a particular structure to the Picard
iterations with respect to the $\tau$-steps during a fixed iteration.
This is simpler to demonstrate than the structure with respect to iterations $k$.


\begin{theorem} \label{FT}
If $\mathbf{u}_{0,\underline{m}}(t) = \boldsymbol{\Psi}$, i.e. constant
initial data, for $\underline{m} \in \mathbb{Z}$ with $\underline{m} \geq -1$,
then define $\mathbf{u}_{k,\underline{m}}(t)$ by \eqref{MP} for $k \geq 1$ with
$\Delta(t)$ as in \eqref{LDS}.
If $t \in [\tau_{\underline{m}},\tau_{\underline{m}+1}]
\subset [\tau_0,T]$, $\mathbf{f}$ is fully delayed and defining
$\mathbf{a}^0_{k,-1} \equiv \Psi(\tau_0)$, then $\mathbf{u}_{k,\underline{m}}(t)$
has the form for $k \geq 0$
\begin{equation} \label{TP}
\mathbf{u}_{k,\underline{m}}(t) = \sum_{m=-1}^{\underline{m}_k} \mathbf{p}_{km}(t)
\end{equation}
where $\underline{m}_k \equiv \min\{\underline{m},k-1\}$ and
$\Delta_0(t) = t-\tau_0$ and $\Delta_{-1}(t)=t-\tau_{-1}$.  In addition,
$\mathbf{p}_{km}$ is a polynomial of the form
\begin{equation} \label{TP2}
\mathbf{p}_{km}(t) = \sum^{d}_{i = m+1} \mathbf{a}^i_{km} \Delta^i_m(t)
 \end{equation}
with $d \in \mathbb{Z}^+$, $d \geq k$ if $m \geq 0$, $d=0$ if $m = -1$, and the
domain of $\mathbf{p}_{km}$ is $[\tau_m,T]$.
\end{theorem}

An important point is that the only $\underline{m}$ dependence
in $\mathbf{u}_{k,\underline{m}}$ occurs from $\underline{m}_k$.

\begin{proof}[Proof of Theorem \ref{FT}]
Clearly, starting with polynomial (constant) initial data and enacting
\eqref{MP} implies $\mathbf{u}_k$ is polynomial for
any polynomial vector field and any $k\geq 0$.  Hence the focus is on the
specific forms of the polynomials in \eqref{TP} given by \eqref{TP2}.

If the initial data $\boldsymbol{\Psi}$ is constant,
then \eqref{FI} yields for all $\underline{m} \geq 0$
\[
\mathbf{u}_{1\underline{m}}(t) = \boldsymbol{\Psi}(\tau_0) + \mathbf{z}(t-\tau_0)
=  \boldsymbol{\Psi}(\tau_0) + \mathbf{p}_{10}(t)
\]
with $\mathbf{a}^{1}_{10} = \mathbf{z}$ for every
$[\tau_{\underline{m}},\tau_{\underline{m}+1}]$, where
$\mathbf{z} = \mathbf{f}(\boldsymbol{\Psi},\boldsymbol{\Psi})$
is constant since $\boldsymbol{\Psi}$ is constant.  It follows that both \eqref{TP}
and \eqref{TP2} hold with $d=1$ when $\underline{m} \geq 0$ and $d=0$ when
$\underline{m} = -1$ for each $\tau$-step of $\mathbf{u}_1(t)$.

Some convenient notation will now be introduced.  Since $\mathbf{f}$ is
polynomial, its full Taylor series is a finite sum with respect to any center.
Denote the Taylor series for $\mathbf{f}(a+c,b+d)$ without its constant term
by $\mathbf{f}^*(a,b;c,d)$, which represents a power series
of the form
\[
\mathbf{f}^*(a,b;c,d) \equiv \sum_{\substack{0 \leq i,j \leq n \\ (i,j) \neq (0,0)}}
C_{ij}(a,b)\, c^i d^j
\]
for some set of constants $C_{ij}$ which depend on $a$ and $b$.
In addition, we denote
\[
\mathbf{f}_{k,\underline{m}}(s) \equiv \mathbf{f}
\left( \mathbf{u}_{k,\underline{m}}(s),\mathbf{u}_{k,\underline{m}-1}(\Delta(s))
\right)
\]
as well as
\[
\mathbf{f}^*_{k,\underline{m}-1}(s) \equiv \mathbf{f}^*
 \left(\mathbf{u}_{k,\underline{m}-1}(s), \mathbf{u}_{k,\underline{m}-2}
(\Delta(s));\mathbf{p}_{k,\underline{m}}(s),
\mathbf{p}_{k,\underline{m}-1}(\Delta(s))\right).
\]
Using \eqref{TP} and \eqref{TP2} as hypothesis for $\mathbf{u}_k$, consider
$\mathbf{u}_{k+1}$ during any $\tau$-step $[\tau_{\underline{m}},
\tau_{\underline{m}+1}]$ with $\underline{m} \leq k-1$, so that
\eqref{MP} implies
\begin{equation*}
\mathbf{u}_{k+1,\underline{m}}(t)
= {\boldsymbol{\Psi}(\tau_0) + \sum^{\underline{m}-1}_{m=0}
\int^{\tau_{m+1}}_{\tau_m}
\mathbf{f}_{km}(s) \,\mathrm{d}{s}
+  \int^{t}_{\tau_{\underline{m}}} \mathbf{f}_{k,\underline{m}}(s) \,\mathrm{d}{s}}
\end{equation*}
and after a Taylor expansion around the point
$\big(\mathbf{u}_{k,\underline{m}-1}(s), \mathbf{u}_{k,\underline{m}-2}(\Delta(s))
\big)$, we have
\begin{align*}
\mathbf{u}_{k+1,\underline{m}}(t)
&= {\boldsymbol{\Psi}(\tau_0)
 + \sum^{\underline{m}-1}_{m=0} \int^{\tau_{m+1}}_{\tau_m}
\mathbf{f}_{km}(s) \,\mathrm{d}{s}
+ \int^{t}_{\tau_{\underline{m}}} \mathbf{f}_{k,\underline{m}-1}(s)
+ \mathbf{f}^*_{k,\underline{m}-1}(s) \,\mathrm{d}{s}} \\
&\equiv I_{\sigma}(t) + I_{\underline{m}}(t)
\end{align*}
Note that the first term in the integrand of $I_{\underline{m}}(t)$
has precisely the same functional form as the $m=\underline{m}-1$
term in the integrand of $I_{\sigma}(t)$, so these integrals may be combined
to run over $[\tau_{\underline{m}-1},t]$ with $t >\tau_{\underline{m}}$.
This demonstrates that the solution inherits the previous $\tau$-step's functional
form in the next $\tau$-step,
$[\tau_{\underline{m}},\tau_{\underline{m}+1}]$, which includes delay
structures from $\Delta_0(t)$ up to $\Delta_{\underline{m}-1}(t)$.

Applying Lemma \ref{FL} to the integration of ${\bf f}^*_{k,\underline{m}-1}$,
which has arguments $\Delta_{\underline{m}}(t)$ and
$\Delta_{\underline{m}-1}(\Delta(t)) = \Delta_{\underline{m}}(t)$ respectively,
implies that \eqref{TP} holds by induction.  When $m=k$, this also
introduces a new delay structure given by $\Delta_k(t)$.  In particular,
this new delay structure's minimum power is the minimum
power on $\Delta_{k-1}(t)$ in $\mathbf{p}_{k,k-1}$ plus one.
Then \eqref{TP2} holds for the new delay terms $m=k$, so that \eqref{TP2}
holds for all $k$ by induction.
\end{proof}

 If the initial data is not constant, then it is possible that the constant
term associated with the vector field terms may change
across $\tau$-steps.  Hence, because there is no initial invariance across
$\tau$-steps in the constant term, there can be no
invariance with respect to $\tau$ in general for any coefficients.
This does not preclude using dPSM, it just makes it `trickier'.

\section{Iteration structure - formal} \label{sec:FIS}
\subsection{Basic array setup}

For DDEs, the PSM invariance result with respect to iterations is true if the
vector field is linear, however, it requires a modification
for the nonlinear case, because of the interaction between different delay structures.
In particular, lower order terms will have their
coefficients changed during each iteration.  But upon considering these
coefficients as functions of $\tau$, it turns out that the coefficients
on $\tau^s$ for fixed $s \leq k$ are invariant as $k$ increases.
 In addition, tracking powers of $\Delta_m(t)$, one can ascertain whether
all terms of a given power are computed or not by the integration.

To aid in the proof for the deviating argument case, it is convenient to note
that a polynomial vector field associated with \eqref{DDEs} can
be reduced to a quadratic vector field.

\begin{lemma} \label{lem6.1}
If \eqref{DDE} is equivalent to \eqref{DDEs} with $\mathbf{f}$ polynomial
and solution $\mathbf{u}$, then there exists a polynomial vector field
$\underline{\mathbf{f}}$ of degree two, a component of whose associated
system solves \eqref{DDE}.
\end{lemma}

 \begin{proof}
See \cite{CPSW}, noting that deviating arguments may be relabeled in a similar
fashion as non-deviating arguments and a change of variable for the deviating
arguments may not contribute to the size of the vector field since an evolution
line may not be required.
\end{proof}

 This result is not necessary, but invoking it streamlines the upcoming
proof of the structure between iterations
by only having to consider sums of single multiplications in the vector field.
Each multiplication can be represented by an array
with monomials from the current approximations representing a row or column.
The entries of the array would then represent
the integration of the row and column monomial multiplication.

\begin{example} \label{examp4} \rm
Let $y' = (y(t-\tau))^r$ for constant $\tau$ with $t_0 =0$.  The vector field
is not polynomial unless $r \in \mathbb{Z}^+$, so assume otherwise.
To recast the vector field for dPSM, let $u = (y(t-\tau))^r$ which implies
that $u' = r (y(t-\tau))^{r-1} y'(t-\tau)
= r u (y(t-\tau))^{-1} y'(t-\tau)$.  Note that the derivative term can be
computed during Picard's iteration from the current approximation
for $y$ (after delaying it and taking a derivative), hence it does not require
its own evolution line.  This can be rewritten in terms of letters
already in use by noting that
$(y(t-\tau))' = y'(t-\tau) = (y(t-2\tau))^r = u(t-\tau) \equiv u_*$
upon using the DDE.  The $u'$ line can
now be updated to $u' = r u u_* y_*^{-1}$, where the asterisk will denote
in general delaying the current argument by $\tau$.

The $y_*^{-1}$ term needs to be in polynomial form still, so to this end,
let $v = y_*^{-1}$, so that $v' = -y_*^{-2} y_*' = -v^2 y_*'= -v^2 u_*$
and the $u'$ line becomes polynomial as well.  With these change of variables
in place, the original DDE can be written as a two component system
\begin{equation} \label{TCS}
\begin{gathered}
u' = r u v u_* \\
v' = -v^2 u_*
\end{gathered}
\end{equation}
and then integrating $u$ will recover the original variable $y$.
However, this system is cubic.  A quadratic one can be made
to appear, at the price of moving to a three component system, with the
addition of $w = vu_*$, which implies $w' = -w^2 + vu_*'$.
This updates \eqref{TCS} to the following system
\begin{equation} \label{TCS2}
\begin{gathered}
u' =r u w \\
v' = -v w \\
w' = -w^2 + v u_*'
\end{gathered}
\end{equation}
which is quadratic.  Note that if one were to compute by hand, \eqref{TCS}
is preferable to \eqref{TCS2}, but for the proofs,
\eqref{TCS2} is preferred.
\end{example}
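The change of variables in Example \ref{examp4} can be sanity-checked
numerically.  Since the example leaves the initial data unspecified, the sketch
below assumes the constant history $y \equiv 1$ on $[-\tau,0]$ and $r = 1/2$
for illustration; it integrates the original DDE $y' = (y(t-\tau))^r$ by the
method of steps, builds $u$, $v$, $w$ from the computed $y$, and checks that
the residuals of \eqref{TCS2} vanish up to discretization error at a test point.

```python
import numpy as np

# Residual check of the quadratic system (TCS2) against the scalar DDE
#   y' = (y(t - tau))^r.  The history y = 1 on [-tau, 0] and r = 1/2 are
# assumed for illustration; tau is an exact multiple of the step h.
r, tau, T, h = 0.5, 1.0, 2.5, 1.0e-3
n = int(round(T / h))
shift = int(round(tau / h))
t = np.linspace(0.0, T, n + 1)
y = np.ones(n + 1)

def ydel(i):
    """y(t_i - tau), using the constant history for t <= 0."""
    return y[i - shift] if i >= shift else 1.0

# Method of steps: the delayed value is already known (shift >= 1), so
# trapezoid stepping of y' = (y(t - tau))^r is explicit.
for i in range(n):
    y[i + 1] = y[i] + 0.5 * h * (ydel(i)**r + ydel(i + 1)**r)

yd = np.array([ydel(i) for i in range(n + 1)])     # y_* = y(t - tau)
u = yd**r                                          # u = y_*^r (= y')
v = yd**(-1.0)                                     # v = y_*^{-1}
ud = np.r_[np.ones(shift), u[:n + 1 - shift]]      # u_* = u(t - tau)
w = v * ud                                         # w = v u_*

def ddt(g, i):
    """Central difference approximation of g'(t_i)."""
    return (g[i + 1] - g[i - 1]) / (2 * h)

i = int(round(2.3 / h))       # test point away from the kinks at tau, 2*tau
upd = ddt(u, i - shift)       # u_*'(t) = u'(t - tau)
print(abs(ddt(u, i) - r * u[i] * w[i]) < 1e-3)           # u' = r u w
print(abs(ddt(v, i) + v[i] * w[i]) < 1e-3)               # v' = -v w
print(abs(ddt(w, i) + w[i]**2 - v[i] * upd) < 1e-3)      # w' = -w^2 + v u_*'
```

The residuals are small away from the breakpoints at multiples of $\tau$, where
the derivative discontinuities inherent to DDEs make finite differences
inaccurate.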

 Using Theorem \ref{FT}, let
$\mathbf{u}_{k,\underline{m}}(t) = \sum^{\underline{m}}_{m=0}\mathbf{p}_{km}(t)$.
It is convenient to arrange
each component of $\mathbf{u}_k$ as an ordered list rather than an array across
$i$ and $m$.  To this end, given $u^l_k$, suppress
the component dependence, and consider
$\mathbf{U}^l_k = [P_{k0}(t), \ldots, P_{k,k-1}(t)]$ where
$P_{k,\underline{m}}(t) = [\Delta_{\underline{m}}(t),
\ldots,\Delta^d_{\underline{m}}(t)]$,
where coefficient information will be suppressed, to focus on tracking
powers of $\Delta_{m}(t)$.

Because there will be two components to track in the multiplications of a
quadratic vector field, the indices $m$ and $\underline{m}$
will now be used as independent indices tracking the delay structures.
Let $u^{l_1}_k$ generate $\mathbf{U}^{l_1}_k = (P_{km})_i$
and $u^{l_2}_k$ generate $\mathbf{U}^{l_2}_k = (Q_{k,\underline{m}})_j$,
an array may be used to represent the quadratic terms, which
can be thought of in block form
$\mathbf{U}^{l_1}_k\mathbf{U}^{l_2}_k = (P_{km})_i (Q_{k,\underline{m}})_j$.
In each entry of the array, tabulate
the integration of the corresponding terms $(P_{km})_i(Q_{k,\underline{m}})_j$
by listing the result in the form $\Delta^i_m(t)$.  The entries
of these arrays would then be scalar multiplied and summed over components to
achieve the integration of the entire vector field.

The result of Theorem \ref{FT} allows the block array when $m = k$ to be used
for integration, and this result will contain the information for the cases
when $m < k$ by simply ignoring the $\Delta_{\underline{m}}(t)$ terms
for which $\underline{m} > m$.  Assume the constant lag case for the delay,
and as an example, let component $u^{l_1}_k
= a_1 + b_1t + c_1t^2 + d_1(t-\tau)^2$ and
$u^{l_2}_k = a_2 + b_2t + c_2t^2 + d_2(t-\tau)^2$.  The array for a
nonlinear interaction term in a vector field, upon suppressing coefficients,
is given in Figure \ref{fig:reg}, which would
be used to compute the next approximation $\mathbf{u}_{k+1}$.
Ordering by exponent in each block shows
that equal powers of $t$ appear on the off diagonals in each of the main
diagonal $P_{km} Q_{k,\underline{m}}$ blocks.

\begin{figure}[ht]
\begin{center}
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{c|c|c|c|c|}
\multicolumn{1}{c}{} 
& \multicolumn{1}{c}{$1$}
&\multicolumn{1}{c}{$t$} 
&\multicolumn{1}{c}{$t^2$}
&\multicolumn{1}{c}{$(t-\tau)^2$} \\
 \cline{2-5}
 $1$ & $t$ & $\frac{1}{2}t^2$ & $\frac{1}{3}t^3$ & $\frac{1}{3}(t-\tau)^3$ \\
 \cline{2-5}
 $t$ & $\frac{1}{2}t^2$ & $\frac{1}{3}t^3$ & $\underline{t}^4$ & wing \\
 \cline{2-5}
 $t^2$ & $\frac{1}{3}t^3$ & $\underline{t}^4$ & $\underline{t}^5$ & wing \\
 \cline{2-5}
 $(t-\tau)^2$ & $\frac{1}{3}(t-\tau)^3$ & wing & wing & $\underline{(t-\tau)}^5$ \\
 \cline{2-5} 
\end{tabular}
 \\[6pt]
$$ \begin{bmatrix}
 P_{k0}Q_{k0}\;(3 \times 3) &  & P_{k0}Q_{k1}\;(3 \times 1) \\
P_{k1}Q_{k0} \;(1 \times 3) &  & P_{k1}Q_{k1}\;(1 \times 1) 
 \end{bmatrix} 
$$
\end{center}
\caption{Integration of vector field array - DDE}
\label{fig:reg} \end{figure}

The underlined terms in the array are of higher power than the next power of $t$,
which happens to be $t^3$ in the first main
diagonal block, and will not be computed.  This is due to $\Psi(t)$ being known
so that the $P_{k0} Q_{k0}$ block represents
a PSM problem, hence the PSM result applies, i.e. integration which produces
$t^k$ terms will be retained for $\mathbf{u}_k$'s
integration to obtain $\mathbf{u}_{k+1}$.

These arrays expand in two ways each iteration.  In PSM, the arrays expand
by adding a row and column, which represents the
increase in degree of the approximation by one.  In dPSM, each block in
the array adds a row and column for the same reason,
while overall the array also adds a new block row and block column,
which corresponds to the newest delay term $\Delta_k(t)$.

\subsection{Wing terms}

Associated with each main diagonal block $P_{km}Q_{km}$ with $m\neq 0$ fixed,
are off diagonal blocks, $P_{km} Q_{k,\underline{m}}$ and
$P_{k,\underline{m}} Q_{km}$ with $0 \leq \underline{m} < m$.
Ignoring the constant term multiplications, the remaining terms, referred to as
\emph{wing} terms, are a complication since Lemma \ref{FL} implies that an
integration will produce terms with the same delay as the main
diagonal block $m$, hence their contribution has to factor into when terms
should be computed.

In particular, because of the range of powers produced by Lemma \ref{FL},
wing terms allow the coefficients on lower powers of
$\Delta_m(t)$ to change after each iteration, which is contrary
to the PSM result.  However, this coefficient change is predictable and it
occurs through the development of a lag
dependence in the coefficients, which will be polynomial with argument
$\gamma_{m,\underline{m}} \tau = \tau \sum^{m}_{r = \underline{m}+1}
\sigma^{-r}$, for the delay structure given by \eqref{LDS} with $\tau_{-1}$
relabeled as $\tau$.  Looking at just the coefficients, then
a version of the PSM result holds, where the only change in coefficients in
the next iteration is the inclusion of the next power of $\tau$.

To see this, suppose the coefficients are $\tau$ dependent.
The result of Lemma \ref{FL} on the wing terms shows specifically
that they are a linear combination of powers of $\Delta_m(t)$ with powers
from $\alpha+1$ to $\alpha+\beta+1$ and coefficients with
powers of $\tau$ from $\tau^\beta$ to $\tau^0$, respectively.
As an example, consider an arbitrary $P_{k1} Q_{k0}$ block,
and assume $(a_1(t-\tau)+b_1(t-\tau)^2+c_1(t-\tau)^3)$ times $(a_2t+b_2t^2+c_2t^3)$.
 Suppressing coefficients and noting
$\gamma_{m,\underline{m}} = m-\underline{m}$ for the constant lag case,
the following array is constructed in Figure \ref{fig:wing},
using the notation $(\alpha,\beta) \equiv \tau^{\alpha} (t-\tau)^{\beta}$
and the underline for terms not to be computed in the next integration.

\begin{figure}[ht]
\begin{center} 
\renewcommand{\arraystretch}{1.6}
\begin{tabular}{c|c|c|c|}
\multicolumn{1}{c}{}
&\multicolumn{1}{c}{$(t-\tau)$}
&\multicolumn{1}{c}{$(t-\tau)^2$}
&\multicolumn{1}{c}{$(t-\tau)^3$} \\
 \cline{2-4}
$t^0$ & $(0,2)$ & $(0,3)$ & $(0,4)$  \\
 \cline{2-4}
$t$ & $(1,2) + (0,3)$ & $(1,3) + (0,4)$ & $(1,4) + \underline{(0,5)}$ \\
\cline{2-4}
 $t^2$ & $(2,2) + (1,3)  + (0,4)$ 
& $(2,3) + (1,4) + \underline{(0,5)}$ 
&  $\underline{(2,4)} + \underline{(1,5)}  + \underline{(0,6)}$ \\
 \cline{2-4}
\multirow{2}{4mm}{$t^3$} 
& $(3,2) + (2,3)$ & $\underline{(3,3)} + \underline{(2,4)}$
  & $\underline{(3,4)} + \underline{(2,5)}$ \\
& $+(1,4)+\underline{(0,5)}$  & $+\underline{(1,5)}  + \underline{(0,6)}$ 
 & $+\underline{(1,6)}  + \underline{(0,7)}$ \\
 \cline{2-4}
\end{tabular}
\end{center}
\caption{Integration of vector field array - wing terms and $\tau$ dependence}
\label{fig:wing}
\end{figure}

In this array, note that terms with the same power on both $t-\tau$ and
$\tau$ run along the off diagonals, but not over the
entire off diagonal.  In addition, for fixed $m$ and based on Lemma \ref{FL},
array entries are $\underline{m}$ invariant,
so the only changes in wing terms across different $\underline{m}$ blocks are the
starting and ending indices for the rows and columns and
the actual value of the coefficient on the power of $\tau$ under consideration.
Neither of these changes across $\underline{m}$ will play
a role in the invariance theorem.
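As a concrete instance of how the entries in Figure \ref{fig:wing} arise,
consider the row $t$, column $(t-\tau)^2$ entry.  Writing $s = (s-\tau)+\tau$
and integrating (with coefficients suppressed),
\[
\int^t_{\tau} s(s-\tau)^2 \,\mathrm{d}{s}
= \int^t_{\tau} \big( (s-\tau)^3 + \tau(s-\tau)^2 \big) \,\mathrm{d}{s}
= \frac{1}{4}(t-\tau)^4 + \frac{\tau}{3}(t-\tau)^3,
\]
which, in the $(\alpha,\beta)$ notation, is precisely the entry $(1,3)+(0,4)$.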

\subsection{More Examples}

Before the theorem for invariance with respect to iterations is established,
one more example is presented, which shows what can
occur under specific conditions on the vector field.

\begin{example} \label{examp5} \rm
Consider the DDE $y'(t) = y^2(t) + (y(t-\tau))^2$ with initial data $1$
over $[-\tau,0]$.  Using \eqref{MP} as in Example 3 when $z_{52}(t)$
was computed, one can show that
\[
 y_2(t) =  \begin{cases}
y_{20}(t) = 1+2t+2t^2 & t \in [0,\tau] \\
y_{21}(t) = 1+2t+2t^2 + 2(t-\tau)^2 &  t \in [\tau,T]
 \end{cases}
\]
 and using $y_{21}(t)$ and the results of all the theorems, one can quickly
 compute $y_3(t)$ by substituting $y_{21}(t)$ into the DDE and integrating
each term involving $\tau_m$ from $\tau_m$ up to $t$.  Using a double
underline to indicate the newly computed terms, i.e. the non-underlined
terms are $y_2(t)$, we have
\begin{equation}
\begin{aligned}
y_{32}(t) & =  1+2t+2t^2+\underline{\underline{\frac{8}{3}t}}{}^3 + 2(t-\tau)^2
 + \underline{\underline{\big(4+\frac{8}{3}\tau+\frac{8}{3}\tau^2\big) (t-\tau)}}{}^3 \\
& \quad + \underline{\underline{\big(\frac{4}{3} + \frac{8}{3}\tau
 +\frac{8}{3}\tau^2 \big)(t-2\tau)}}{}^3
\end{aligned}
\end{equation}
 where the coefficients that have developed a $\tau$ dependence have
retained terms up to $\tau^2$.

The theorem to follow uses the relation $s^*_k =k+m-i$ to determine the
largest power of $\tau$ to retain on the
term of the form $\Delta_m^{i+1}(t)$.  Take $k = 2$ and $m = 2$ (the minimum
power of $\Delta_m(t)$ that
produces the wing term happens to be $m$), and consider the $(t-\tau)^3$
term, so that $i+1=3$; then
$s^*_k = 2+2-2 =2$, which coincides with the power $\tau^2$ that was retained.
In this example, the highest power of
$\tau$ generated happened to be $2$, so there were no higher powers to retain.
 However, if there had been, this
would imply more terms of that power still need to be computed.

Further, if the coefficient on $(t-\tau)^3$ in $y_{42}(t)$, i.e. same $\tau$-step,
next iterate, were computed it would
be $\left( 4 + \frac{8}{3}\tau + \frac{8}{3}\tau^2 +  \tau^3\right)$
because only powers up to and including
$s^*_k = k+m-i = 3+2-2 = 3$ should be computed, and the coefficients on
$\tau^2$ and $\tau$ are invariant.

This will now be contrasted against the following example.
Let $y'(t) = (y(t-\tau))^2$  with the same initial data.
Again using \eqref{MP} as in Example 3, one can compute
\[
y_2(t) = \begin{cases}
y_{20}(t) = 1+t & t \in [0,\tau] \\
y_{21}(t) = 1+t+ (t-\tau)^2 + \underline{\underline{\frac{1}{3}(t-\tau)}}{}^3
 & t \in [\tau,T],
 \end{cases}
\]
 where the cubic term has been underlined because normally more cubic terms
 would still need to be computed, so this term should not be computed in this
iteration.  But for this vector field, those cubic terms will have zero
coefficients, and so this underlined term represents all cubic terms,
and will be retained during this iteration.  In addition,
\begin{equation}
\begin{aligned}
y_{32}(t)
& = 1 + t + (t-\tau)^2 + \frac{1}{3}(t-\tau)^3 \\
 &\quad + \big(\frac{2}{3}+\frac{2}{3}\tau\big)(t-2\tau)^3
+ \underline{\underline{\big(\frac{2}{3} +\frac{1}{6}\tau\big)(t-2\tau)}}{}^4 \\
&\quad + \underline{\underline{\frac{1}{3}(t-2\tau)}}{}^5
+ \underline{\underline{\frac{1}{9}(t-2\tau)}}{}^6
+ \underline{\underline{\frac{1}{63}(t-2\tau)}}{}^7 
\end{aligned}
 \end{equation}
 where the underlines are used for the same reason as before.
Note that, looking at the $(t-2\tau)^3$ term,
$k = 2$, $m=2$ and $i+1=3$, so $k+m-i = 2$, which does not match
the power $\tau^1$ that appears.  Here, constant initial
data and the lack of $t$ arguments in the vector field produce only $t-\tau$
terms, and none of higher power.
Hence the assumptions that went into forming $k+m-i$ above need to be revisited,
in particular, $k$ should
be replaced by $k-1$ here.
\end{example}
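To illustrate the truncated iteration concretely, here is a minimal numerical
sketch (in Python, with helper names of my own choosing) for the first vector
field above on the first $\tau$-step $[0,\tau]$, where the delayed argument
sees only the constant initial data, so the vector field reduces to
$y^2(t)+1$; powers above $t^k$ are discarded at iteration $k$, as in the text.

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials stored as coefficient lists (low degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    """Add two coefficient lists of possibly different lengths."""
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_int(p):
    """Antiderivative with zero constant term."""
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

def picard_step(y, k):
    """One truncated Picard iteration on [0, tau]: the delayed factor
    y(t - tau) equals the constant initial data 1 there, so the vector
    field reduces to y(t)**2 + 1.  Only powers up to t**k are retained."""
    f = poly_add(poly_mul(y, y), [Fraction(1)])
    y_new = poly_add([Fraction(1)], poly_int(f))
    return y_new[: k + 1]

y = [Fraction(1)]            # y_0 = constant initial data
for k in range(1, 4):
    y = picard_step(y, k)
# y now holds 1 + 2t + 2t^2 + (8/3) t^3
print(y)
```

At $k=2$ this reproduces $y_{20}(t) = 1+2t+2t^2$, and at $k=3$ the pure
$t$-power part $1+2t+2t^2+\frac{8}{3}t^3$ of $y_{32}(t)$.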

\section{dPSM results - II} \label{sec:RES}

 The next theorem is essentially the invariance of coefficients idea from
Theorem 4.1 but with respect to $k$ rather than with
respect to $m$ with consideration of a $\tau$ dependence.
The theorem holds if $\Psi$ is not constant, but in light of Theorem \ref{FT},
only in a fixed $\tau$-step.

The fastest propagation occurs when the vector fields have an additional condition
placed on them.  A \emph{mixing} vector field is fully delayed and has in each
component either a term of the form $u(t)v(\Delta(t))$ or terms of the
form $u(t)v(t) + w(\Delta(t))x(\Delta(t))$, where $u,v,w$ and $x$ are any
components, some or all of which may be the same.  Note that the first
vector field in Example 5 falls into this category, while the second one does not.

 \begin{theorem}  \label{IAI}
Using $\Delta(t)$ as in \eqref{LDS} and a mixing vector field $\mathbf{f}$, the solution
to \eqref{DDEs} has the form \eqref{TP}
and \eqref{TP2}, where the coefficients satisfy
\[
\mathbf{a}^i_{km} = \sum^{s_k}_{s=0} \underline{\mathbf{a}}^{i,s}_{km} \tau^s
\]
for some $s_k > 0$.  In addition, for fixed $k$ and fixed $i \leq k$ and every
 $-1 \leq m \leq \underline{m}_k$, defined in Theorem
\ref{FT}, we have
\begin{equation} \underline{\mathbf{a}}^{i,s}_{k+n,m}
 = \underline{\mathbf{a}}^{i,s}_{km} \label{TC}
\end{equation}
for each $s \leq s^* \equiv k+m-i$ and every $n>0$.
\end{theorem}

\begin{proof}
  Consider fusing the ordered lists $[\mathbf{u}_k(t),\mathbf{u}_k(\Delta(t))]$
so that the delay terms in the vector field
can be distinguished via the component index: $u^l_k(t)$ represents
the $(l-L-1)$th derivative of $\mathbf{u}_k$ evaluated at $\Delta(t)$
if $L < l \leq 2L$.  Given a solution to \eqref{DDEs}, considering the quadratic
vector field for the $(k+1)$th iteration and
suppressing time dependence, we have
\begin{equation*}
{Du^{l}_{k+1}} = {\sum^{2L}_{l_1=1}\sum^{2L}_{l_2=1} c^l_{l_1l_2} u^{l_1}_k
 u^{l_2}_k}
\end{equation*}
The indices $m$ and $\underline{m}$ will now denote independent indices for
the delay structure in each component.
There is then the need to relabel $\underline{m}$, the index for the $\tau$-step,
which is now denoted by $n$.  For $n<k$,
\eqref{MP} produces $\{u^{l}_{kn}\}^{2L}_{l=1}$, polynomials of the form
\eqref{TP} and \eqref{TP2}.
\begin{equation} \label{VFgen}
\begin{aligned}
u^l_{k+1,n}
& =u^l_{kn}(\tau_0) + \sum^{n-1}_{m=0} \int^{\tau_{m+1}}_{\tau_m}
\sum^{2L}_{l_1=1}
\sum^{2L}_{l_2=1} c^l_{l_1l_2} u^{l_1}_{km} u^{l_2}_{km} \,\mathrm{d}{s} \\
&\quad + \int^t_{\tau_n} c^l_{l_1l_2} u^{l_1}_{kn}u^{l_2}_{kn} \,\mathrm{d}{s}
\end{aligned} \end{equation}
 Decompose further with
\begin{equation} \label{full}
u^{l}_{kn} = \sum^{n_k}_{m=-1} p^l_{km} = \sum^{n_k}_{m=-1}
\Big( \sum^{k-1}_{i=i^*}
a^{il}_{km} \Delta^i_m + \sum^{d}_{i=k} a^{il}_{km} \Delta^i_m \Big)
\end{equation}
where $d$ is the largest power produced by integrating the vector field
and $i^* = n+1$ if $l \leq L$ and $i^*=n$ if $l>L$.
In particular, if $n=-1$, then $d=0$ and both sums collapse to constant terms.
 Integrands in \eqref{VFgen} are a linear combination
of function multiplications.  Substituting \eqref{full} into \eqref{VFgen}
shows these multiplications each have the following form:
\begin{equation} \label{VFA}
\begin{aligned}
&\sum^{n_k}_{m=-1} \sum^{\underline{n}_k}_{\underline{m}=-1}
\Big(\sum^{k-1}_{i=i^*} a^{il_1}_{km}\Delta^i_m \sum^{k-1}_{\underline{i}=i^*}
a^{\underline{i}l_2}_{k,\underline{m}} \Delta^{\underline{i}}_{\underline{m}}\Big) \\
&+ \sum^{n_k}_{m=-1} \sum^{\underline{n}_k}_{\underline{m}=-1}
\Big( \sum^d_{i=k}a^{il_1}_{km} \Delta^i_m \sum^d_{\underline{i}=k}
 a^{\underline{i}l_2}_{k,\underline{m}} \Delta^{\underline{i}}_{\underline{m}}  \Big)  \\
&+ \sum^{n_k}_{m=-1} \sum^{\underline{n}_k}_{\underline{m}=-1}
\Big(\sum^d_{i=k} a^{il_1}_{km} \Delta^i_m \sum^{k-1}_{\underline{i}=i^*}
a^{\underline{i}l_2}_{k,\underline{m}} \Delta^{\underline{i}}_{\underline{m}} \Big)\\
&+ \sum^{n_k}_{m=-1} \sum^{\underline{n}_k}_{\underline{m}=-1}
\Big(\sum^{k-1}_{i=i^*} a^{il_1}_{km} \Delta^i_m \sum^d_{\underline{i}=k}
 a^{\underline{i}l_2}_{k,\underline{m}} \Delta^{\underline{i}}_{\underline{m}} \Big) \\
&\equiv \sum_1 + \sum_2 + \sum_3 + \sum_4
\end{aligned}
\end{equation}
where for simplicity $a^{il}_{km}$ is taken to be zero if $m=0$ and $l>L$.

Intuitively, each individual $l_1$ and $l_2$ will produce a block array of
the form $(P^{l_1}_{km})_i (P^{l_2}_{k,\underline{m}})_{\underline{i}}$.
In this sense, $m$ and $\underline{m}$ are indices for the block form of
the array, and $i$ and $\underline{i}$ are local indices inside a particular block.

The multiplication of the sums in the terms of $\sum_1$ demonstrates that the
previous iteration's powers of $\Delta_m(t)$ contribute to the next
iteration.  The multiplication in the terms of $\sum_2$ produces terms with
powers of $\Delta_m(t)$ at least $2k+1$ after integrating
and should not be computed in the next integration since terms of power
$2k,2k-1, \dots, k+1$ are not yet present in the approximations
to contribute.  This also applies to powers above $k$ in $\sum_1$.

The multiplications from $\sum_3$ and $\sum_4$ are wing terms, and depending
on whether $m > \underline{m}$ or $m < \underline{m}$, one set will
contribute $\Delta^i_m(t)$ with $i \geq k$ and will not be computed,
while the other set, due to Lemma \ref{FL}, will contribute $\Delta^i_m(t)$
with $i = m+1, \ldots, k$.  This implies that coefficients for lower powers
in general are not invariant with respect to iterations; however, they
can be shown to be invariant with respect to iterations when considered as
polynomials in $\tau$.

To see this, denote the coefficient on $\Delta^{i+1}_m$, with $i \leq k$, in the
$l$ component of $\mathbf{u}_k$ by $a^{i+1,l}_{km}$.
This coefficient may be a polynomial in $\tau$, and is determined by summing
up the coefficients on all $\Delta^{i+1}_m$ terms in
the arrays for each $(l_1,l_2)$.  Since $i$ is considered fixed,
introduce $I$, $J$ as independent indices for powers of $\Delta_m$.
In each array, these sums have the form
\begin{equation} \label{UP}
\begin{aligned}
&a^{i+1,l}_{k+1,m} \\
& = \sum^{2L}_{l_1=1} \sum^{2L}_{l_2=1}
 \Big[ (i+1)^{-1} \sum_{I+J=i} a^{Il_1}_{km} a^{Jl_2}_{km} \\
&\quad + \sum^{m-1}_{\underline{m}=-1}
\Big( \sum^i_{J=m} \sum^k_{I=i-J} a^{Il_1}_{k,\underline{m}} a^{Jl_2}_{km}
 \gamma \tau^{IJ}
 + \sum^i_{I=m} \sum^k_{J=i-I} a^{Il_1}_{km} a^{Jl_2}_{k,\underline{m}}
 \gamma \tau^{IJ}\Big) \Big]
\end{aligned}
\end{equation}
where $\gamma \tau^{IJ} \equiv \gamma_{m\underline{m}}\tau^{I+J-i}$.
 Note that $\gamma_{m,\underline{m}}$ is constant with respect to
$\tau$ and independent of $k$.  The terms in the first line come out of the
appropriate main diagonal block and the second line has all
the wing terms.  Note also that the indices have been relabeled so that $m$
is the larger index, while $\underline{m}$ is the smaller one.

Consider now a future integration for the same term $\Delta^{i+1}_m(t)$
using $\mathbf{u}_{k+n}$,
\begin{align*}
a^{i+1,l}_{k+n+1,m}
&= \sum^{2L}_{l_1=1} \sum^{2L}_{l_2=1} \Big[ (i+1)^{-1}
\sum_{I+J=i} a^{Il_1}_{k+n,m} a^{Jl_2}_{k+n,m} \\
&\quad + \sum^{m}_{\underline{m}=-1}
\Big( \sum^i_{J=m} \sum^{k+n}_{I=i-J} a^{Il_1}_{k+n,\underline{m}} a^{Jl_2}_{k+n,m}
 \gamma \tau^{IJ} \\
&\quad + \sum^i_{I=m} \sum^{k+n}_{J=i-I} a^{Il_1}_{k+n,m} a^{Jl_2}_{k+n,\underline{m}}
 \gamma \tau^{IJ}\Big) \Big]
\end{align*}
and upon breaking off the powers above $k$, we have
\begin{equation} \label{KB}
\begin{aligned}
&a^{i+1,l}_{k+n+1,m}\\
&= \sum^{2L}_{l_1=1} \sum^{2L}_{l_2=1}
 \Big[(i+1)^{-1} \sum_{I+J=i} a^{Il_1}_{k+n,m} a^{Jl_2}_{k+n,m} \\
& \quad + \sum^{m}_{\underline{m}=-1}
 \Big( \sum^i_{J=m} \sum^k_{I=i-J} a^{Il_1}_{k+n,\underline{m}} a^{Jl_2}_{k+n,m}
 \gamma \tau^{IJ}
 + \sum^i_{I=m} \sum^k_{J=i-I} a^{Il_1}_{k+n,m} a^{Jl_2}_{k+n,\underline{m}}
 \gamma \tau^{IJ}  \\
& \quad  + \sum^i_{J=m} \sum^{k+n}_{I=k+1} a^{Il_1}_{k+n,\underline{m}}
 a^{Jl_2}_{k+n,m} \gamma \tau^{IJ}
+ \sum^i_{I=m} \sum^{k+n}_{J=k+1} a^{Il_1}_{k+n,m} a^{Jl_2}_{k+n,\underline{m}}
 \gamma \tau^{IJ} \Big) \Big]
\end{aligned}
\end{equation}
For notational convenience, the double sum over $l_1$ and $l_2$ will be suppressed
along with the $l_1$ and $l_2$ dependence,
denoting $a^I_{km} \equiv a^{Il_1}_{km}$ and $b^J_{km} \equiv a^{Jl_2}_{km}$.
Now let $k=0$, which implies that $m$, $i$ and
$s$ are all $0$ in the theorem statement.  Theorem \ref{FT} shows that the theorem
statement holds for $k=0$.

If the theorem statement holds for arbitrary $k$, i.e. $i \leq k$ and
$s \leq s^*$, then
\begin{equation}
a^I_{k+n,m} = a^I_{km} + \sum^{d}_{s=s^*+1} \underline{a}^{Is}_{k+n,m}
\tau^s \equiv a^{I}_{km} + p^a_{s^*+1}\label{CO}
\end{equation}
with a similar statement for $b^J_{k+n,m}$.  Inserting \eqref{CO}
into \eqref{KB} and expanding, we have
\begin{align}
&(i+1)^{-1} \sum_{I+J=i}
 \Big(a^I_{km} b^J_{km} + p^a_{s^*+1}b^J_{km} + p^b_{s^*+1}
 a^I_{km} +  p^a_{s^*+1}p^b_{s^*+1} \Big) \nonumber \\
&+ \sum^{m-1}_{\underline{m}=-1} \bigg( \sum^i_{J=m} \sum^k_{I=i-J}
\Big(a^I_{k,\underline{m}} b^J_{km} + p^a_{s^*+1}b^J_{km}
+ p^b_{s^*+1}a^I_{k,\underline{m}} +  p^a_{s^*+1}p^b_{s^*+1} \Big) \gamma \tau^{IJ}
\nonumber \\
&+ \sum^i_{I=m} \sum^k_{J=i-I} \Big(a^I_{km} b^J_{k,\underline{m}}
+ p^a_{s^*+1}b^J_{k,\underline{m}} + p^b_{s^*+1}
 a^I_{km} + p^a_{s^*+1} p^b_{s^*+1} \Big) \gamma \tau^{IJ} \nonumber \\
& + \sum^i_{J=m} \sum^{k+n}_{I=k+1} \Big(a^I_{k,\underline{m}} b^J_{km}
 + p^a_{s^*+1}b^J_{km} + p^b_{s^*+1}
 a^I_{k,\underline{m}} + p^a_{s^*+1}p^b_{s^*+1} \Big) \gamma \tau^{IJ} \nonumber \\
&+ \sum^i_{I=m} \sum^{k+n}_{J=k+1} \Big(a^I_{km} b^J_{k,\underline{m}}
 + p^a_{s^*+1}b^J_{k,\underline{m}} + p^b_{s^*+1} a^I_{km}
  + p^a_{s^*+1}p^b_{s^*+1} \Big) \gamma \tau^{IJ} \bigg) \label{COEFF}
\end{align}
and note that the first multiplications in the first, second and third lines
of \eqref{COEFF} are the same terms from \eqref{UP}.  In the
second and third terms in the first line of \eqref{COEFF}, note that $a$ and $b$
 may have constant terms.  It follows that these terms will
have a minimum power of $\tau$ given by the minimum power in $p_{s^*+1}$,
which is $s^*+1$ and hence, they may be written
in the form $P(\tau) \tau^{s^*+1}$, where $P$ is a polynomial with a constant term.
 All the second and third terms in lines two
through five have a larger power of $\tau$ than $s^*+1$ and are thus considered
higher order.  This is also true for all the terms of
the form $p^a_{s^*+1}p^b_{s^*+1}$.

So, summing over $l_1$ and $l_2$ and explicitly denoting the lowest power of
$\tau$ in the first three lines of \eqref{COEFF} along with the lowest power of
$\tau$ in the last two lines, we have
\begin{equation}
\begin{aligned}
a^{i+1}_{k+n+1,m}  &= a^{i+1}_{k+1,m} + P \tau^{k+m-i} + G(\tau) \\
&\quad + \sum^{2L}_{l_1=1} \sum^{2L}_{l_2=1} c^l_{l_1l_2}
 \sum^{m-1}_{\underline{m}=m^*} \Big( \sum^i_{J=m} \sum^{k+n}_{I=k+1}
 a^{Il_1}_{k,\underline{m}} a^{Jl_2}_{km} \gamma \tau^{IJ} \\
&\quad + \sum^i_{I=m} \sum^{k+n}_{J=k+1} a^{Il_1}_{km} a^{Jl_2}_{k,\underline{m}}
 \gamma \tau^{IJ}\Big)
\end{aligned}\label{LOT}
\end{equation}
where $P$ is a polynomial with respect to $\tau$ with a constant term and
$G(\tau)$ contains the higher order terms.  In addition, both
$a^{il_1}_{km}$ and $a^{il_2}_{km}$ may have constant terms, so that the
minimum power of $\tau$ in the second line of \eqref{LOT}
is $\min (I+J) - i = k+1+m-i = s^*+1$ over both sums.
Hence, $\underline{\mathbf{a}}^{i+1,s}_{k+n+1,m} = \underline{\mathbf{a}}^{i+1,s}_{k+1,m}$
if $s \leq k+m-i-1$,
and so \eqref{TC} holds for $k+1$, and thus for all $k$.
\end{proof}

Note that both powers of $\Delta_m(t)$ along with powers of $\tau$ from
Lemma \ref{FL} are constant along off diagonals in each block array.
Hence the main diagonal blocks dictate keeping powers of $\Delta_m(t)$
up to $i \leq k+1$, and then keeping powers of $\tau$ up to $k+m-i$.

 PSM also provides an explicit a priori error bound which does not involve
derivatives of the vector field, \cite{WWSPC}.
Note that over $[\tau_{m-1},\tau_m]$, the structure in Theorem \ref{FT}
can be expanded in a series around the single center $\tau_{m-1}$.
 Using this representation, the error estimate for PSM may be extended
to the deviating argument case.

Some notation will now be introduced; to facilitate comparison, it overlaps
strongly with the notation in \cite{WWSPC}.  Denote
by $\|\mathbf{v}\|_{\infty} = \max_l |v^l|$ the maximum magnitude over the
components of $\mathbf{v}$.  The vector field will be polynomial and
written compactly as
\[
\mathbf{f}(\mathbf{u},{\bf u^*}) = \sum^{d_\mathbf{f}}_{{\bf i}=0}
\sum^{d_\mathbf{f}}_{{\bf j}=0} \textbf{B}_{\bf ij}
\mathbf{u}^{\bf i} \mathbf{u}^{\bf j}_*
\]
where the sum over the ordered list of indices implies a sum over each
index in the list ranging from $0$ to $d_\mathbf{f}$,
the exponentiation between ordered lists is componentwise and the asterisk
denotes the delay terms.   Denote by $\|{\bf B_{ij}}\|$
the maximum row sum of coefficient magnitudes, the subordinate matrix norm
to the vector norm $\|\cdot\|_{\infty}$.
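To make the compact notation concrete, the scalar ($L=1$) vector field of
Example \ref{examp5}, $y'(t) = y^2(t) + (y(t-\tau))^2$, is recovered by taking
${\bf B}_{20} = {\bf B}_{02} = 1$ and every other ${\bf B_{ij}} = 0$, so that
\[
\mathbf{f}(u,u_*) = \sum_{\bf i} \sum_{\bf j} \textbf{B}_{\bf ij}\,
u^{\bf i} u^{\bf j}_* = u^2 + u_*^2 .
\]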

If one considers \eqref{DDEs} globally, then $\|{\bf B_{ij}}\|$ would be the
norm of the vector field.  There is also occasion to consider
the local vector field for the ODE in $[\tau_m,\tau_{m+1}]$ with the delay
information considered `known' and hence, the norm of the vector
field would need to incorporate the delay information into the coefficient matrix.
Denote the local ODE vector field by $\mathbf{f}_*(\mathbf{u})$.

\begin{theorem} For finite $m \geq 0$, the expansion
$\mathbf{u}_{km}(t) = \sum^m_{\underline{m}=-1} \mathbf{p}_{k,\underline{m}}(t)$
generated by \eqref{MP} is such that
\begin{equation} \label{EB}
\|\mathbf{u}(t) - \mathbf{u}_{km}(t)\|_{\infty}
\leq C_{km}  \Big(F(t-\tau_m;\|{\bf B_{ij}}\|_{\infty},d_{\mathbf{f}})
- \sum^k_{q=0} z_q (t-\tau_m)^q \Big)
\end{equation}
 when $t \in [\tau_m,T]$ with $d_{\mathbf{f}}+1$ being the vector field's
degree in \eqref{DDEs} and constant $C_{km}$
dependent on $\|\boldsymbol{\Psi}\|_{\infty}$, the norm of the initial
data along with
\[
F(t;a,b) = \exp(at), \quad b=0 \quad \text{and} \quad
F(t;a,b) = \frac{1}{(1-abt)^{1/b}}, \quad b\geq1.
\]
The sequence elements $z_q$ are the expansion coefficients of $F$, which
solves $z'(t) = a z^{b+1}$.
\end{theorem}

 This result holds for non-constant initial data.

\begin{proof}
Approximating \eqref{DDEs} via \eqref{MP}, note that in each $\tau$-step,
the DDE reduces to an ODE with non-constant coefficients and/or non-homogeneous
terms involving the solution from the previous $\tau$-step.

To recover an ODE, the vector field must be modified to include the delay
information in its coefficients.   Summing
$\textbf{B}_{ij} \mathbf{u}_*^{j}$ over ${j}$ would yield coefficients
for the ODE $\mathbf{f}_*(\mathbf{u})$ over the
particular $\tau$-step.   Noting that \eqref{PI} holds in the limit, we have
\begin{equation} \label{UB}
\mathbf{u}_*(t) \leq \max_{[0,t]} {\bf u_*} \leq \max_{[0,t]}
\mathbf{u} \leq M_{\Psi}+ t M_f
\end{equation}
using the notation of Proposition \ref{bc}, and the coefficient matrix
satisfies the bound
\[
\|\mathbf{f}_*\| \leq \|\textbf{B}_{ij}\| \big((M_{\Psi}
+ \tau_m M_f)^* \big)^{d_\mathbf{f}+1} \equiv \|\mathbf{f}_m\|
\]
where the asterisk indicates $\max \{\cdot,1\}$.  Hence, \eqref{DDEs}
becomes the ODE initial value problem
$\mathbf{u}' = \mathbf{f}_*(\mathbf{u})$ with initial data
$\mathbf{u}(\tau_{m}) = \mathbf{u}_{km} (0)$, to which the result from
\cite{WWSPC} can be applied.   For convenience, $\|\cdot\|_{\infty}$ is shortened
to $\|\cdot\|$, $F(\cdot;\|\mathbf{f}_m\|,d_\mathbf{f})$ is shortened to
$F_m(\cdot)$ and $\|\mathbf{e}_{km}(t)\| \equiv \|\mathbf{u}(t) - \mathbf{u}_{km}(t) \|$.  This yields
\begin{equation} \label{ER}
\|\mathbf{e}_{km}(t)\| \leq \|\mathbf{u}^*_{k,m-1}(\tau_m)\|
\Big( F_m(t-\tau_m) - \sum^k_{q=0} z_{qm} (t-\tau_m)^q \Big)
\end{equation}
where the asterisk again indicates $\max\{\cdot,1\}$ and $z_{qm}$
are the expansion coefficients of $F_m$ which solves
$z'(t) = \|\mathbf{f}_m\| z^{d_\mathbf{f}+1}$.  Using the fact that
$\|\mathbf{e}_{km}(t)\| \geq 0$  along with the substitution
 $\|\mathbf{u}^*_{k,m-1}(\tau_m)\| = \|\mathbf{u}(\tau_m)-
\mathbf{e}_{k,m-1}(\tau_m)\|$, we have
\begin{align*}
\|\mathbf{u}^*_{k,m-1}(\tau_m)\|
&\leq \max \{ \|\mathbf{u}(\tau_m)\| + \|\mathbf{e}_{k,m-1}(\tau_m)\|,1
+{\|\mathbf{e}_{k,m-1}(\tau_m)\|}\} \\
 & \leq \max \{ \|\mathbf{u}(\tau_m)\|,1\} + \|\mathbf{e}_{k,m-1}(\tau_m) \|
 \end{align*}
Using \eqref{PI} in the limit as before to bound the solution, then
\eqref{ER} becomes
\begin{equation}
\begin{aligned}
&\|\mathbf{e}_{km}(t)\| \\
&\leq \big((M_{\Psi} + \tau_m M_f)^* + \|\mathbf{e}_{k,m-1}(\tau_m)\|\big)
\Big( F_m(t-\tau_m) - \sum^k_{q=0} z_{qm} (t-\tau_m)^q \Big)
\end{aligned} \label{EBb}
\end{equation}
and \eqref{EBb} may then be continued to
\begin{equation*}
\|\mathbf{e}_{km}(t)\| \leq C_{km}
\Big( F_m(t-\tau_m) - \sum^k_{q=0} z_{qm} (t-\tau_m)^q \Big)
\end{equation*}
which is \eqref{EB}, where
\[
 C_{km} \equiv \left(M_{\Psi} + \tau_m M_f \right)^*
\sum^m_{\underline{m}=0} \prod^m_{n=\underline{m}}
\Big( F_n(\tau_{n+1}-\tau_n) - \sum^k_{q=0} z_{qn} (\tau_{n+1}-\tau_n)^q \Big)
\]
 Note that a slightly less tight, but computationally nicer bound exists
for the basic PSM result which
could be adapted here, see \cite{WWSPC}.
\end{proof}
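As a sanity check on the shape of the bracketed factor in \eqref{EB}, consider
the classical delay-free case $z' = z^2$, $z(0)=1$, whose majorant solution is
$F(t) = 1/(1-t)$ with all expansion coefficients $z_q = 1$; the factor is then
the geometric series tail $t^{k+1}/(1-t)$.  The following sketch (illustrative
only, plain floating point, names my own) confirms this.

```python
def F(t):
    # Majorant for z' = z**2, z(0) = 1:  F(t) = 1 / (1 - t)
    return 1.0 / (1.0 - t)

def bound_factor(t, k):
    """Bracketed factor in the error bound: F minus its degree-k Maclaurin
    polynomial.  Here every Maclaurin coefficient of F equals 1."""
    partial = sum(t**q for q in range(k + 1))
    return F(t) - partial

# The tail of the geometric series collapses to t**(k+1) / (1 - t).
t, k = 0.5, 3
tail = bound_factor(t, k)          # 2.0 - 1.875 = 0.125
closed = t**(k + 1) / (1 - t)      # 0.0625 / 0.5 = 0.125
print(tail, closed)
```

The factor decays like $t^{k+1}$ near the left endpoint of the $\tau$-step,
which is the expected order of the degree-$k$ approximation.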

\section{General delay structure} \label{sec:extra}

There is a logical difficulty extending Proposition 3.1 to the state dependent
delay case due to the changing nature of $\Delta$
with respect to iterations.  Hence, the discussion of the approximation of
this case via dPSM is heuristic. With regards to \eqref{TP},
note that $\Delta$ can only be known approximately, since $\mathbf{u}$ would only
be known approximately.   If one imagines a sequence
of approximations $\Delta_k$ to $\Delta$, these could differ in functional
form each iteration, or possibly only in certain parameters,
for example $\sigma$ and/or $\tau = \tau_{-1}$ in the case of \eqref{LDS}.

If a problem starts in the solution's basin of attraction so that the basic
iteration would evolve to the correct fixed point, then
$\Delta_{km}(t) = \Delta(t,\mathbf{u}_{km}(t))$ may be computed based on the
 computation of $\mathbf{u}_{km}$.  At the very least,
one needs to store the coefficients as polynomials in $\tau$, so that updating
the set of $\tau_m$ at each iteration, based
on the currently computed $\Delta_k(t)$, also updates the previously
computed coefficients without their having to be
recomputed each iteration.  In other words, there would be invariance in previously
computed coefficients once $\tau_{k-1,m}$
is updated to $\tau_{km}$ based on $\Delta_{km}(t)$.

\subsection*{Acknowledgments}
 The author would like to thank Drs.\ James Sochacki and Allen Holder for useful
discussions, constructive criticisms and their donations of time and expertise
during the writing of this article.

\begin{thebibliography}{99}

\bibitem{CTH} C. T. H. Baker, C. A. H. Paul, D. R. Will\'e;
 \emph{Issues in the Numerical Solution of Evolutionary Delay Differential Equations},
Adv. Comp. Math., 3 (1995), pp. 171-196.

\bibitem{NDE} A. Bellen, N. Guglielmi, A. E. Ruehli;
 \emph{Methods for Linear Systems of Circuit Delay Differential Equations of Neutral
Type}, IEEE Transactions on Circuits and Systems - I: Fundamental Theory and
Applications, 46 (1999) 1, pp. 212-215.

\bibitem{CPSW} D. C. Carothers, G. E. Parker, J. S. Sochacki, P. G. Warne;
\emph{Some Properties of Solutions to Polynomial Systems of Equations},
Elec. Jour. of Diff. Eqn., 2005 (2005) 40, pp. 1-17.

\bibitem{CL} E. A. Coddington, N. Levinson;
\emph{Theory of Ordinary Differential Equations},
McGraw-Hill, New York, 1955.

\bibitem{ELS} L. E. El'sgol'ts;
 \emph{Introduction to the Theory of Differential Equations with Deviating Arguments},
(translated by R. J. McLaughlin), Holden-Day, San Francisco, 1966.

\bibitem{F} E. Fehlberg;
 \emph{Numerical Integration of Differential Equations by Power Series Expansions,
Illustrated by Physical Examples}, Technical Report NASA-TN-D-2356, NASA, (1964).

\bibitem{ISQ} V. M. Isaia;
\emph{dPSM and Polynomial Initial Data}, in preparation.

\bibitem{NOR} S. B. Norkin;
 \emph{Differential Equations of the Second Order with Retarded Argument},
Translations of Mathematical Monographs (L. J. Grimm), vol. 31, AMS, Providence, 1972.

\bibitem{PS} G. E. Parker, J. S. Sochacki;
\emph{Implementing the Picard Iteration}, Neural, Parallel and Scientific
Computations, 4 (1996) 1, pp. 97-112.

\bibitem{PSPDE} G. E. Parker, J. S. Sochacki;
\emph{A Picard-Maclaurin Theorem for Initial Value PDEs}, Abstr. Appl. Anal.,
5 (2000) 1, pp. 47-63.

\bibitem{ST}  J. S. Sochacki, A. Tongen;
\emph{Exploring Polynomial Dynamical Systems: An Interesting Application of
 Power Series to Differential Equations}, Springer-Verlag, 2016.

\bibitem{SB} R. D. Stewart, W. Bair;
\emph{Spiking Neural Network Simulation: Numerical Integration
with the Parker-Sochacki Method}, Journal Computational Neuroscience, 27 (2009),
pp. 115-133.

\bibitem{WWSPC} P. G. Warne, D. A. P. Warne, J. S. Sochacki, G. E. Parker,
 D. C. Carothers;  \emph{Explicit A-Priori Error Bounds and
Adaptive Error Control for Approximation of Nonlinear Initial Value Differential
Systems}, Comput. Math. Appl., 52 (2006), pp. 1695-1710.

\end{thebibliography}


\end{document}
