\(\newcommand \ensuremath [1]{#1}\) \(\newcommand \footnote [2][]{\text {( Footnote #1 )}}\) \(\newcommand \footnotemark [1][]{\text {( Footnote #1 )}}\) \(\newcommand {\stproc }[1]{\stproca {#1_t}} \newcommand {\stproca }[1]{\left (#1\right )_{t \ge 0}} \newcommand {\leftsub }[2]{{\protect \vphantom {#2}}_{#1}{#2}} \newcommand {\RR }{\mathbb {R}} \newcommand {\NN }{\mathbb {N}} \newcommand {\stP }{\mathrm {P}} \newcommand {\stE }{\mathrm {E}} \newcommand {\stPhat }{\mathrm {\hat P}} \newcommand {\stEhat }{\mathrm {\hat E}} \newcommand {\stPup }{\stP ^\uparrow } \newcommand {\stEup }{\stE ^\uparrow } \newcommand {\stPhatup }{\stPhat ^\uparrow } \newcommand {\stPcsx }[3]{\stP _{#1} \left [ #2 \middle \vert #3 \right ]} \newcommand {\Xup }{X^\uparrow } \newcommand {\Xhatup }{\hat X^\uparrow } \newcommand {\Ych }{\check {Y}} \newcommand {\LevP }{\mathbb {P}} \newcommand {\LevE }{\mathbb {E}} \newcommand {\LevPhat }{\mathbb {\hat P}} \newcommand {\LevEhat }{\mathbb {\hat E}} \newcommand {\LevPhatup }{\LevPhat ^\uparrow } \newcommand {\LevEhatup }{\LevEhat ^\uparrow } \newcommand {\xiLS }{\xi ^{\mathrm {L}}} \newcommand {\xiCPP }{\xi ^{\mathrm {C}}} \newcommand {\xiup }{\xi ^\uparrow } \newcommand {\xihatup }{\hat \xi ^\uparrow } \newcommand {\CELS }{\CE ^{\mathrm {L}}} \newcommand {\CECPP }{\CE ^{\mathrm {C}}} \newcommand {\LD }{\pi } \newcommand {\LDCPP }{\pi ^{\mathrm {C}}} \newcommand {\LDLS }{\pi ^{\mathrm {L}}} \newcommand {\FF }{\mathscr {F}} \newcommand {\GG }{\mathscr {G}} \newcommand {\FFt }{\stproc {\FF }} \newcommand {\GGt }{\stproca {\GG _t}} \newcommand {\iu }{\mathrm {i}} \newcommand {\Indic }[1]{\Ind _{(#1)}} \newcommand {\dd }{\mathrm {d}} \newcommand {\abs }[1]{\left \lvert #1 \right \rvert } \newcommand {\for }{\qquad } \DeclareMathOperator {\sgn }{sgn} \newcommand {\rhohat }{\hat {\rho }} \newcommand {\stparamset }{\mathcal {A}} \newcommand {\LSabs }{\xi ^*} \newcommand {\jump }{\Delta } \newcommand {\taull }{{\tau \! 
-}} \newcommand {\Ghgsymb }{{\leftsub {2}{\mathcal {F}}}_1} \newcommand {\Ghg }[4]{\Ghgsymb (#1,#2;#3;#4)} \newcommand {\CE }{\Psi } \newcommand {\LSdrift }{\mathtt {d}} \newcommand {\Ttrans }{\mathcal {T}} \newcommand {\EsscherT }{\mathcal {E}} \newcommand {\dint }{\displaystyle \int } \newcommand {\upto }{\uparrow } \) \(\newcommand {\Ind }{\unicode {x1D7D9}} \newcommand {\protect }{} \)

HITTING DISTRIBUTIONS OF \(\alpha \)-STABLE PROCESSES VIA PATH CENSORING AND SELF-SIMILARITY

By Andreas E. Kyprianou*,§,¶, Juan Carlos Pardo†,‡ and Alexander R. Watson*,§

*University of Bath †CIMAT

AMS 2000 subject classifications: 60G52, 60G18, 60G51.

Keywords and phrases: Lévy processes, stable processes, hitting distributions, hitting probabilities, killed potential, stable processes conditioned to stay positive, positive self-similar Markov processes, Lamperti transform, Lamperti-stable processes, hypergeometric Lévy processes.

We consider two first passage problems for stable processes, not necessarily symmetric, in one dimension. We make use of a novel method of path censoring in order to deduce explicit formulas for hitting probabilities, hitting distributions, and a killed potential measure. To do this, we describe in full detail the Wiener-Hopf factorisation of a new Lamperti-stable-type Lévy process obtained via the Lamperti transform, in the style of recent work in this area.

1. Introduction. A Lévy process is a stochastic process issued from the origin with stationary and independent increments and càdlàg paths. If \(X: = (X_t)_{t\geq 0}\) is a one-dimensional Lévy process with law \(\stP \), then the classical Lévy-Khintchine formula states that for all \(t\geq 0\) and \(\theta \in \RR \), the characteristic exponent \(\Psi (\theta ) : = -t^{-1}\log \stE (e^{\iu \theta X_t})\) satisfies

\[ \Psi (\theta ) = \iu a\theta + \frac {1}{2}\sigma ^2\theta ^2 + \int _{\RR } (1 - e^{\iu \theta x} + \iu \theta x\Indic {|x|\leq 1})\Pi (\dd x), \]

where \(a\in \mathbb {R}\), \(\sigma \geq 0\) and \(\Pi \) is a measure (the Lévy measure) concentrated on \(\RR \setminus \{0\}\) such that \(\int _{\RR }(1\wedge x^2)\Pi (\dd x)<\infty \).

\((X,\stP )\) is said to be a (strictly) \(\alpha \)-stable process if it is a Lévy process which also satisfies the scaling property: under \(\stP \), for every \(c > 0\), the process \((cX_{t c^{-\alpha }})_{t \ge 0}\) has the same law as \(X\). It is known that \(\alpha \in (0,2]\), and the case \(\alpha = 2\) corresponds to Brownian motion, which we exclude. The Lévy-Khintchine representation of such a process is as follows: \(\sigma = 0\), and \(\Pi \) is absolutely continuous with density given by

\[ c_+ x^{-(\alpha +1)} \Indic {x > 0} + c_- \abs {x}^{-(\alpha +1)} \Indic {x < 0}, \for x \in \RR , \]

where \(c_+, c_- \ge 0\), and \(c_+ = c_-\) when \(\alpha = 1\). It holds that \(a = (c_+-c_-)/(\alpha -1)\) when \(\alpha \ne 1\), and we specify that \(a = 0\) when \(\alpha = 1\); the latter condition is a restriction which ensures that \(X\) is a symmetric process when \(\alpha = 1\), so the only \(1\)-stable process we consider is the symmetric Cauchy process.

These choices mean that, up to a multiplicative constant \(c>0\), \(X\) has the characteristic exponent

\[ \Psi (\theta ) = \begin {cases} c\abs {\theta }^\alpha (1 - \iu \beta \tan \frac {\pi \alpha }{2}\sgn \theta ) & \alpha \in (0,2)\setminus \{1\}, \\ c\abs {\theta } & \alpha = 1, \end {cases} \for \theta \in \mathbb {R}, \]

where \(\beta = (c_+- c_-)/(c_+ + c_-)\). For more details, see Sato [31, §14].

For consistency with the literature we appeal to in this article, we shall always parameterise our \(\alpha \)-stable process such that

\[ c_+ = \frac {\Gamma (\alpha +1)}{\Gamma (\alpha \rho )\Gamma (1-\alpha \rho )} \quad \text {and} \quad c_- = \frac {\Gamma (\alpha +1)}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )}, \]

where \(\rho = \stP (X_t \ge 0) = \stP (X_t > 0)\) is the positivity parameter, and \(\rhohat = 1-\rho \).
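As a numerical aside (not part of the paper's development), this normalisation can be checked against Euler's reflection formula \(\Gamma (z)\Gamma (1-z) = \pi /\sin (\pi z)\), which gives the equivalent form \(c_+ = \Gamma (\alpha +1)\sin (\pi \alpha \rho )/\pi \), and similarly for \(c_-\) with \(\rho \) replaced by \(\rhohat \); in particular, \(\beta \) is then a function of \((\alpha , \rho )\) alone, as claimed below.

```python
import math

def stable_constants(alpha, rho):
    """c_+ and c_- under the paper's normalisation."""
    rhohat = 1 - rho
    cp = math.gamma(alpha + 1) / (math.gamma(alpha * rho) * math.gamma(1 - alpha * rho))
    cm = math.gamma(alpha + 1) / (math.gamma(alpha * rhohat) * math.gamma(1 - alpha * rhohat))
    return cp, cm

alpha, rho = 1.5, 0.6   # illustrative admissible parameters
cp, cm = stable_constants(alpha, rho)

# reflection-formula form of the same constants
assert abs(cp - math.gamma(alpha + 1) * math.sin(math.pi * alpha * rho) / math.pi) < 1e-12
assert abs(cm - math.gamma(alpha + 1) * math.sin(math.pi * alpha * (1 - rho)) / math.pi) < 1e-12

# the skewness parameter is determined by (alpha, rho) alone
beta = (cp - cm) / (cp + cm)
assert -1 < beta < 1
```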

We take the point of view that the class of stable processes, with this normalisation, is parameterised by \(\alpha \) and \(\rho \); the reader will note that all the quantities above can be written in terms of these parameters. We shall restrict ourselves a little further within this class by excluding the possibility of having only one-sided jumps. Together with our assumption about the case \(\alpha = 1\), this gives us the following set of admissible parameters:

\begin{multline*} \stparamset = \bigl \{ (\alpha ,\rho ) : \alpha \in (0,1), \, \rho \in (0,1) \bigr \} \\ {} \cup \bigl \{ (\alpha ,\rho ) : \alpha \in (1,2), \, \rho \in (1-1/\alpha , 1/\alpha ) \bigr \} \cup \bigl \{ (\alpha , \rho ) = (1, 1/2) \bigr \}. \end{multline*}
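The set \(\stparamset \) can be encoded directly; the following small predicate (illustrative, not from the paper) makes the constraints explicit.

```python
# Membership test for the admissible parameter set A described above.
def admissible(alpha, rho):
    if 0 < alpha < 1:
        return 0 < rho < 1
    if alpha == 1:
        return rho == 0.5                        # only the symmetric Cauchy process
    if 1 < alpha < 2:
        return 1 - 1 / alpha < rho < 1 / alpha   # excludes one-sided jumps
    return False                                 # alpha = 2 (Brownian motion) excluded

assert admissible(0.5, 0.3)
assert admissible(1, 0.5)
assert admissible(1.5, 0.4)
assert not admissible(1.5, 1 / 1.5)   # spectrally negative boundary case
assert not admissible(1, 0.4)         # asymmetric 1-stable excluded
assert not admissible(2, 0.5)
```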

After Brownian motion, \(\alpha \)-stable processes are often considered an exemplary family of processes for which many aspects of the general theory of Lévy processes can be illustrated in closed form. First passage problems, which are relatively straightforward to handle in the case of Brownian motion, become much harder in the setting of a general Lévy process on account of the inclusion of jumps. A collection of articles through the 1960s and early 1970s, appealing largely to potential analytic methods for general Markov processes, were relatively successful in handling a number of first passage problems, in particular for symmetric \(\alpha \)-stable processes in one or more dimensions. See, for example, [3, 14, 15, 26, 29] to name but a few.

However, following this cluster of activity, several decades passed with little further progress on these problems. Only in the last few years have a number of new, explicit first passage identities appeared for one-dimensional \(\alpha \)-stable processes, thanks to a better understanding of the intimate relationship between the aforesaid processes and positive self-similar Markov processes. See, for example, [6, 8, 10, 20, 22].

In this paper we return to the work of Blumenthal, Getoor and Ray [3], published in 1961, which gave the law of the position of first entry of a symmetric \(\alpha \)-stable process into the unit ball. Specifically, we are interested in establishing the same law, but now for all the one-dimensional \(\alpha \)-stable processes which fall within the parameter regime \(\stparamset \); we remark that Port [26, §3.1, Remark 3] found this law for processes with one-sided jumps, which justifies our exclusion of these processes in this work. Our method is modern in the sense that we appeal to the relationship of \(\alpha \)-stable processes with certain positive self-similar Markov processes. However, there are two notable additional innovations. First, we make use of a type of path censoring. Second, we are able to describe in explicit analytical detail a non-trivial Wiener-Hopf factorisation of an auxiliary Lévy process from which the desired solution can be sourced. Moreover, as a consequence of this approach, we are able to deliver a number of additional, related identities in explicit form for \(\alpha \)-stable processes.

We now state the main results of the paper. Let \(\stP _x\) refer to the law of \(X+x\) under \(\stP \), for each \(x\in \mathbb {R}\). We introduce the first hitting time of the interval \((-1,1)\),

\[ \tau _{-1}^1 = \inf \{ t > 0 : X_t \in (-1,1) \} . \]

Note that, for \(x \notin \{-1,1\}\), \(\stP _x\bigl (X_{\tau _{-1}^1} \in (-1,1)\bigr ) = 1\) so long as \(X\) is not spectrally one-sided. However, in Proposition 1.3, we will consider a spectrally negative \(\alpha \)-stable process, for which \(X_{\tau _{-1}^1}\) may take the value \(-1\) with positive probability.

Supported by CONACYT grant 128896.

§These authors gratefully acknowledge support from the Santander Research Fund.

Corresponding Author.

  • Theorem 1.1 Let \(x > 1\). Then, when \(\alpha \in (0,1]\),

    \begin{multline*} \stP _x\bigl (X_{\tau _{-1}^1} \in \dd y, \, \tau _{-1}^1 < \infty \bigr )/\dd y \\ = \frac {\sin (\pi \alpha \rhohat )}{\pi } (x+1)^{\alpha \rho } (x-1)^{\alpha \rhohat } (1+y)^{-\alpha \rho } (1-y)^{-\alpha \rhohat } (x-y)^{-1} , \end{multline*} for \(y \in (-1,1)\). When \(\alpha \in (1,2)\),

    \begin{multline*} \stP _x(X_{\tau _{-1}^1} \in \dd y)/\dd y \\ = \frac {\sin (\pi \alpha \rhohat )}{\pi } (x+1)^{\alpha \rho } (x-1)^{\alpha \rhohat } (1+y)^{-\alpha \rho } (1-y)^{-\alpha \rhohat } (x-y)^{-1} \qquad \qquad \\ {} - (\alpha -1) \frac {\sin (\pi \alpha \rhohat )}{\pi } (1+y)^{-\alpha \rho } (1-y)^{-\alpha \rhohat } \int _1^x (t-1)^{\alpha \rhohat -1} (t+1)^{\alpha \rho -1}\, \dd t, \end{multline*} for \(y \in (-1,1)\).

When \(X\) is symmetric, Theorem 1.1 reduces immediately to Theorems B and C of [3]. Moreover, the following hitting probability can be obtained.
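Since every point is recurrent when \(\alpha \in (1,2)\), we have \(\stP _x(\tau _{-1}^1 < \infty ) = 1\), so the density in Theorem 1.1 must have total mass one on \((-1,1)\). The following numerical aside (not part of the development; illustrative choices \(\alpha = 3/2\), \(\rho = 1/2\), \(x = 2\), with power substitutions taming the endpoint singularities) checks this.

```python
import math

def simpson(f, lo, hi, n=8000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

alpha, rho, x = 1.5, 0.5, 2.0
a, b = alpha * rho, alpha * (1 - rho)   # alpha*rho and alpha*rhohat
K = math.sin(math.pi * b) / math.pi

# inner integral int_1^x (t-1)^(b-1) (t+1)^(a-1) dt; the substitution
# t = 1 + u^(1/b) removes the integrable singularity at t = 1
p = 1.0 / b
I = simpson(lambda u: p * (2.0 + u ** p) ** (a - 1), 0.0, (x - 1) ** b)

def g(y):
    # density of Theorem 1.1 with the weight (1+y)^-a (1-y)^-b factored out
    return K * (x + 1) ** a * (x - 1) ** b / (x - y) - (alpha - 1) * K * I

# integrate over (-1, 1), substituting 1 -/+ y = s^q to tame the endpoints
q = 1.0 / (1.0 - b)
right = simpson(lambda s: q * (2.0 - s ** q) ** (-a) * g(1 - s ** q), 0.0, 1.0)
q2 = 1.0 / (1.0 - a)
left = simpson(lambda s: q2 * (2.0 - s ** q2) ** (-b) * g(s ** q2 - 1), 0.0, 1.0)

total = left + right   # total mass of the hitting distribution
```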

  • Corollary 1.2 When \(\alpha \in (0,1)\), for \(x > 1\),

    \[ \stP _x( \tau _{-1}^1 = \infty ) = \frac {\Gamma (1-\alpha \rho )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha )} \int _0^{\frac {x-1}{x+1}} t^{\alpha \rhohat - 1} (1-t)^{-\alpha } \, \dd t . \]

This extends Corollary 2 of [3], as can be seen by differentiating and using the doubling formula [17, 8.335.2] for the gamma function.
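In the same spirit, for \(\alpha \in (0,1)\) the mass of the hitting distribution in Theorem 1.1 and the avoidance probability of Corollary 1.2 must sum to one. A numerical aside sketching this consistency check (illustrative choices \(\alpha = \rho = 1/2\), \(x = 2\)):

```python
import math

def simpson(f, lo, hi, n=8000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

alpha, rho, x = 0.5, 0.5, 2.0
a, b = alpha * rho, alpha * (1 - rho)
K = math.sin(math.pi * b) / math.pi

def g(y):
    # Theorem 1.1 density (alpha <= 1 case) without the weight (1+y)^-a (1-y)^-b
    return K * (x + 1) ** a * (x - 1) ** b / (x - y)

# hitting mass: substitute 1 -/+ y = s^q to remove the endpoint singularities
q = 1.0 / (1.0 - b)
hit = simpson(lambda s: q * (2.0 - s ** q) ** (-a) * g(1 - s ** q), 0.0, 1.0) \
    + simpson(lambda s: q * (2.0 - s ** q) ** (-b) * g(s ** q - 1), 0.0, 1.0)
# (a = b for this symmetric choice, so one exponent q serves both halves)

# Corollary 1.2, with t = u^(1/b) removing the singularity at t = 0
pref = math.gamma(1 - a) / (math.gamma(b) * math.gamma(1 - alpha))
m = 1.0 / b
miss = pref * simpson(lambda u: m * (1.0 - u ** m) ** (-alpha),
                      0.0, ((x - 1) / (x + 1)) ** b)
```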

The spectrally one-sided case can be found as the limit of Theorem 1.1, as we now explain. The first part of the coming proposition is due to Port [26], but we re-state it for the sake of clarity.

  • Proposition 1.3 Let \(\alpha \in (1,2)\), and suppose that \(X\) is spectrally negative, that is, \(\rho = 1/\alpha \). Then, the hitting distribution of \([-1,1]\) is given by

    \begin{multline*} \stP _x(X_{\tau _{-1}^{1}} \in \dd y) = \frac {\sin \pi (\alpha -1)}{\pi } (x-1)^{\alpha -1} (1-y)^{1-\alpha } (x-y)^{-1} \dd y \\ {} + \frac {\sin \pi (\alpha -1)}{\pi } \int _0^{\frac {x-1}{x+1}} t^{\alpha -2} (1-t)^{1-\alpha } \, \dd t \, \delta _{-1}(\dd y), \end{multline*} for \(x > 1\), \(y \in [-1,1]\), where \(\delta _{-1}\) is the unit point mass at \(-1\). Furthermore, the measures on \([-1,1]\) given in Theorem 1.1 converge weakly, as \(\rho \to 1/\alpha \), to the limit above.

The following killed potential is also available.

  • Theorem 1.4 Let \(\alpha \in (0,1]\), \(x > 1\) and \(y>1\). Then,

    \begin{multline*} \stE _x \int _0^{\tau _{-1}^1} \Indic {X_t \in \dd y} \, \dd t / \dd y \\ {} = \begin {cases} \dfrac {1}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \biggl (\dfrac {x-y}{2}\biggr )^{\alpha -1} \displaystyle \int _1^{\frac {1-xy}{y-x}} (t-1)^{\alpha \rho -1} (t+1)^{\alpha \rhohat -1} \, \dd t, & 1 < y < x, \\[1em] \dfrac {1}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \biggl (\dfrac {y-x}{2}\biggr )^{\alpha -1} \displaystyle \int _1^{\frac {1-xy}{x-y}} (t-1)^{\alpha \rhohat -1} (t+1)^{\alpha \rho -1} \, \dd t, & y > x. \end {cases} \end{multline*}

To obtain the potential of the previous theorem for \(x < -1\) and \(y < -1\), one may easily appeal to duality. In the case that \(x<-1\) and \(y>1\), one notes that

\begin{equation} \stE _x \int _0^{\tau _{-1}^1} \Indic {X_t \in \dd y} \, \dd t = \stE _x \stE _\Delta \int _0^{\tau _{-1}^1} \Indic {X_t \in \dd y} \, \dd t , \label {DELTA} \end{equation}

where the quantity \(\Delta \) is randomised according to the distribution of
\(X_{\tau ^+_{-1}}\Indic {X_{\tau ^+_{-1}}>1}\), with

\[ \tau ^+_{-1}= \inf \{ t > 0 : X_t > -1 \}. \]

Although the distribution of \(X_{\tau ^+_{-1}}\) is available from [30], and hence the right hand side of (1) can be written down explicitly, it does not seem to be easy to find a convenient closed form expression for the corresponding potential density.

Regarding this potential, let us finally remark that our methods give an explicit expression for this potential even when \(\alpha \in (1,2)\), but again, there does not seem to be a compact expression for the density.

A further result concerns the first passage of \(X\) into the half-line \((1,\infty )\) before hitting zero. Let

\[ \tau _1^+ = \inf \{ t > 0 : X_t > 1 \} \text { and } \tau _0 = \inf \{ t > 0 : X_t = 0 \} . \]

Recall that when \(\alpha \in (0,1]\), \(\stP _x(\tau _0 = \infty ) = 1\), while when \(\alpha \in (1,2)\), \(\stP _x(\tau _0 < \infty ) = 1\), for \(x \ne 0\). In the latter case, we can obtain a hitting probability as follows.

  • Theorem 1.5 Let \(\alpha \in (1,2)\). When \(0 < x < 1\),

    \[ \stP _x(\tau _0 < \tau _1^+) = (\alpha -1) x^{\alpha -1} \int _1^{1/x} (t-1)^{\alpha \rho -1} t^{\alpha \rhohat -1} \, \dd t . \]

    When \(x < 0\),

    \[ \stP _x(\tau _0 < \tau _1^+) = (\alpha -1) (-x)^{\alpha -1} \int _1^{1-1/x} (t-1)^{\alpha \rhohat -1} t^{\alpha \rho -1}\, \dd t . \]
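As a numerical aside (illustrative choices \(\alpha = 3/2\), \(\rho = 1/2\)), the first expression in Theorem 1.5 behaves as a hitting probability should: it lies in \((0,1)\), decreases in \(x\), and tends to one as \(x \downarrow 0\), since from starting points ever closer to the origin the recurrent process hits zero before passing level \(1\).

```python
import math

def simpson(f, lo, hi, n=8000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

alpha, rho = 1.5, 0.5
a, b = alpha * rho, alpha * (1 - rho)

def p_hit_zero(x):
    # P_x(tau_0 < tau_1^+) for 0 < x < 1; the substitution t = 1 + u^(1/a)
    # removes the integrable singularity at t = 1
    m = 1.0 / a
    integral = simpson(lambda u: m * (1.0 + u ** m) ** (b - 1),
                       0.0, (1.0 / x - 1.0) ** a)
    return (alpha - 1) * x ** (alpha - 1) * integral

vals = [p_hit_zero(x) for x in (0.01, 0.5, 0.9)]
assert all(0 < v < 1 for v in vals)
assert vals[0] > vals[1] > vals[2]   # decreasing in x
```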

It is not difficult to push Theorem 1.5 a little further to give the law of the position of first entry into \((1,\infty )\) on the event \(\{\tau ^+_1<\tau _0\}\). Indeed, by the Markov property, for \(x < 1\),

\begin{align} \stP _x(X_{\tau _1^+} \in \dd y, \, \tau _1^+ < \tau _0) &= \stP _x(X_{\tau _1^+} \in \dd y) - \stP _x(X_{\tau _1^+} \in \dd y, \tau _0 < \tau _1^+) \nonumber \\ \label {put-in-Rog} &= \stP _x(X_{\tau _1^+} \in \dd y) - \stP _x(\tau _0 < \tau _1^+) \stP _0(X_{\tau _1^+} \in \dd y). \end{align} Moreover, Rogozin [30] found that, for \(x < 1\) and \(y>1\),

\begin{equation} \stP _x(X_{\tau _1^+} \in \dd y) = \frac {\sin (\pi \alpha \rho )}{\pi } (1-x)^{\alpha \rho } (y-1)^{-\alpha \rho } (y-x)^{-1} \, \dd y. \label {Rog-first} \end{equation}

Hence substituting (3) together with the hitting probability from Theorem 1.5 into (2) yields the following corollary.

  • Corollary 1.6 Let \(\alpha \in (1,2)\). Then, when \(0 < x < 1\),

    \begin{multline*} \stP _x(X_{\tau _1^+} \in \dd y, \, \tau _1^+ < \tau _0) / \dd y \\ = \frac {\sin (\pi \alpha \rho )}{\pi } (1-x)^{\alpha \rho } (y-1)^{-\alpha \rho } (y-x)^{-1} \qquad \qquad \qquad \qquad \quad \\ {} - (\alpha -1) \frac {\sin (\pi \alpha \rho )}{\pi } x^{\alpha -1} (y-1)^{-\alpha \rho } y^{-1} \int _1^{1/x} (t-1)^{\alpha \rho -1} t^{\alpha \rhohat -1}\, \dd t, \end{multline*} for \(y>1\). When \(x < 0\),

    \begin{multline*} \stP _x(X_{\tau _1^+} \in \dd y, \, \tau _1^+ < \tau _0) / \dd y \\ = \frac {\sin (\pi \alpha \rho )}{\pi } (1-x)^{\alpha \rho } (y-1)^{-\alpha \rho } (y-x)^{-1} \qquad \qquad \qquad \qquad \qquad \qquad \quad \\ {} - (\alpha -1) \frac {\sin (\pi \alpha \rho )}{\pi } (-x)^{\alpha -1} (y-1)^{-\alpha \rho } y^{-1} \int _1^{1-1/x} (t-1)^{\alpha \rhohat -1} t^{\alpha \rho -1}\, \dd t, \end{multline*} for \(y>1\).

We conclude this section by giving an overview of the rest of the paper. In Section 2, we recall the Lamperti transform and discuss its relation to \(\alpha \)-stable processes. In Section 3, we explain the operation which gives us the path-censored \(\alpha \)-stable process \(Y\), that is to say the \(\alpha \)-stable process with the negative components of its path removed. We show that \(Y\) is a positive self-similar Markov process, and can therefore be written as the exponential of a time-changed Lévy process, say \(\xi \). We show that the Lévy process \(\xi \) can be decomposed into the sum of a compound Poisson process and a so-called Lamperti-stable process. Section 4 is dedicated to finding the distribution of the jumps of this compound Poisson component, which we then use in Section 5 to compute in explicit detail the Wiener-Hopf factorisation of \(\xi \). Finally, we make use of the explicit nature of the Wiener-Hopf factorisation in Section 6 to prove Theorems 1.1 and 1.4. There we also prove Theorem 1.5 via a connection with the process conditioned to stay positive.

2. Lamperti transform and Lamperti-stable processes.

A positive self-similar Markov process (pssMp) with self-similarity index \(\alpha > 0\) is a standard Markov process \(Y = (Y_t)_{t\geq 0}\) with filtration \(\GGt \) and probability laws \((\stP _x)_{x > 0}\), on \([0,\infty )\), which has \(0\) as an absorbing state and which satisfies the scaling property, that for every \(x, c > 0\),

\begin{equation} \label {scaling prop}\text { the law of } (cY_{t c^{-\alpha }})_{t \ge 0} \text { under } \stP _x \text { is } \stP _{cx} \text {.} \end{equation}

Here, we mean “standard” in the sense of [4], which is to say, \(\GGt \) is a complete, right-continuous filtration, and \(Y\) has càdlàg paths and is strong Markov and quasi-left-continuous.

In the seminal paper [25], Lamperti describes a one-to-one correspondence between pssMps and Lévy processes, which we now outline. It may be worth noting that we have presented a slightly different definition of a pssMp from Lamperti's; for the connection, see [34, §0].

Let \(S(t) = \int _0^t (Y_u)^{-\alpha }\, \dd u .\) This process is continuous and strictly increasing until \(Y\) reaches zero. Let \((T(s))_{s \ge 0}\) be its inverse, and define

\[ \xi _s = \log Y_{T(s)} , \qquad s\geq 0. \]

Then \(\xi : = (\xi _s)_{s\geq 0}\) is a Lévy process started at \(\log x\), possibly killed at an independent exponential time; the law of the Lévy process and the rate of killing do not depend on the value of \(x\). The real-valued process \(\xi \) with probability laws \((\LevP _y)_{y \in \RR }\) is called the Lévy process associated to \(Y\), or the Lamperti transform of \(Y\).

An equivalent definition of \(S\) and \(T\), in terms of \(\xi \) instead of \(Y\), is given by taking \(T(s) = \int _0^s \exp (\alpha \xi _u)\, \dd u\) and \(S\) as its inverse. Then,

\begin{equation} \label {Lamp repr} Y_t = \exp (\xi _{S(t)}) \end{equation}

for all \(t\geq 0\), and this shows that the Lamperti transform is a bijection.
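The correspondence can be traced through by hand in a degenerate toy case, not taken from the paper: if \(\xi \) is the pure drift \(\xi _s = \log x_0 + bs\), then \(T(s) = x_0^\alpha (e^{\alpha b s}-1)/(\alpha b)\), and (5) gives \(Y_t = (x_0^\alpha + \alpha b t)^{1/\alpha }\), which indeed satisfies the scaling property (4). A short sketch verifying this numerically:

```python
import math

# Toy instance of the Lamperti correspondence: the "Levy process" is the
# deterministic drift xi_s = log(x0) + b*s, so everything is in closed form.
alpha, b = 0.7, 0.3   # illustrative choices

def T(s, x0):
    # T(s) = int_0^s exp(alpha * xi_u) du, in closed form for the drift
    return x0 ** alpha * (math.exp(alpha * b * s) - 1) / (alpha * b)

def S(t, x0):
    # numerical inverse of T by bisection
    lo, hi = 0.0, 1.0
    while T(hi, x0) < t:
        hi *= 2
    for _ in range(80):
        mid = (lo + hi) / 2
        if T(mid, x0) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def Y(t, x0):
    # the pssMp recovered from xi via (5): Y_t = exp(xi_{S(t)})
    return math.exp(math.log(x0) + b * S(t, x0))

# closed form for this toy case: Y_t = (x0^alpha + alpha*b*t)^(1/alpha)
for t in (0.0, 0.5, 3.0):
    assert abs(Y(t, 2.0) - (2.0 ** alpha + alpha * b * t) ** (1 / alpha)) < 1e-8

# the scaling property (4): c * Y_{t c^-alpha} started at x0 matches
# Y_t started at c * x0
c, t = 1.7, 3.0
assert abs(c * Y(t * c ** (-alpha), 2.0) - Y(t, c * 2.0)) < 1e-8
```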

Let \(T_0 = \inf \{ t > 0: Y_t = 0 \}\) be the first hitting time of the absorbing state zero. Then the large-time behaviour of \(\xi \) can be described by the behaviour of \(Y\) at \(T_0\), as follows:

  • (i) If \(T_0 = \infty \) a.s., then \(\xi \) is unkilled and either oscillates or drifts to \(+ \infty \).

  • (ii) If \(T_0 < \infty \) and \(Y_{T_0 -} = 0\) a.s., then \(\xi \) is unkilled and drifts to \(-\infty \).

  • (iii) If \(T_0 < \infty \) and \(Y_{T_0 -} > 0\) a.s., then \(\xi \) is killed.

It is proved in [25] that the events mentioned above satisfy a zero-one law independently of \(x\), and so the three possibilities above are an exhaustive classification of pssMps.

Three concrete examples of positive self-similar Markov processes related to \(\alpha \)-stable processes are treated in Caballero and Chaumont [6]. We present here the simplest case, namely that of the \(\alpha \)-stable process absorbed at zero. To this end, let \(X\) be the \(\alpha \)-stable process as defined in the introduction, and let

\[ \tau _0^- = \inf \{ t > 0 : X_t \le 0 \} . \]

Denote by \(\LSabs \) the Lamperti transform of the pssMp \(\stproca {X_t \Indic {t < \tau _0^-}}\). Then \(\LSabs \) has Lévy density

\begin{equation} \label {LSabs density} c_+ \frac {e^x}{(e^x-1)^{\alpha +1}} \Indic {x > 0} + c_- \frac {e^x}{(1-e^x)^{\alpha +1}} \Indic {x < 0} , \end{equation}

and is killed at rate \(c_-/\alpha = \frac {\Gamma (\alpha )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )}\).
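The stated rate is just the identity \(\Gamma (\alpha +1)/\alpha = \Gamma (\alpha )\) applied to \(c_-\). As a numerical aside (not part of the original text), one can also confirm the feature of Lamperti-stable processes visible in (6): the Lévy density behaves like the stable density \(c_+ x^{-(\alpha +1)}\) near the origin but has exponential tails.

```python
import math

alpha, rho = 1.5, 0.55   # illustrative admissible parameters
rhohat = 1 - rho
c_plus = math.gamma(alpha + 1) / (math.gamma(alpha * rho) * math.gamma(1 - alpha * rho))
c_minus = math.gamma(alpha + 1) / (math.gamma(alpha * rhohat) * math.gamma(1 - alpha * rhohat))

# killing rate: c_-/alpha = Gamma(alpha)/(Gamma(alpha*rhohat)Gamma(1-alpha*rhohat)),
# an instance of Gamma(alpha+1) = alpha*Gamma(alpha)
assert abs(c_minus / alpha
           - math.gamma(alpha) / (math.gamma(alpha * rhohat) * math.gamma(1 - alpha * rhohat))) < 1e-12

def levy_density(x):
    """Levy density (6) of the Lamperti-stable process xi*."""
    if x > 0:
        return c_plus * math.exp(x) / (math.exp(x) - 1) ** (alpha + 1)
    return c_minus * math.exp(x) / (1 - math.exp(x)) ** (alpha + 1)

# near 0 it looks like the stable density c_+ x^-(alpha+1) ...
x = 1e-4
assert abs(levy_density(x) / (c_plus * x ** -(alpha + 1)) - 1) < 1e-3
# ... while far from 0 it decays exponentially, like c_+ e^(-alpha x)
x = 20.0
assert abs(levy_density(x) / (c_plus * math.exp(-alpha * x)) - 1) < 1e-6
```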

We note here that in [6] the authors assume that \(X\) is symmetric when \(\alpha = 1\), which motivates the same assumption in this paper.

3. The censored process and its Lamperti transform.

We now describe the construction of the censored \(\alpha \)-stable process that will lie at the heart of our analysis, show that it is a pssMp and discuss its Lamperti transform.

Henceforth, \(X\), with probability laws \((\stP _x)_{x \in \RR }\), will denote the \(\alpha \)-stable process defined in the introduction. Define the occupation time of \((0,\infty )\),

\[ A_t = \int _0^t \Indic {X_s > 0} \, \dd s , \]

and let \(\gamma (t) = \inf \{ s \ge 0 : A_s > t \}\) be its right-continuous inverse. Define a process \((\Ych _t)_{ t \ge 0}\) by setting \(\Ych _t = X_{\gamma (t)}\), \(t\geq 0\). This is the process formed by erasing the negative components of \(X\) and joining up the gaps.
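A discrete-time caricature, with a heavy-tailed random walk standing in for \(X\) (purely illustrative, not from the paper), makes the censoring operation concrete: composing the path with the inverse \(\gamma \) of the occupation time \(A\) amounts to keeping the strictly positive values in their original order.

```python
import random

# Discrete-time illustration of path censoring: a heavy-tailed random walk
# stands in for X, and the censored path retains the strictly positive
# values in their original order, closing up the gaps.
random.seed(7)
X = [1.0]
for _ in range(2000):
    # symmetric steps with Pareto-type magnitude, a crude stable stand-in
    X.append(X[-1] + random.choice((-1.0, 1.0)) * (1 - random.random()) ** -0.5)

# A_t measures time spent in (0, infinity); X composed with the inverse
# gamma of A erases the nonpositive stretches and joins up the gaps
Ych = [x for x in X if x > 0]

assert all(x > 0 for x in Ych)
assert len(Ych) == sum(1 for x in X if x > 0)
```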

Write \(\FFt \) for the augmented natural filtration of \(X\), and \(\GG _t = \FF _{\gamma (t)}\), \(t \ge 0\).

  • Proposition 3.1 The process \(\Ych \) is strong Markov with respect to the filtration \(\GGt \) and satisfies the scaling property with self-similarity index \(\alpha \).

    • Proof. The strong Markov property follows directly from Rogers and Williams [28, III.21]. Establishing the scaling property is a straightforward exercise. □

We now make zero into an absorbing state. Define the stopping time

\[ T_0 = \inf \{ t > 0 : \Ych _t = 0 \} \]

and the process

\[ Y_t = \Ych _t \Indic {t < T_0} , \for t \ge 0 , \]

so that \(Y :=(Y_t)_{t\geq 0}\) is \(\Ych \) absorbed at zero. We call the process \(Y\) with probability laws \((\stP _x)_{x > 0}\) the path-censored \(\alpha \)-stable process.

  • Proposition 3.2.  The process \(Y\) is a pssMp with respect to the filtration \(\GGt \).

    • Proof. The scaling property follows from Proposition 3.1, and zero is evidently an absorbing state. It remains to show that \(Y\) is a standard process, and the only point which may be in doubt here is quasi-left-continuity. This follows from the Feller property, which in turn follows from scaling and the Feller property of \(X\). □

  • Remark 3.3 The definition of \(Y\) via time-change and stopping at zero bears some resemblance to a number of other constructions:

    • (a) Bertoin’s construction [1, §3.1] of the Lévy process conditioned to stay positive. The key difference here is that, when a negative excursion is encountered, instead of simply erasing it, [1] patches the last jump from negative to positive onto the final value of the previous positive excursion.

    • (b) Bogdan, Burdzy and Chen’s “censored stable process” for the domain \(D = (0,\infty )\); see [5], in particular Theorem 2.1 and the preceding discussion. Here the authors suppress any jumps of a symmetric \(\alpha \)-stable process \(X\) by which the process attempts to escape the domain, and kill the process if it reaches the boundary continuously.

    Both processes (a) and (b) are also pssMps with index \(\alpha \). These processes, together with the process \(Y\) just described, therefore represent three choices of how to restart an \(\alpha \)-stable process in a self-similar way after it leaves the positive half-line. We illustrate this in Figure 1.

(image)

Figure 1: The construction of three related processes from \(X\), the stable process: ‘B’ is the stable process conditioned to stay positive [1]; ‘BBC’ is the censored stable process [5]; and ‘KPW’ is the process \(Y\) in this work.

We now consider the pssMp \(Y\) more closely for different values of \(\alpha \in (0,2)\). Taking account of Bertoin [2, Proposition VIII.8] and the discussion immediately above it, we know that for \(\alpha \in (0, 1]\), points are polar for \(X\). That is, \(T_0 = \infty \) a.s., and so in this case \(Y = \Ych \). Meanwhile, for \(\alpha \in (1,2)\), every point is recurrent, so \(T_0 < \infty \) a.s. However, the process \(X\) makes infinitely many jumps across zero before hitting it, and so \(Y\) approaches zero continuously. In fact, it can be shown that \(\Ych \) is then the recurrent extension of \(Y\) in the spirit of [27] and [13].

Now, let \(\xi =(\xi _s)_{s\geq 0}\) be the Lamperti transform of \(Y\). That is,

\begin{equation} \xi _s = \log Y_{T(s)} , \for s \ge 0, \label {e:LT of Y} \end{equation}

where \(T\) is a time-change. As in Section 2, we will write \(\LevP _y\) for the law of \(\xi \) started at \(y \in \RR \); note that \(\LevP _y\) corresponds to \(\stP _{\exp (y)}\). The space transformation (7), together with the above comments and, for instance, the remark on p. 34 of [2], allows us to make the following distinction based on the value of \(\alpha \).

  • (i) If \(\alpha \in (0,1)\), \(T_0 = \infty \) and \(X\) (and hence \(Y\)) is transient a.s. Therefore, \(\xi \) is unkilled and drifts to \(+ \infty \).

  • (ii) If \(\alpha = 1\), \(T_0 = \infty \) and every neighbourhood of zero is an a.s. recurrent set for \(X\), and hence also for \(Y\). Therefore, \(\xi \) is unkilled and oscillates.

  • (iii) If \(\alpha \in (1,2)\), \(T_0 < \infty \) and \(Y\) hits zero continuously. Therefore, \(\xi \) is unkilled and drifts to \(- \infty \).

Furthermore, we have the following result.

  • Proposition 3.4 The Lévy process \(\xi \) is the sum of two independent Lévy processes \(\xiLS \) and \(\xiCPP \), which are characterised as follows:

    • (i) The Lévy process \(\xiLS \) has characteristic exponent

      \[ \Psi ^*(\theta ) - c_-/\alpha , \for \theta \in \RR , \]

      where \(\Psi ^*\) is the characteristic exponent of the process \(\xi ^*\) defined in Section 2. That is, \(\xiLS \) is formed by removing the independent killing from \(\xi ^*\).

    • (ii) The process \(\xiCPP \) is a compound Poisson process whose jumps occur at rate \(c_-/\alpha \).

Before beginning the proof, let us make some preparatory remarks. Let

\[ \tau = \inf \{ t > 0 : X_t < 0 \} \quad \text {and} \quad \sigma = \inf \{ t > \tau : X_t > 0 \} \]

be hitting and return times of \((-\infty ,0)\) and \((0,\infty )\) for \(X\). Note that, due to the time-change \(\gamma \), \(Y_\tau = X_\sigma \), while \(Y_{\tau -} = X_{\tau -}\). We require the following lemma.

  • Lemma 3.5 The joint law of \((X_\tau ,X_\taull ,X_\sigma )\) under \(\stP _x\) is equal to that of \((x X_\tau , x X_\taull , x X_\sigma )\) under \(\stP _1\).

    • Proof. This can be shown in a straightforward way using the scaling property. □

  • Proof of Proposition 3.4. First we note that, applying the strong Markov property to the \(\GGt \)-stopping time \(\tau \), it is sufficient to study the process \((Y_t)_{t \le \tau }\).

    It is clear that the path section \((Y_t)_{ t < \tau }\) agrees with \((X_t)_{ t < \tau _0^-}\); however, rather than being killed at time \(\tau \), the process \(Y\) jumps to a positive state. Recall now that the effect of the Lamperti transform on the time \(\tau \) is to turn it into an exponential time of rate \(c_-/\alpha \) which is independent of \((\xi _s)_{s < S(\tau )}\). This immediately yields the decomposition of \(\xi \) into the sum of \(\xiLS : = (\xiLS _s)_{s\geq 0}\) and \(\xiCPP : = (\xiCPP _s)_{s\geq 0}\), where \(\xiCPP \) is a process which jumps at the times of a Poisson process with rate \(c_-/\alpha \), but whose jumps may depend on the position of \(\xi \) prior to this jump. What remains to be shown is that the values of the jumps of \(\xiCPP \) are also independent of \(\xiLS \).

    By the remark at the beginning of the proof, it is sufficient to show that the first jump of \(\xiCPP \) is independent of the previous path of \(\xiLS \). Now, using only the independence of the jump times of \(\xiLS \) and \(\xiCPP \), we can compute

    \begin{align*} \jump Y_{\tau } := Y_{\tau } - Y_{\tau -} &= \exp (\xiLS _{S(\tau )} + \xiCPP _{S(\tau )}) - \exp (\xiLS _{S(\tau )-} + \xiCPP _{S(\tau ) -}) \\ &= \exp (\xi _{S(\tau )-}) \bigl [ \exp (\jump \xiCPP _{S(\tau )}) - 1 \bigr ] \\ &= X_{\taull } \bigl [ \exp (\jump \xiCPP _{S(\tau )}) - 1 \bigr ] , \end{align*} where \(S\) is the Lamperti time change for \(Y\), and \(\jump \xiCPP _s = \xiCPP _s - \xiCPP _{s-}\). Now,

    \[ \exp (\jump \xiCPP _{S(\tau )})= 1 + \frac {\jump Y_{\tau }}{X_{\taull }} = 1 + \frac {X_\sigma - X_{\taull }}{X_\taull } = \frac {X_\sigma }{X_\taull }. \]

    Hence, it is sufficient to show that \(\frac {X_\sigma }{X_\taull }\) is independent of \((X_t, t < \tau )\). The proof of this is essentially the same as that of part (iii) in Theorem 4 from Chaumont, Panti and Rivero [11], which we reproduce here for clarity.

    First, observe that one consequence of Lemma 3.5 is that, for \(g\) a Borel function and \(x > 0\),

    \[ \stE _x \biggl [ g\biggl ( \frac {X_\sigma }{X_\taull } \biggr )\biggr ] = \stE _1 \biggl [ g\biggl ( \frac {X_\sigma }{X_\taull } \biggr ) \biggr ] . \]

    Now, fix \(n \in \NN \), \(f\) and \(g\) Borel functions and \(s_1 < s_2 < \dotsb < s_n = t\). Then, using the Markov property and the above equality,

    \begin{align*} \stE _1 \biggl [ f(X_{s_1}, \dotsc , X_t) g\biggl ( \frac {X_\sigma }{X_\taull } \biggr ) \Indic {t < \tau } \biggr ] &= \stE _1 \biggl [ f(X_{s_1}, \dotsc , X_t) \Indic {t < \tau } \stE _{X_t} \biggl [ g\biggl ( \frac {X_\sigma }{X_\taull } \biggr ) \biggr ] \biggr ] \\ &= \stE _1 \biggl [ f(X_{s_1}, \dotsc , X_t) \Indic {t < \tau } \biggr ] \stE _{1} \biggl [ g\biggl ( \frac {X_\sigma }{X_\taull } \biggr ) \biggr ]. \end{align*} We have now shown that \(\xiLS \) and \(\xiCPP \) are independent, and this completes the proof. □

  • Remark 3.6.  Let us consider the effect of the Lamperti transform on each of the pssMps in Remark 3.3. For the process conditioned to stay positive, the associated Lévy process is the process \(\xiup \) defined in Caballero and Chaumont [6]. As regards the censored stable process in \((0,\infty )\), we can reason as in the above proposition to deduce that its Lamperti transform is simply the process \(\xiLS \) which we have just defined.

4. Jump distribution of the compound Poisson component. In this section, we express the jump distribution of \(\xiCPP \) in terms of known quantities, and hence derive its characteristic function and density.

Before stating a necessary lemma, we establish some more notation. Let \(\hat X\) be an independent copy of the dual process \(-X\) and denote its probability laws by \((\stPhat _x)_{x \in \RR }\), and let

\[ \hat \tau = \inf \{ t > 0 : \hat X_t < 0\} . \]

Furthermore, we shall denote by \(\jump \xiCPP \) the random variable whose law is the same as the jump distribution of \(\xiCPP \).

  • Lemma 4.1 The random variable \(\exp (\jump \xiCPP )\) is equal in distribution to

    \[ \biggl ( - \frac {X_\tau }{X_{\taull }} \biggr ) \Bigl ( - \hat X_{\hat \tau } \Bigr ) , \]

    where \(X\) and \(\hat X\) are taken to be independent with respective laws \(\stP _1\) and \(\stPhat _1\).

    • Proof. In the proof of Proposition 3.4, we saw that

      \begin{equation} \exp (\jump \xiCPP _{S(\tau )}) = \frac {X_\sigma }{X_\taull } . \label {decomp expr1} \end{equation}

      Applying the Markov property, and then using Lemma 3.5 with the \(\alpha \)-self-similar process \(\hat X\), we obtain

      \begin{align*} \stP _1(X_\sigma \in \cdot \vert \FF _\tau ) &= \stPhat _{-y}(-{\hat X}_{\hat \tau } \in \cdot )\big \vert _{y = X_\tau } \\ &= \stPhat _1(y{\hat X}_{\hat \tau } \in \cdot )\big \vert _{y = X_\tau }. \end{align*} Then, by disintegration,

      \begin{align*} \stE _{1}\biggl [f\biggl (\frac {X_\sigma }{X_\taull }\biggr )\biggr ] = \stE _{1}\biggl [ \stE _{1}\biggl [ f\biggl (\frac {X_\sigma }{X_\taull }\biggr ) \bigg \vert \FF _\tau \biggr ] \biggr ] &= \stE _1 \biggl [ \int f\biggl (\frac {x}{X_\taull }\biggr ) \stPcsx {1}{X_\sigma \in \dd x}{\FF _\tau } \biggr ] \\ &= \stE _1 \biggl [ \int f\biggl (\frac {x}{X_\taull }\biggr ) \stPhat _1 \bigl [y \hat X_{\hat \tau } \in \dd x \bigr ]\big \vert _{y = X_{\tau }} \biggr ] \\ &= \stE _1\biggl [ \stEhat _1\biggl [ f\biggl (\frac {y\hat X_{\hat \tau }}{z}\biggr ) \biggr ]\bigg \vert _{y = X_{\tau }, \, z = X_{\taull }} \biggr ] \\ &= \stE _1 \otimes \stEhat _1 \biggl [ f\biggl (\frac {X_\tau \hat X_{\hat \tau }}{X_\taull }\biggr ) \biggr ] . \end{align*} Combining this with (8), we obtain that the law under \(\stP _1\) of \(\exp \bigl (\jump \xiCPP _{S(\tau )}\bigr )\) is equal to that of \(\dfrac {X_\tau \hat X_{\hat \tau }}{X_\taull }\) under \(\stP _1 \otimes \stPhat _1\), which establishes the claim. □

The characteristic function of \(\jump \xiCPP \) can now be found by rewriting the expression in Lemma 4.1 in terms of overshoots and undershoots of stable Lévy processes, whose marginal and joint laws are given in Rogozin [30] and Doney and Kyprianou [12]. Following the notation of [12], let

\[ \tau _a^+ = \inf \{ t > 0 : X_t > a \} , \]

and let \(\hat \tau _a^+\) be defined similarly for \(\hat X\).

  • Proposition 4.2.  The characteristic function of the jump distribution of \(\xiCPP \) is given by

    \begin{equation} \label {jump cf}\LevE _0 \bigl [ \exp \bigl (\iu \theta \jump \xiCPP \bigr ) \bigr ] = \frac {\sin (\pi \alpha \rho )}{\pi \Gamma (\alpha )} \Gamma (1-\alpha \rho + \iu \theta ) \Gamma (\alpha \rho - \iu \theta ) \Gamma (1 + \iu \theta ) \Gamma (\alpha - \iu \theta ). \end{equation}

    • Proof. In the course of the coming computations, we will make repeated use of the beta integral,

      \[ \int _0^1 s^{x-1} (1-s)^{y-1} \, \dd s = \int _0^\infty \frac {t^{x-1}}{(1+t)^{x+y}}\, \dd t = \frac {\Gamma (x) \Gamma (y)}{\Gamma (x+y)} \text {,} \for \Re x, \, \Re y > 0. \]

      See for example [17, formulas 8.380.1–3].
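Purely as a numerical sanity check (no part of the argument), the beta integral can be verified in Python with a simple midpoint rule; the test points below are arbitrary values with \(x, y \ge 1\).

```python
import math

def beta_integral(x, y, n=100000):
    # midpoint rule for the integral of s^(x-1) * (1-s)^(y-1) over (0, 1)
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** (x - 1) * (1 - (k + 0.5) * h) ** (y - 1)
                   for k in range(n))

def beta_gamma(x, y):
    # right-hand side: Gamma(x) Gamma(y) / Gamma(x + y)
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

checks = [(1.5, 2.5), (2.0, 3.0), (1.0, 1.0)]
errs = [abs(beta_integral(x, y) - beta_gamma(x, y)) for x, y in checks]
assert max(errs) < 1e-6
```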

      Now, for \(\theta \in \RR \),

      \begin{equation} \label {jd 1} \begin {split} \stEhat _1\biggl (-\hat X_{\hat \tau }\biggr )^{\iu \theta }&= \stE _0 \biggl ( X_{\tau _1^+} - 1 \biggr )^{\iu \theta } = \frac {\sin (\pi \alpha \rho )}{\pi } \int _0^\infty t^{\iu \theta - \alpha \rho } (1+t)^{-1}\, \dd t \\ &= \frac {\sin (\pi \alpha \rho )}{\pi } \Gamma (1-\alpha \rho + \iu \theta ) \Gamma (\alpha \rho - \iu \theta ). \end {split} \end{equation}
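The final equality above is the beta integral with complex parameters \(x = 1-\alpha \rho + \iu \theta \), \(y = \alpha \rho - \iu \theta \). Purely as a numerical sanity check, it can be verified in Python using a Lanczos approximation of the gamma function at complex arguments; the parameter values below are arbitrary admissible choices.

```python
import cmath
import math

# Lanczos approximation (g = 7) for the gamma function at complex arguments
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:
        # reflection formula to reach the half-plane Re z >= 1/2
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    s = _C[0] + sum(_C[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * s

alpha, rho, theta = 0.8, 0.6, 0.7   # arbitrary admissible values
ar = alpha * rho

# integral of t^(i*theta - ar) / (1 + t) over (0, inf), via the substitution t = e^w
n, L = 200000, 60.0
h = 2 * L / n
integral = h * sum(cmath.exp((1 - ar + 1j * theta) * (-L + (k + 0.5) * h))
                   / (1 + math.exp(-L + (k + 0.5) * h)) for k in range(n))

expected = cgamma(1 - ar + 1j * theta) * cgamma(ar - 1j * theta)
err = abs(integral - expected)
assert err < 1e-5
```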

      Furthermore,

      \begin{equation} \label {iteration} \begin {split} \stE _1 \biggl ( -\frac {X_\tau }{X_\taull } \biggr )^{\iu \theta } &= \stEhat _0 \biggl ( \frac {\hat X_{\hat \tau _1^+} - 1}{1 - \hat X_{\hat \tau _1^+ -}} \biggr )^{\iu \theta } \\ &= K \int _0^1 \int _y^\infty \int _0^\infty \frac { u^{\iu \theta } (1-y)^{\alpha \hat {\rho } -1} (v-y)^{\alpha \rho -1} } { v^{\iu \theta } (v+u)^{1+\alpha } } \, \dd u \, \dd v \, \dd y, \end {split} \end{equation}

      where \(K = \frac {\sin (\pi \alpha \rhohat )}{\pi } \frac {\Gamma (\alpha +1)}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )}\). For the innermost integral above we have

      \[ \int _0^\infty \frac {u^{\iu \theta }}{(u+v)^{1+\alpha }}\, \dd u \overset {w=u/v}{=} v^{\iu \theta - \alpha } \int _0^\infty \frac {w^{\iu \theta }}{(1+w)^{1+\alpha }}\, \dd w = v^{\iu \theta - \alpha } \frac {\Gamma (\iu \theta + 1)\Gamma (\alpha - \iu \theta )}{\Gamma (\alpha +1)} . \]

      The next iterated integral in (11) becomes, substituting \(z=v/y-1\),

      \[ \int _y^\infty v^{-\alpha } (v-y)^{\alpha \rho - 1}\, \dd v = y^{-\alpha \rhohat } \int _0^\infty \frac {z^{\alpha \rho -1}}{(1+z)^\alpha } \, \dd z = y^{-\alpha \rhohat } \frac {\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )}{\Gamma (\alpha )} , \]

      and finally it remains to calculate

      \[ \int _0^1 y^{-\alpha \rhohat }(1-y)^{\alpha \rhohat -1} \, \dd y = \Gamma (1-\alpha \rhohat )\Gamma (\alpha \rhohat ) . \]

      Multiplying together these expressions and using the reflection identity
      \(\Gamma (x)\Gamma (1-x) = \pi /\sin (\pi x)\), we obtain

      \begin{equation} \stE _1 \biggl ( - \frac {X_{\tau }}{X_{\tau \! -}} \biggr )^{\iu \theta } = \frac {\Gamma (\iu \theta + 1)\Gamma (\alpha - \iu \theta )}{\Gamma (\alpha )} . \label {jd 2} \end{equation}

      The result now follows from Lemma 4.1 by multiplying (10) and (12) together. □

  • Remark 4.3.  The recent work of Chaumont, Panti and Rivero [11] on the so-called Lamperti-Kiu processes can be applied to give the same result. The quantity \(\jump \xiCPP \) in the present work corresponds to the independent sum \(\xi ^-_\zeta + U^+ + U^-\) in that paper, where \(U^+\) and \(U^-\) are “log-Pareto” random variables and \(\xi ^-\) is the Lamperti-stable process corresponding to \(\hat X\) absorbed below zero; see [11, Corollary 11] for details. It is straightforward to show that the characteristic function of this sum is equal to the right-hand side of (9).

It is now possible to deduce the density of the jump distribution from its characteristic function. Substituting \(t = e^x\) on the left-hand side and using the beta integral, it can be shown that

\begin{align*} \int _{-\infty }^{\infty } e^{\iu \theta x}\, \alpha e^x (1+e^x)^{-(\alpha + 1)}\, \dd x &= \frac {\Gamma (1+\iu \theta ) \Gamma (\alpha - \iu \theta )}{\Gamma (\alpha )} , \\ \int _{-\infty }^{\infty } e^{\iu \theta x}\, \frac {\sin (\pi \alpha \rho )}{\pi } e^{(1-\alpha \rho )x} (1+e^x)^{-1}\, \dd x &= \frac {\sin (\pi \alpha \rho )}{\pi } \Gamma (\alpha \rho -\iu \theta ) \Gamma (1-\alpha \rho +\iu \theta ) , \end{align*} and so the density of \(\jump \xiCPP \) can be seen as the convolution of these two functions. Moreover, it is even possible to calculate this convolution directly:

\begin{align} &\LevP _0\bigl (\jump \xiCPP \in \dd x\bigr )/\dd x \nonumber \\ &{} = \frac {\alpha }{\Gamma (\alpha \rho )\Gamma (1-\alpha \rho )} \int _{-\infty }^{\infty } e^u (1+e^u)^{-(\alpha +1)} e^{(1-\alpha \rho )(x-u)} (1+e^{x-u})^{-1} \, \dd u \nonumber \\ &{} = \frac {\alpha }{\Gamma (\alpha \rho )\Gamma (1-\alpha \rho )} e^{-\alpha \rho x} \int _0^\infty t^{\alpha \rho } (1+t)^{-(\alpha +1)} (te^{-x} + 1)^{-1}\, \dd t \nonumber \\ &{} = \frac {\alpha \Gamma (\alpha \rho +1) \Gamma (\alpha \rhohat + 1)}{\Gamma (\alpha \rho )\Gamma (1-\alpha \rho )\Gamma (\alpha +2)} e^{-\alpha \rho x} \Ghg {1}{\alpha \rho +1}{\alpha +2}{1-e^{-x}} , \label {jump density} \end{align} where the final line follows from [17, formula 3.197.5], and is to be understood in the sense of analytic continuation when \(x < 0\).
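Purely as a numerical sanity check of the final equality (the instance of [17, formula 3.197.5] used here, for \(x > 0\)), the remaining \(t\)-integral on the second line can be compared with \(\frac {\Gamma (\alpha \rho +1)\Gamma (\alpha \rhohat +1)}{\Gamma (\alpha +2)} \Ghg {1}{\alpha \rho +1}{\alpha +2}{1-e^{-x}}\), evaluating \(\Ghgsymb \) by its power series; the parameter values below are arbitrary.

```python
import math

alpha, rho = 1.4, 0.6          # arbitrary admissible values
rhohat = 1 - rho
x = 0.8
z = 1 - math.exp(-x)

def hyp2f1_first_unit(b, c, z, terms=500):
    # 2F1(1, b; c; z) by its power series; (1)_n = n! cancels the n! in the series
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (b + n) / (c + n) * z
    return s

# integral of t^(alpha*rho) (1+t)^(-(alpha+1)) (1 + t e^{-x})^(-1) over (0, inf),
# rewritten over (0, 1) via t = s / (1 - s)
n = 200000
h = 1.0 / n
integral = 0.0
for k in range(n):
    s = (k + 0.5) * h
    integral += s ** (alpha * rho) * (1 - s) ** (alpha * rhohat) / ((1 - s) + s * math.exp(-x))
integral *= h

closed_form = (math.gamma(alpha * rho + 1) * math.gamma(alpha * rhohat + 1)
               / math.gamma(alpha + 2)) * hyp2f1_first_unit(alpha * rho + 1, alpha + 2, z)
err = abs(integral - closed_form)
assert err < 1e-6
```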

5. Wiener-Hopf factorisation.

We begin with a brief sketch of the Wiener-Hopf factorisation for Lévy processes, and refer the reader to [21, Chapter 6] or [2, VI.2] for further details, including proofs.

The Wiener-Hopf factorisation describes the characteristic exponent of a Lévy process in terms of the Laplace exponents of two subordinators. For our purposes, a subordinator is defined as an increasing Lévy process, possibly killed at an independent exponentially distributed time and sent to the cemetery state \(+\infty \). If \(H\) is a subordinator with expectation operator \(\LevE \), we define its Laplace exponent \(\phi \) by the equation

\[ \LevE \bigl [ \exp (-\lambda H_1)\bigr ] = \exp (-\phi (\lambda )), \for \lambda \ge 0 . \]

Standard theory allows us to analytically extend \(\phi (\lambda )\) to \(\{\lambda \in \mathbb {C}: \Re \lambda \geq 0\}\). Similarly, let \(\xi \) be a Lévy process, again with expectation \(\LevE \), and denote its characteristic exponent by \(\CE \), so that

\[ \LevE \bigl [ \exp (\iu \theta \xi _1) \bigr ] = \exp (-\CE (\theta )) , \for \theta \in \RR . \]

The Wiener-Hopf factorisation of \(\xi \) consists of the decomposition

\begin{equation} \label {the WHF}k \CE (\theta ) = \kappa (-\iu \theta ) \hat \kappa ( \iu \theta ) , \for \theta \in \RR , \end{equation}

where \(k > 0\) is a constant which may, without loss of generality, be taken equal to unity, and the functions \(\kappa \) and \(\hat \kappa \) are the Laplace exponents of certain subordinators which we denote \(H\) and \(\hat H\).

Any decomposition of the form (14) is unique, up to the constant \(k\), provided that the functions \(\kappa \) and \(\hat \kappa \) are Laplace exponents of subordinators. The exponents \(\kappa \) and \(\hat \kappa \) are termed the Wiener-Hopf factors of \(\xi \).

The subordinator \(H\) can be identified in law as an appropriate time change of the running maximum process \(\bar \xi : = (\bar \xi _t)_{t\geq 0}\), where \(\bar \xi _t = \sup \{ \xi _s, \, s \le t\}\). In particular, the ranges of \(H\) and \(\bar \xi \) are the same. Similarly, \(\hat H\) is equal in law to an appropriate time change of \(-\underline {\xi }: = (-\underline {\xi }_t)_{t\geq 0}\), with \(\underline \xi _t = \inf \{\xi _s, \, s \le t \}\), and they have the same range. Intuitively speaking, \(H\) and \(\hat H\) keep track of how \(\xi \) reaches its new maxima and minima, and they are therefore termed the ascending and descending ladder height processes associated to \(\xi \).

In Sections 5.4 and 5.5 we shall deduce in explicit form the Wiener-Hopf factors of \(\xi \) from its characteristic exponent. Analytically, we will need to distinguish the cases \(\alpha \in (0,1]\) and \(\alpha \in (1,2)\); in probabilistic terms, these correspond to the regimes where \(X\) cannot and can hit zero, respectively.

Accordingly, the outline of this section is as follows. We first introduce two classes of Lévy processes and two transformations of subordinators which will be used to identify the process \(\xi \) and the ladder processes \(H,\hat H\). We then present two subsections with the same structure: first a theorem identifying the factorisation and the ladder processes, and then a proposition collecting some further details of important characteristics of the ladder height processes, which will be used in the applications.

5.1. Hypergeometric Lévy processes.

A process is said to be a hypergeometric Lévy process with parameters \((\beta ,\gamma ,\hat \beta ,\hat \gamma )\) if it has characteristic exponent

\[ \frac {\Gamma (1-\beta +\gamma -\iu \theta )}{\Gamma (1-\beta -\iu \theta )} \frac {\Gamma (\hat \beta +\hat \gamma +\iu \theta )}{\Gamma (\hat \beta + \iu \theta )} , \for \theta \in \RR \]

and the parameters lie in the admissible set

\[ \bigl \{ \beta \le 1, \, \gamma \in (0,1), \, \hat \beta \ge 0, \, \hat \gamma \in (0,1) \bigr \} . \]

In Kuznetsov and Pardo [20] the authors derive the Lévy measure and Wiener-Hopf factorisation of such a process, and show that the processes \(\xi ^*\), \(\xi ^\uparrow \) and \(\xi ^\downarrow \) of Caballero and Chaumont [6] belong to this class; these are, respectively, the Lévy processes appearing in the Lamperti transform of the \(\alpha \)-stable process absorbed at zero, conditioned to stay positive and conditioned to hit zero continuously.

5.2. Lamperti-stable subordinators.

A Lamperti-stable subordinator is characterised by parameters in the admissible set

\[ \{ (q, \mathtt {a}, \beta , c, \LSdrift ) : \mathtt {a} \in (0,1),\ \beta \leq 1+\mathtt {a}, \, q, c, \LSdrift \ge 0 \} , \]

and it is defined as the (possibly killed) increasing Lévy process with killing rate \(q\), drift \(\LSdrift \), and Lévy density

\[ c \frac {e^{\beta x}}{(e^x-1)^{\mathtt {a}+1}}, \for x > 0 . \]

It is simple to see from [7, Theorem 3.1] that the Laplace exponent of such a process is given, for \(\lambda \ge 0\), by

\begin{equation} \label {LSS LE}\Phi (\lambda ) = q+ \LSdrift \lambda - c \Gamma (-\mathtt {a}) \left ( \frac {\Gamma (\lambda + 1 - \beta + \mathtt {a})}{\Gamma (\lambda + 1 - \beta )} - \frac {\Gamma (1-\beta +\mathtt {a})}{\Gamma (1-\beta )} \right ). \end{equation}
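Purely as a numerical sanity check of (15), the closed form can be compared with the defining expression \(q + \LSdrift \lambda + \int _0^\infty (1-e^{-\lambda x}) \, \Pi (\dd x)\) for the Laplace exponent of a killed subordinator, at an arbitrary admissible parameter choice.

```python
import math

# arbitrary admissible parameters: a in (0,1), beta <= 1 + a, q, c, d >= 0
q, a, beta, c, d = 0.3, 0.5, 0.7, 1.2, 0.1
lam = 1.5

# closed form (15)
phi_closed = q + d * lam - c * math.gamma(-a) * (
    math.gamma(lam + 1 - beta + a) / math.gamma(lam + 1 - beta)
    - math.gamma(1 - beta + a) / math.gamma(1 - beta))

# q + d*lam + c * integral of (1 - e^{-lam x}) e^{beta x} (e^x - 1)^{-(a+1)} dx,
# with the substitution x = u^2 to tame the singularity at the origin
n, U = 200000, 8.0
h = U / n
integral = 0.0
for k in range(n):
    u = (k + 0.5) * h
    x = u * u
    integral += -math.expm1(-lam * x) * math.exp(beta * x) * math.expm1(x) ** (-(a + 1)) * 2 * u
integral *= h
phi_integral = q + d * lam + c * integral

err = abs(phi_closed - phi_integral)
assert err < 1e-5
```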

5.3. Esscher and \(\Ttrans _{\beta }\) transformations and special Bernstein functions. The Lamperti-stable subordinators just introduced will not be sufficient to identify the ladder processes associated to \(\xi \) in the case \(\alpha \in (1,2)\). We therefore introduce two transformations of subordinators in order to expand our repertoire of processes.

The first of these is the classical Esscher transformation, a generalisation of the Cameron-Martin-Girsanov transformation of Brownian motion. The second, the \(\Ttrans _\beta \) transformation, is more recent, but we will see that, in the cases we are concerned with, it is closely connected to the Esscher transform. We refer the reader to [21, §3.3] and [23, §2] respectively for details.

The following result is classical.

  • Lemma 5.1.  Let \(H\) be a subordinator with Laplace exponent \(\phi \), and let \(\beta > 0\). Define the function

    \[ \EsscherT _\beta \phi (\lambda ) = \phi (\lambda + \beta ) - \phi (\beta ) , \for \lambda \ge 0 . \]

    Then, \(\EsscherT _\beta \phi \) is the Laplace exponent of a subordinator, known as the Esscher transform of \(H\) (or of \(\phi \)).

    The Esscher transform of \(H\) has no killing and the same drift coefficient as \(H\), and if the Lévy measure of \(H\) is \(\Pi \), then its Esscher transform has Lévy measure \(e^{-\beta x} \Pi (\dd x)\).
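As an illustration (not taken from the text), consider the stable subordinator with \(\phi (\lambda ) = \lambda ^{\mathtt {a}}\), whose Lévy density is \(\mathtt {a} x^{-1-\mathtt {a}}/\Gamma (1-\mathtt {a})\). The description of the Esscher transform can then be checked numerically:

```python
import math

a, beta, lam = 0.5, 0.4, 1.1   # arbitrary values with a in (0,1), beta > 0

# Esscher transform of phi(lam) = lam^a: phi(lam + beta) - phi(beta)
esscher = (lam + beta) ** a - beta ** a

# Laplace exponent of the subordinator with the tilted Levy measure
# e^{-beta x} Pi(dx), where Pi(dx)/dx = a x^{-1-a} / Gamma(1-a);
# substitution x = u^2 tames the singularity at the origin
n, U = 200000, 8.0
h = U / n
integral = 0.0
for k in range(n):
    u = (k + 0.5) * h
    x = u * u
    integral += -math.expm1(-lam * x) * math.exp(-beta * x) * x ** (-1 - a) * 2 * u
integral *= h * a / math.gamma(1 - a)

err = abs(esscher - integral)
assert err < 1e-5
```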

Before giving the next theorem, we need to introduce the notions of special Bernstein function and conjugate subordinators, first defined by Song and Vondraček [33]. Consider a function \(\phi \colon [0,\infty ) \to \RR \), and define \(\phi ^* \colon [0,\infty ) \to \RR \) by

\[ \phi ^*(\lambda ) = \lambda /\phi (\lambda ) . \]

The function \(\phi \) is called a special Bernstein function if both \(\phi \) and \(\phi ^*\) are the Laplace exponents of subordinators. In this case, \(\phi \) and \(\phi ^*\) are said to be conjugate to one another, as are their corresponding subordinators.

  • Proposition 5.2.  Let \(H\) be a subordinator with Laplace exponent \(\phi \), and let \(\beta > 0\). Define

    \begin{equation} \label {eq:Tbeta}\Ttrans _\beta \phi (\lambda ) = \frac {\lambda }{\lambda +\beta } \phi (\lambda +\beta ) , \for \lambda \ge 0. \end{equation}

    Then \(\Ttrans _\beta \phi \) is the Laplace exponent of a subordinator with no killing and the same drift coefficient as \(H\).

    Furthermore, if \(\phi \) is a special Bernstein function conjugate to \(\phi ^*\), then \(\Ttrans _\beta \phi \) is a special Bernstein function conjugate to

    \[ \EsscherT _\beta \phi ^* + \phi ^*(\beta ) . \]

  • Proof. The first assertion is proved in Gnedin [16, p. 124] as the result of a path transformation, and directly, for spectrally negative Lévy processes (from which the case of subordinators is easily extracted) in Kyprianou and Patie [23]. The killing rate and drift coefficient can be read off as \(\Ttrans _\beta \phi (0)\) and \(\lim _{\lambda \to \infty } \Ttrans _\beta \phi (\lambda )/\lambda \).

    The second claim can be seen immediately by rewriting (16) as

    \[ \Ttrans _\beta \phi (\lambda ) = \frac {\lambda }{\phi ^*(\lambda +\beta )} \]

    and observing that \(\phi ^*(\lambda +\beta ) = \EsscherT _\beta \phi ^*(\lambda ) + \phi ^*(\beta )\) for \(\lambda \ge 0\). □
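The conjugacy assertion can be sanity-checked numerically with the special Bernstein function \(\phi (\lambda ) = \lambda ^{\mathtt {a}}\), for which \(\phi ^*(\lambda ) = \lambda ^{1-\mathtt {a}}\) (an example not taken from the text):

```python
import math

a, beta = 0.3, 0.8
phi = lambda l: l ** a             # Laplace exponent of a stable subordinator
phistar = lambda l: l ** (1 - a)   # its conjugate: phistar(l) = l / phi(l)

T_beta = lambda l: l / (l + beta) * phi(l + beta)                          # (16)
conjugate = lambda l: (phistar(l + beta) - phistar(beta)) + phistar(beta)  # Esscher plus killing

# conjugate pairs multiply to the identity: T_beta(phi)(l) * conjugate(l) = l
max_err = max(abs(T_beta(l) * conjugate(l) - l) for l in [0.1, 1.0, 5.0])
assert max_err < 1e-12
```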

5.4. Wiener-Hopf factorisation for \(\alpha \in (0,1]\).
  • Theorem 5.3 (Wiener-Hopf factorisation)

    • (i) When \(\alpha \in (0,1]\), the Wiener-Hopf factorisation of \(\xi \) has components

      \begin{equation*} \kappa (\lambda ) = \frac {\Gamma (\alpha \rho +\lambda )}{\Gamma (\lambda )}, \qquad \hat \kappa (\lambda ) = \frac {\Gamma (1-\alpha \rho +\lambda )} {\Gamma (1-\alpha +\lambda )} , \for \lambda \ge 0. \end{equation*}

      Hence, \(\xi \) is a hypergeometric Lévy process with parameters

      \[ \bigl (\beta ,\gamma ,\hat \beta ,\hat \gamma \bigr ) = \bigl (1, \alpha \rho , 1-\alpha , \alpha \rhohat \bigr ). \]

    • (ii) The ascending ladder height process is a Lamperti-stable subordinator with parameters

      \[ \bigl (q, \mathtt {a}, \beta , c, \LSdrift \bigr ) = \left (0, \alpha \rho , \, 1, \, -\frac {1}{\Gamma (-\alpha \rho )}, \, 0 \right ). \]

    • (iii) The descending ladder height process is a Lamperti-stable subordinator with parameters

      \[ \bigl (q, \mathtt {a}, \beta , c, \LSdrift \bigr ) = \left ( \frac {\Gamma (1-\alpha \rho )}{\Gamma (1-\alpha )}, \alpha \rhohat , \, \alpha , \, -\frac {1}{\Gamma (-\alpha \rhohat )}, \, 0 \right ) , \]

      when \(\alpha < 1\), and

      \[ \bigl (q, \mathtt {a}, \beta , c, \LSdrift \bigr ) = \left ( 0, \alpha \rhohat , \, \alpha , \, -\frac {1}{\Gamma (-\alpha \rhohat )}, \, 0 \right ) , \]

      when \(\alpha = 1\).

    • Proof. First we compute \(\CECPP \) and \(\CELS \), the characteristic exponents of \(\xiCPP \) and \(\xiLS \). As \(\xiCPP \) is a compound Poisson process with jump rate \(c_-/\alpha \) and jump distribution given by (9), we obtain, after using the reflection formula \(\Gamma (x)\Gamma (1-x) = \pi /\sin (\pi x)\), for \(\theta \in \RR \),

      \[ \CECPP (\theta ) = \frac {\Gamma (\alpha )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} \biggl ( 1 - \frac {\Gamma (1-\alpha \rho + \iu \theta ) \Gamma (\alpha \rho - \iu \theta ) \Gamma (1 + \iu \theta ) \Gamma (\alpha - \iu \theta ) }{\Gamma (\alpha \rho )\Gamma (1-\alpha \rho )\Gamma (\alpha ) } \biggr ) . \]

      On the other hand, [20, Theorem 1] provides an expression for the characteristic exponent \(\Psi ^*\) of the Lamperti-stable process \(\xi ^*\) from Section 2, and removing the killing from this gives us

      \[ \CELS (\theta ) = \frac {\Gamma (\alpha -\iu \theta )}{\Gamma (\alpha \rhohat - \iu \theta )} \frac {\Gamma (1+\iu \theta )}{\Gamma (1-\alpha \rhohat + \iu \theta )} - \frac {\Gamma (\alpha )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} . \]

      We can now compute, applying the reflection formula twice,

      \begin{align*} \CE (\theta ) &= \CELS (\theta ) + \CECPP (\theta ) \\ &= \Gamma (\alpha -\iu \theta )\Gamma (1+\iu \theta ) \\ &\quad {} \times \left (\frac {1}{\Gamma (\alpha \rhohat -\iu \theta )\Gamma (1-\alpha \rhohat +\iu \theta )} - \frac {\Gamma (1-\alpha \rho +\iu \theta )\Gamma (\alpha \rho -\iu \theta )} {\Gamma (\alpha \rho )\Gamma (1-\alpha \rho )\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} \right ) \\ &= \Gamma (\alpha -\iu \theta )\Gamma (1+\iu \theta ) \Gamma (1-\alpha \rho +\iu \theta )\Gamma (\alpha \rho -\iu \theta ) \\ &\quad {} \times \left ( \frac {\sin (\pi (\alpha \rhohat -\iu \theta )) \sin (\pi (\alpha \rho -\iu \theta ))}{\pi ^2} - \frac {\sin (\pi \alpha \rhohat ) \sin (\pi \alpha \rho )}{\pi ^2} \right ). \end{align*} It may be proved, using product and sum identities for trigonometric functions, that

      \[ \sin (\pi (\alpha \rhohat - \iu \theta )) \sin (\pi (\alpha \rho -\iu \theta )) + \sin (\pi \iu \theta )\sin (\pi (\alpha - \iu \theta )) = \sin (\pi \alpha \rhohat )\sin (\pi \alpha \rho ). \]
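This identity is easily checked numerically at arbitrary parameter values (a quick sanity check, not a proof):

```python
import cmath
import math

alpha, rho, theta = 1.7, 0.45, 0.9   # arbitrary admissible values
rhohat = 1 - rho
pi = math.pi

# left-hand side of the trigonometric identity, at a complex argument
lhs = (cmath.sin(pi * (alpha * rhohat - 1j * theta)) * cmath.sin(pi * (alpha * rho - 1j * theta))
       + cmath.sin(pi * 1j * theta) * cmath.sin(pi * (alpha - 1j * theta)))
rhs = math.sin(pi * alpha * rhohat) * math.sin(pi * alpha * rho)
err = abs(lhs - rhs)
assert err < 1e-9
```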

      Again using the reflection formula twice, this leads to

      \begin{align} \CE (\theta ) &= \frac {\Gamma (\alpha -\iu \theta )\Gamma (1+\iu \theta )} {\Gamma (1+\iu \theta )\Gamma (-\iu \theta )} \frac {\Gamma (\alpha \rho -\iu \theta )\Gamma (1-\alpha \rho +\iu \theta )} {\Gamma (\alpha -\iu \theta )\Gamma (1-\alpha +\iu \theta )} \nonumber \\ &= \frac {\Gamma (\alpha \rho - \iu \theta )}{\Gamma (-\iu \theta )} \times \frac {\Gamma (1 - \alpha \rho + \iu \theta )}{\Gamma (1 - \alpha + \iu \theta )} . \label {small alpha CE} \end{align} Part (i) now follows by the uniqueness of the Wiener-Hopf factorisation, once we have identified \(\kappa \) and \(\hat \kappa \) as Laplace exponents of subordinators. Substituting the parameters in parts (ii) and (iii) into the formula (15) for the Laplace exponent of a Lamperti-stable subordinator, and adding killing in the case of part (iii), completes the proof. □
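Purely as a numerical sanity check of this derivation, the identity \(\CELS (\theta ) + \CECPP (\theta ) = \kappa (-\iu \theta )\, \hat \kappa (\iu \theta )\), with \(\kappa , \hat \kappa \) as in part (i), can be verified in Python via a Lanczos approximation of the gamma function at complex arguments (arbitrary admissible parameters):

```python
import cmath
import math

# Lanczos approximation (g = 7) for the gamma function at complex arguments
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    s = _C[0] + sum(_C[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * s

alpha, rho, theta = 0.8, 0.6, 0.6   # arbitrary admissible values, alpha in (0, 1]
rhohat = 1 - rho
it = 1j * theta
g = cgamma

# characteristic exponents of the compound Poisson and Lamperti-stable parts
psi_C = (g(alpha) / (g(alpha * rhohat) * g(1 - alpha * rhohat))
         * (1 - g(1 - alpha * rho + it) * g(alpha * rho - it) * g(1 + it) * g(alpha - it)
            / (g(alpha * rho) * g(1 - alpha * rho) * g(alpha))))
psi_L = (g(alpha - it) * g(1 + it) / (g(alpha * rhohat - it) * g(1 - alpha * rhohat + it))
         - g(alpha) / (g(alpha * rhohat) * g(1 - alpha * rhohat)))

# product of the Wiener-Hopf factors from part (i)
wh_product = (g(alpha * rho - it) / g(-it)) * (g(1 - alpha * rho + it) / g(1 - alpha + it))

err = abs((psi_C + psi_L) - wh_product)
assert err < 1e-9
```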

  • Proposition 5.4

    • (i) The process \(\xi \) has Lévy density

      \[ \LD (x) = \begin {cases} - \dfrac {1}{\Gamma (1-\alpha \rhohat )\Gamma (-\alpha \rho )} e^{-\alpha \rho x} \Ghg {1+\alpha \rho }{1}{1-\alpha \rhohat }{e^{-x}}, & x > 0, \\ - \dfrac {1}{\Gamma (1-\alpha \rho )\Gamma (-\alpha \rhohat )} e^{(1-\alpha \rho ) x} \Ghg {1+\alpha \rhohat }{1}{1-\alpha \rho }{e^{x}}, & x < 0. \end {cases} \]

    • (ii) The ascending ladder height has Lévy density

      \[ \pi _H(x) = - \frac {1}{\Gamma (-\alpha \rho )} e^x (e^x-1)^{-(\alpha \rho +1)}, \for x > 0 . \]

      The ascending renewal measure \(U(\dd x) = \LevE \int _0^\infty \Indic {H_t \in \dd x} \, \dd t\) is given by

      \[ U(\dd x)/\dd x = \frac {1}{\Gamma (\alpha \rho )} (1-e^{-x})^{\alpha \rho -1} , \for x > 0 . \]

    • (iii) The descending ladder height has Lévy density

      \[ \pi _{\hat H}(x) = - \frac {1}{\Gamma (-\alpha \rhohat )} e^{\alpha x} (e^x-1)^{-(\alpha \rhohat +1)}, \for x > 0. \]

      The descending renewal measure is given by

      \[ \hat U(\dd x)/\dd x = \frac {1}{\Gamma (\alpha \rhohat )} (1-e^{-x})^{\alpha \rhohat -1} e^{-(1-\alpha )x} , \for x > 0. \]

    • Proof. The Lévy density of \(\xi \) follows from [20, Proposition 1], and the expressions for \(\pi _H\) and \(\pi _{\hat H}\) are obtained by substituting the parameters of Theorem 5.3 into the Lévy density of Section 5.2. The renewal measures can be verified using the Laplace transform identity

      \[ \int _0^\infty e^{-\lambda x} U(\dd x) = 1/\kappa (\lambda ), \for \lambda \ge 0, \]

      and the corresponding identity for the descending ladder height. □
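As a numerical sanity check of the Laplace transform identity for \(U\), take the Cauchy case \(\alpha = 1\), \(\rho = 1/2\) (chosen only because the substitution \(x = u^2\) then gives a smooth integrand):

```python
import math

alpha, rho, lam = 1.0, 0.5, 1.2   # alpha = 1 forces rho = 1/2
ar = alpha * rho

# ascending Wiener-Hopf factor from Theorem 5.3(i)
kappa = math.gamma(ar + lam) / math.gamma(lam)

# Laplace transform of U(dx)/dx = (1 - e^{-x})^(ar - 1) / Gamma(ar), via x = u^2
n, Umax = 200000, 8.0
h = Umax / n
lt = 0.0
for k in range(n):
    u = (k + 0.5) * h
    lt += math.exp(-lam * u * u) * (-math.expm1(-u * u)) ** (ar - 1) * 2 * u
lt *= h / math.gamma(ar)

err = abs(lt - 1 / kappa)
assert err < 1e-6
```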

5.5. Wiener-Hopf factorisation for \(\alpha \in (1,2)\).
  • Theorem 5.5 (Wiener-Hopf factorisation)

    • (i) When \(\alpha \in (1,2)\), the Wiener-Hopf factorisation of \(\xi \) has components

      \begin{equation*} \kappa (\lambda ) = (\alpha - 1 + \lambda ) \frac {\Gamma (\alpha \rho + \lambda )}{\Gamma (1 + \lambda )}, \qquad \hat \kappa (\lambda ) = \lambda \frac {\Gamma (1 - \alpha \rho + \lambda )}{\Gamma (2 - \alpha + \lambda )} , \for \lambda \ge 0 . \end{equation*}

    • (ii) The ascending ladder height process can be identified as the conjugate subordinator (see Section 5.3) to \(\Ttrans _{\alpha - 1}\psi ^*\), where

      \[ \psi ^*(\lambda ) = \frac {\Gamma (2-\alpha +\lambda )}{\Gamma (1-\alpha \rhohat +\lambda )} , \for \lambda \ge 0 \]

      is the Laplace exponent of a Lamperti-stable process. This Lamperti-stable process has parameters

      \[ \bigl ( q, \mathtt {a}, \, \beta , \, c , \, \LSdrift \bigr ) = \biggl (\frac {\Gamma (2-\alpha )}{\Gamma (1-\alpha \rhohat )}, 1-\alpha \rho , \, \alpha \rhohat , \, - \frac {1}{\Gamma (\alpha \rho -1)}, \, 0 \biggr ). \]

    • (iii) The descending ladder process is the conjugate subordinator to a Lamperti-stable process with Laplace exponent

      \[ \phi ^*(\lambda ) = \frac {\Gamma (2-\alpha +\lambda )}{\Gamma (1-\alpha \rho +\lambda )} , \for \lambda \ge 0, \]

      which has parameters

      \[ \bigl ( q, \mathtt {a}, \, \beta , \, c, \, \LSdrift \bigr ) = \biggl (\frac {\Gamma (2-\alpha )}{\Gamma (1-\alpha \rho )} , 1-\alpha \rhohat , \, \alpha \rho , \, - \frac {1}{\Gamma (\alpha \rhohat -1)}, \, 0 \biggr ).\]

    • Proof. Returning to the proof of Theorem 5.3(i), observe that the derivation of (17) does not depend on the value of \(\alpha \). However, the factorisation for \(\alpha \in (0,1]\) does not apply when \(\alpha \in (1,2)\) because, for example, the expression there for \(\hat \kappa \) vanishes at \(\lambda = \alpha -1>0\), which contradicts the requirement that it be the Laplace exponent of a subordinator.

      Now, applying the identity \(x \Gamma (x) = \Gamma (x+1)\) to each denominator in that expression, we obtain for \(\theta \in \RR \)

      \[ \CE (\theta ) = (\alpha - 1 - \iu \theta ) \frac {\Gamma (\alpha \rho - \iu \theta )}{\Gamma (1-\iu \theta )} \times \iu \theta \frac {\Gamma (1-\alpha \rho +\iu \theta )}{\Gamma (2-\alpha +\iu \theta )} . \]

      Once again, the uniqueness of the Wiener-Hopf factorisation is sufficient to prove part (i) once we know that \(\kappa \) and \(\hat \kappa \) are Laplace exponents of subordinators, and so we now prove (iii) and (ii), in that order.

      To prove (iii), note that Example 2 in Kyprianou and Rivero [24] shows that \(\phi ^*\) is a special Bernstein function, conjugate to \(\hat \kappa \). The fact that \(\phi ^*\) is the Laplace exponent of the given Lamperti-stable process follows, as before, by substituting the parameters in (iii) into (15).

      For (ii), first observe that

      \[ \kappa (\lambda ) = \lambda \frac {\alpha - 1 + \lambda }{\lambda } \frac {\Gamma (\alpha \rho +\lambda )}{\Gamma (1+\lambda )} = \frac {\lambda } {\Ttrans _{\alpha - 1} \psi ^*(\lambda )} . \]

      It follows again from [24, Example 2] that \(\psi ^*\) is a special Bernstein function, and then Proposition 5.2 implies that \(\Ttrans _{\alpha -1}\psi ^*\) is also a special Bernstein function, conjugate to \(\kappa \). The rest of the claim about \(\psi ^*\) follows as for part (iii). □
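As a numerical sanity check of the conjugacy relations \(\kappa (\lambda ) \cdot \Ttrans _{\alpha -1}\psi ^*(\lambda ) = \lambda \) and \(\hat \kappa (\lambda ) \cdot \phi ^*(\lambda ) = \lambda \) implicit in parts (ii) and (iii), one can evaluate both sides at arbitrary admissible parameters:

```python
import math

alpha, rho = 1.6, 0.45   # arbitrary admissible values, alpha in (1, 2)
rhohat = 1 - rho

# Wiener-Hopf factors from Theorem 5.5(i)
kappa = lambda l: (alpha - 1 + l) * math.gamma(alpha * rho + l) / math.gamma(1 + l)
kappahat = lambda l: l * math.gamma(1 - alpha * rho + l) / math.gamma(2 - alpha + l)

# Laplace exponents from parts (ii) and (iii)
psistar = lambda l: math.gamma(2 - alpha + l) / math.gamma(1 - alpha * rhohat + l)
phistar = lambda l: math.gamma(2 - alpha + l) / math.gamma(1 - alpha * rho + l)
T_psistar = lambda l: l / (l + alpha - 1) * psistar(l + alpha - 1)   # (16), beta = alpha - 1

errs = []
for lam in [0.3, 1.0, 2.5]:
    errs.append(abs(kappa(lam) * T_psistar(lam) - lam))    # H conjugate to T_{alpha-1} psi*
    errs.append(abs(kappahat(lam) * phistar(lam) - lam))   # hat H conjugate to phi*
max_err = max(errs)
assert max_err < 1e-10
```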

  • Remark 5.6.  There is another way to view the ascending ladder height, which is often more convenient for calculation. Applying the second part of Proposition 5.2, we find that

    \[ \kappa (\lambda ) = \EsscherT _{\alpha -1}\psi (\lambda ) + \psi (\alpha -1) , \]

    where \(\psi \) is conjugate to \(\psi ^*\). Hence, \(H\) can be seen as the Esscher transform of the subordinator conjugate to \(\psi ^*\), with additional killing.

  • Proposition 5.7

    • (i) The process \(\xi \) has Lévy density

      \[ \LD (x) = \begin {cases} - \dfrac {1}{\Gamma (1-\alpha \rhohat )\Gamma (-\alpha \rho )} e^{-\alpha \rho x} \Ghg {1+\alpha \rho }{1}{1-\alpha \rhohat }{e^{-x}}, & x > 0, \\ - \dfrac {1}{\Gamma (1-\alpha \rho )\Gamma (-\alpha \rhohat )} e^{(1-\alpha \rho ) x} \Ghg {1+\alpha \rhohat }{1}{1-\alpha \rho }{e^{x}}, & x < 0. \end {cases} \]

    • (ii) The ascending ladder height has Lévy density

      \[ \pi _H(x) = \frac {(e^x-1)^{-(\alpha \rho +1)}}{\Gamma (1-\alpha \rho )} \bigl ( \alpha - 1 + (1-\alpha \rhohat )e^x \bigr ) , \for x > 0. \]

      The ascending renewal measure \(U(\dd x) = \LevE \int _0^\infty \Indic {H_t \in \dd x}\, \dd t\) is given by

      \[ U(\dd x)/\dd x = e^{-(\alpha -1)x} \biggl [ \frac {\Gamma (2-\alpha )}{\Gamma (1-\alpha \rhohat )} + \frac {1-\alpha \rho }{\Gamma (\alpha \rho )} \int _x^\infty e^{\alpha \rhohat z} (e^z - 1)^{\alpha \rho -2}\, \dd z \biggr ], \]

      for \(x > 0\).

    • (iii) The descending ladder height has Lévy density

      \[ \pi _{\hat H}(x) = \frac {e^{(\alpha - 1)x} (e^x-1)^{-(\alpha \rhohat +1)}}{\Gamma (1-\alpha \rhohat )} \bigl ( \alpha - 1 + (1-\alpha \rho )e^x \bigr ) , \for x > 0. \]

      The descending renewal measure is given by

      \[ \hat U(\dd x)/\dd x = \frac {\Gamma (2-\alpha )}{\Gamma (1-\alpha \rho )} + \frac {1-\alpha \rhohat }{\Gamma (\alpha \rhohat )} \int _x^\infty e^{\alpha \rho z} (e^z - 1)^{\alpha \rhohat -2}\, \dd z , \for x > 0.\]

    • Proof. As before, we will prove (i), and then (iii) and (ii) in that order.

      (i) When \(\alpha \in (1,2)\), the process \(\xi \) no longer falls in the class of hypergeometric Lévy processes. Therefore, although the characteristic exponent \(\CE \) is the same as it was in Proposition 5.4, we can no longer rely on [20], and need to calculate the Lévy density ourselves.

      Multiplying the jump density (13) of \(\xiCPP \) by \(c_-/\alpha \), we can obtain an expression for its Lévy density \(\LDCPP \) in terms of a \(\Ghgsymb \) function. When we apply the relations [17, formulas 9.131.1–2], we obtain

      \[ \LDCPP (x) = \begin {cases} - \dfrac {1}{\Gamma (1-\alpha \rhohat )\Gamma (-\alpha \rho )} e^{-\alpha \rho x} \Ghg {1+\alpha \rho }{1}{1-\alpha \rhohat }{e^{-x}} & \\ \quad {} + \dfrac {\Gamma (\alpha +1)}{\Gamma (1+\alpha \rho )\Gamma (-\alpha \rho )} e^{-\alpha x} \Ghg {1+\alpha \rhohat }{\alpha +1}{1+\alpha \rhohat }{e^{-x}}, & x > 0, \\ -\dfrac {1}{\Gamma (1-\alpha \rho )\Gamma (-\alpha \rhohat )} e^{(1-\alpha \rho )x} \Ghg {1+\alpha \rhohat }{1}{1-\alpha \rho }{e^x} & \\ \quad {} - \dfrac {\Gamma (\alpha +1)}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} e^x \Ghg {1+\alpha \rho }{\alpha +1}{1+\alpha \rho }{e^x}, & x < 0 . \end {cases} \]

      Recall that \(\Ghg {a}{b}{a}{z} = (1-z)^{-b}\). Then, comparing with (6), the equation reads

      \[ \LDCPP (x) = \LD (x) - \LDLS (x), \quad x \ne 0 , \]

      where \(\LDLS \) is the Lévy density of \(\xiLS \). The claim then follows by the independence of \(\xiCPP \) and \(\xiLS \).

      (iii) In [24, Example 2], the authors give the tail of the Lévy measure \(\Pi _{\hat H}\), and show that it is absolutely continuous. The density \(\pi _{\hat H}\) is obtained by differentiation.

      In order to obtain the renewal measure, start with the following standard observation. For \(\lambda \ge 0\),

      \begin{equation} \int _0^\infty e^{-\lambda x} \hat U(\dd x) = \frac {1}{\hat \kappa (\lambda )} = \frac {\phi ^*(\lambda )}{\lambda } = \int _0^\infty e^{-\lambda x} \overline {\Pi }_{\phi ^*}(x)\, \dd x , \label {*} \end{equation}

      where \(\overline {\Pi }_{\phi ^*}(x) = q_{\phi ^*} + \Pi _{\phi ^*}(x,\infty )\), and \(q_{\phi ^*}\) and \(\Pi _{\phi ^*}\) are, respectively, the killing rate and Lévy measure of the subordinator corresponding to \(\phi ^*\). Comparing with Section 5.2, we have

      \[ q_{\phi ^*} = \frac {\Gamma (2-\alpha )}{\Gamma (1-\alpha \rho )}, \qquad \Pi _{\phi ^*}(\dd x)/\dd x = - \frac {1}{\Gamma (\alpha \rhohat -1)} e^{\alpha \rho x} (e^x-1)^{\alpha \rhohat -2} , \for x > 0, \]

      and substituting these back into (18) leads immediately to the desired expression for \(\hat U\).

      (ii) To obtain the Lévy density, it is perhaps easier to use the representation of \(H\) as corresponding to a killed Esscher transform, noted in Remark 5.6. As in part (iii), applying [24, Example 2] gives

      \[ \pi _{\psi }(x) = \frac {e^{(\alpha - 1)x} (e^x-1)^{-(\alpha \rho +1)}}{\Gamma (1-\alpha \rho )} \bigl ( \alpha - 1 + (1-\alpha \rhohat )e^x \bigr ) , \for x > 0, \]

      where \(\pi _{\psi }\) is the Lévy density corresponding to \(\psi (\lambda ) = \lambda /\psi ^*(\lambda )\). The effect of the Esscher transform on the Lévy measure gives

      \[ \pi _H(x) = e^{-(\alpha - 1)x} \pi _\psi (x), \for x > 0, \]

      and putting everything together we obtain the required expression.

      Emulating the proof of (iii), we calculate

      \[ \int _0^\infty e^{-\lambda x} U(\dd x) = \frac {1}{\kappa (\lambda )} = \frac {\psi ^*(\alpha - 1 + \lambda )}{\alpha - 1 + \lambda } = \int _0^\infty e^{-\lambda x} e^{-(\alpha -1) x} \overline {\Pi }_{\psi ^*}(x)\, \dd x , \]

      where the notation is analogous to that used previously; the density of \(U\) then follows. □

6. Proofs of main results.

In this section, we use the Wiener-Hopf factorisation of \(\xi \) to prove Theorems 1.1 and 1.4 and deduce Corollary 1.2. We then make use of a connection with the process conditioned to stay positive in order to prove Theorem 1.5.

Our method for proving each theorem will be to prove a corresponding result for the Lévy process \(\xi \), and to relate this to the \(\alpha \)-stable process \(X\) by means of the Lamperti transform and censoring. In this respect, the following observation is elementary but crucial. Let

\[ \tau _0^b = \inf \{ t > 0 : X_t \in (0,b) \} \]

be the first time at which \(X\) enters the interval \((0,b)\), where \(b < 1\), and

\[ S_a^- = \inf \{ s > 0 : \xi _s < a \} \]

the first passage of \(\xi \) below the negative level \(a\). Notice that, if \(e^a = b\), then

\[ S_a^- < \infty \text { and } \xi _{S_a^-} \le x \iff \tau _0^b < \infty \text { and } X_{\tau _0^b} \le e^x .\]

We will use this relationship several times.
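Indeed, this is nothing more than the censoring construction of Section 3 read at a first passage time: the censored process \(Y\) is the path of \(X\) with its negative excursions excised, so \(X\) enters \((0,b) \subset (0,\infty )\) if and only if \(Y\) does, and at the same spatial position; in turn, \(Y = \exp (\xi )\) up to the Lamperti time change, which alters times but not positions. Schematically, on the event \(\{ \tau _0^b < \infty \}\),

\[ X_{\tau _0^b} = Y_{\tau _0^b(Y)} = \exp \bigl ( \xi _{S_a^-} \bigr ) , \for e^a = b , \]

where \(\tau _0^b(Y)\) denotes the first entry time of \(Y\) into \((0,b)\), and the displayed equivalence of events follows.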

Our first task is to prove Theorem 1.1. We split the proof into two parts, based on the value of \(\alpha \). In principle, the method which we use for \(\alpha \in (0,1]\) extends to the \(\alpha \in (1,2)\) regime; however, it requires the evaluation of an integral involving the descending renewal measure. For \(\alpha \in (1,2)\) we have been unable to calculate this in closed form, and have instead used a method based on the Laplace transform. Conversely, the second method could also be applied when \(\alpha \in (0,1]\); however, it is less transparent.

  • Proof of Theorem 1.1, α ∈ (0, 1].  We begin by finding a related law for \(\xi \). By [2, Proposition III.2], for \(a < 0\),

    \begin{align*} \LevP _0(\xi _{S_a^-} \in \dd w) &= \LevP _0(- \hat H_{S_{-a}^+} \in \dd w) \\ &= \int _{[0,-a]} \hat U(\dd z) \pi _{\hat H}(-w-z) \, \dd w . \end{align*} Using the expressions obtained in Section 5 and changing variables,

    \begin{align} \LevP _0\bigl (\xi _{S_a^-} \in \dd w\bigr ) &= \frac {\alpha \rhohat e^{-\alpha w} \, \dd w} {\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} \int _0^{1-e^a} t^{\alpha \rhohat -1} ( e^{-w} - 1 - e^{-w}t)^{-\alpha \rhohat - 1} \, \dd t \nonumber \\ &= \frac {\alpha \rhohat \, \dd w} {\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} e^{-\alpha \rho w} (e^{-w}-1)^{-1} \int _0^{\frac {1-e^a}{1-e^w}} s^{\alpha \rhohat -1}(1-s)^{-\alpha \rhohat -1} \, \dd s \nonumber \\ &= \frac {1}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha \rhohat )} (1-e^a)^{\alpha \rhohat } e^{(1-\alpha \rho )w} (1-e^w)^{-1} (e^a-e^w)^{-\alpha \rhohat } \, \dd w , \label {xi passage down} \end{align} where the last equality can be reached by [17, formula 8.391] and the formula \(\Ghg {a}{b}{a}{z} = (1-z)^{-b}\).
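    Alternatively, the final equality may be verified without special-function identities: differentiation shows that, for \(b > 0\) and \(0 \le u < 1\),

    \[ \int _0^u s^{b-1} (1-s)^{-b-1} \, \dd s = \frac {1}{b} \, u^b (1-u)^{-b} , \]

    and applying this with \(b = \alpha \rhohat \) and \(u = \frac {1-e^a}{1-e^w}\), for which \(1-u = \frac {e^a - e^w}{1-e^w}\), recovers the third line from the second, the prefactor \(\alpha \rhohat \) cancelling.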

    Denoting by \(f(a, w)\) the density on the right-hand side of (19), the relationship between \(\xi _{S_a^-}\) and \(X_{\tau _0^b}\) yields that

    \[ g(b, z) := \stP _1(X_{\tau _0^b} \in \dd z)/\dd z = z^{-1} f(\log b, \log z), \for b < 1,\,\, z \in (0, b). \]

    Finally, using the scaling property we obtain

    \begin{align*} \frac {\stP _x\bigl (X_{\tau _{-1}^1} \in \dd y\bigr )}{\dd y} &= \frac {1}{x+1} g\biggl ( \frac {2}{x+1}, \frac {y+1}{x+1} \biggr ) \\ &= \frac {1}{y+1} f \Biggl ( \log \biggl (\frac {2}{x+1}\biggr ) , \log \biggl (\frac {y+1}{x+1}\biggr ) \Biggr ) \\ &= \frac {\sin (\pi \alpha \rhohat )}{\pi } (x+1)^{\alpha \rho } (x-1)^{\alpha \rhohat } (1+y)^{-\alpha \rho } (1-y)^{-\alpha \rhohat } (x-y)^{-1} , \end{align*} for \(y \in (-1,1)\). □

  • Proof of Theorem 1.1, α ∈ (1, 2).  We begin with the “second factorisation identity” [21, Exercise 6.7] for the process \(\xi \), adapted to passage below a level:

    \[ \int _{-\infty }^0 \int \exp (qa-\beta y)\, \LevP _0(a - \xi _{S_a^-}\in \dd y)\, \dd a = \frac {\hat \kappa (q) - \hat \kappa (\beta )}{(q-\beta ) \hat \kappa (q)} , \for q, \beta > 0. \]

    A lengthy calculation, which we omit, inverts the two Laplace transforms to give the overshoot distribution for \(\xi \),

    \begin{align*} f(a, w) &:= \frac {\LevP _0(a - \xi _{S_a^-} \in \dd w)}{\dd w} \\ &= \frac {\sin (\pi \alpha \rhohat )}{\pi } e^{-(1-\alpha \rho )w} (1-e^{-w})^{-\alpha \rhohat } \\ & \quad {} \times \biggl [ e^{(1-\alpha )a} (1-e^a)^{\alpha \rhohat } e^{-w} (e^{-a}-e^{-w})^{-1} \biggr . \\ & \qquad \quad \biggl . {} - (\alpha \rho - 1) \int _0^{1-e^{a}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t \biggr ] , \end{align*} for \(a < 0, w > 0\). Essentially the same argument as in the \(\alpha \in (0,1]\) case gives the required hitting distribution for \(X\),

    \begin{align} \frac {\stP _x(X_{\tau _{-1}^1} \in \dd y)}{\dd y} &= \frac {1}{y+1} f \Biggl ( \log \biggl (\frac {2}{x+1}\biggr ) , \log \biggl (\frac {2}{y+1}\biggr ) \Biggr ) \nonumber \\ &= \frac {\sin (\pi \alpha \rhohat )}{\pi } (1+y)^{-\alpha \rho } (1-y)^{-\alpha \rhohat } \label {**} \\ &\phantom {{} =} {} \times \biggl [ (y+1) (x-1)^{\alpha \rhohat } (x+1)^{\alpha \rho -1} (x-y)^{-1} \biggr . \nonumber \\ &\phantom {{} = {} \times \biggl [} {} - (\alpha \rho - 1) 2^{\alpha -1} \int _0^{\frac {x-1}{x+1}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t \biggr ], \nonumber \end{align} for \(x > 1\), \(y \in (-1,1)\).

    By the substitution \(t = \frac {s-1}{s+1}\),

    \begin{align*} &2^{\alpha -1} \int _0^{\frac {x-1}{x+1}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t = 2 \int _1^x (s-1)^{\alpha \rhohat -1} (s+1)^{\alpha \rho -2} \, \dd s \\ &\quad = \int _1^x (s-1)^{\alpha \rhohat -1} (s+1)^{\alpha \rho -1} \, \dd s - \int _1^x (s-1)^{\alpha \rhohat } (s+1)^{\alpha \rho -2} \, \dd s . \end{align*} Now evaluating the second term on the right hand side above via integration by parts and substituting back into (20) yields the required law. □
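    To spell out the omitted computation: since \(\dd \bigl ( (s+1)^{\alpha \rho -1} \bigr ) = (\alpha \rho -1)(s+1)^{\alpha \rho -2} \, \dd s\), integration by parts gives

    \[ \int _1^x (s-1)^{\alpha \rhohat } (s+1)^{\alpha \rho -2} \, \dd s = \frac {(x-1)^{\alpha \rhohat } (x+1)^{\alpha \rho -1}}{\alpha \rho -1} - \frac {\alpha \rhohat }{\alpha \rho -1} \int _1^x (s-1)^{\alpha \rhohat -1} (s+1)^{\alpha \rho -1} \, \dd s , \]

    and hence, since \(1 + \frac {\alpha \rhohat }{\alpha \rho -1} = \frac {\alpha -1}{\alpha \rho -1}\),

    \[ 2^{\alpha -1} \int _0^{\frac {x-1}{x+1}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t = \frac {\alpha -1}{\alpha \rho -1} \int _1^x (s-1)^{\alpha \rhohat -1} (s+1)^{\alpha \rho -1} \, \dd s - \frac {(x-1)^{\alpha \rhohat } (x+1)^{\alpha \rho -1}}{\alpha \rho -1} . \]

    Substituting this into (20) and using \(\frac {y+1}{x-y} + 1 = \frac {x+1}{x-y}\), the square bracket there becomes \((x+1)^{\alpha \rho } (x-1)^{\alpha \rhohat } (x-y)^{-1} - (\alpha -1) \int _1^x (s-1)^{\alpha \rhohat -1} (s+1)^{\alpha \rho -1} \, \dd s\).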

  • Remark 6.1.  It is worth noting that in recent work of Kuznetsov, Kyprianou and Pardo [19], the law of the position of first entry of a so-called Meromorphic Lévy process into an interval was computed as a convergent series of exponential densities by solving a pair of simultaneous non-linear equations; see Rogozin [29] for the original use of this method in the context of first passage of \(\alpha \)-stable processes out of a finite interval. In principle, the method of solving two simultaneous non-linear equations (that is, writing the law of first entry into \((-1,1)\) from \(x>1\) in terms of the law of first entry into \((-1,1)\) from \(x<-1\), and vice versa) may provide a way of proving Theorem 1.1. However, it is unlikely that this would present a more convenient approach, both because of the complexity of the two non-linear equations involved and because of the issue of proving uniqueness of their solution. Finally, we note that Kadankova and Veraverbeke [18] also consider the formalities of this approach when dealing with first entry into a finite interval for Lévy processes.

  • Proof of Corollary 1.2.  This will follow by integrating out Theorem 1.1. First making the substitutions \(z = (y+1)/2\) and \(w = \frac {1-z}{1-2z/(x+1)}\), we obtain

    \begin{align*} &\!\!\stP _x(\tau _{-1}^1 < \infty ) \\ &= \frac {\sin (\pi \alpha \rhohat )}{\pi } (x+1)^{\alpha \rho } (x-1)^{\alpha \rhohat } \int _{-1}^1 (1+u)^{-\alpha \rho } (1-u)^{-\alpha \rhohat } (x-u)^{-1} \, \dd u \\ &= \frac {\sin (\pi \alpha \rhohat )}{\pi } (x+1)^{\alpha \rho -1} (x-1)^{\alpha \rhohat } 2^{1-\alpha } \int _0^1 z^{-\alpha \rho } (1-z)^{-\alpha \rhohat } \biggl (1-\frac {2}{x+1} z\biggr )^{-1} \, \dd z \\ &= \frac {\sin (\pi \alpha \rhohat )}{\pi } \biggl ( \frac {2}{x+1} \biggr )^{1-\alpha } \int _0^1 w^{-\alpha \rhohat } (1-w)^{-\alpha \rho } \biggl (1-\frac {2}{x+1} w\biggr )^{\alpha -1} \, \dd w \\ &= \frac {\Gamma (1-\alpha \rho )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha )} \int _0^{\frac {2}{x+1}} s^{-\alpha } (1-s)^{\alpha \rhohat -1} \, \dd s, \end{align*} where the last line follows by [17, formulas 3.197.3, 8.391]. Finally, substituting \(t = 1-s\), it follows that

    \[ \stP _x(\tau _{-1}^1 = \infty ) = \frac {\Gamma (1-\alpha \rho )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha )} \int _0^{\frac {x-1}{x+1}} t^{\alpha \rhohat - 1} (1-t)^{-\alpha } \, \dd t , \]

    and this was our aim. □
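    As a sanity check, the normalisation is consistent: since \(1 - \alpha + \alpha \rhohat = 1 - \alpha \rho \), the Beta integral (convergent here since \(\alpha < 1\)) gives

    \[ \frac {\Gamma (1-\alpha \rho )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha )} \int _0^1 s^{-\alpha } (1-s)^{\alpha \rhohat -1} \, \dd s = \frac {\Gamma (1-\alpha \rho )}{\Gamma (\alpha \rhohat )\Gamma (1-\alpha )} \cdot \frac {\Gamma (1-\alpha ) \Gamma (\alpha \rhohat )}{\Gamma (1-\alpha \rho )} = 1 , \]

    so the two expressions for \(\stP _x(\tau _{-1}^1 < \infty )\) and \(\stP _x(\tau _{-1}^1 = \infty )\) do indeed sum to one.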

  • Proof of Proposition 1.3.  In Port [26, §3, Remark 3], the author establishes, for \(s > 0\), the hitting distribution of \([0,s]\) for a spectrally positive \(\alpha \)-stable process started at \(x < 0\). In our situation, we have a spectrally negative \(\alpha \)-stable process \(X\), and so the dual process \(\hat X\) is spectrally positive:

    \begin{align*} \stP _x(X_{\tau _{-1}^1} \in \dd y) &= \stPhat _{1-x}(\hat X_{\tau _0^2} \in 1 - \dd y) \\ &= f_{1-x}(1-y) \, \dd y + \gamma (1-x) \, \delta _{-1}(\dd y), \end{align*} using the notation from [26] in the final line. Port gives expressions for \(f_{1-x}\) and \(\gamma \) which differ somewhat from the density and atom seen in our Proposition 1.3; our expression

    \[ f_{1-x}(1-y) = \frac {\sin (\pi (\alpha -1))}{\pi } (x-1)^{\alpha -1} (1-y)^{1-\alpha } (x-y)^{-1} \Ind _{(-1,1)}(y), \]

    is obtained from Port’s by evaluating an integral, and one may compute \(\gamma (1-x)\) similarly.

    We now prove weak convergence. For this purpose, the identity (20) is more convenient than the final expression in Theorem 1.1. Let us denote the right-hand side of (20), treated as the density of a measure on \([-1,1]\), by the function \(g_\rho \colon [-1,1] \to \RR \), so that

    \begin{align*} g_\rho (y) &= \frac {\sin (\pi \alpha \rhohat )}{\pi } (x-1)^{\alpha \rhohat } (x+1)^{\alpha \rho -1} (1+y)^{1-\alpha \rho } (1-y)^{-\alpha \rhohat } (x-y)^{-1} \\ & \quad {} + (1-\alpha \rho ) \frac {\sin (\pi \alpha \rhohat )}{\pi } 2^{\alpha -1} (1+y)^{-\alpha \rho } (1-y)^{-\alpha \rhohat } \int _0^{\frac {x-1}{x+1}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t, \end{align*} for \(y \in (-1,1)\), and we set \(g_\rho (-1) = g_\rho (1) = 0\) for definiteness.

    As we take the limit \(\rho \to 1/\alpha \), \(g_\rho (y)\) converges pointwise to \(f_{1-x}(1-y)\). Furthermore, the functions \(g_\rho \) are dominated by a function \(h \colon [-1,1] \to \RR \) of the form

    \[ h(y) = C (1-y)^{1-\alpha } (x-y)^{-1} + D (1+y)^{-1} (1-y)^{1-\alpha } , \for y \in (-1,1) \]

    for some \(C,D \ge 0\) depending only on \(x\) and \(\alpha \); again we set
    \(h(-1) = h(1) = 0\).

    Let \(z > -1\). The function \(h\) is integrable on \([z,1]\), and therefore dominated convergence yields

    \[ \int _{[z,1]} g_\rho (y) \, \dd y \to \int _{[z,1]} f_{1-x}(1-y) \, \dd y = \stP _x(X_{\tau _{-1}^1} \ge z), \]

    while

    \[ \int _{[-1,1]} g_\rho (y) \, \dd y = 1 = \stP _x(X_{\tau _{-1}^1} \ge -1) , \]

    and this is sufficient for weak convergence. □

  • Proof of Theorem 1.4.  We begin by determining a killed potential for \(\xi \). Let

    \[ u(p, w) \, \dd w = \LevE _p\int _0^{S_0^-} \Indic {\xi _s \in \dd w} \, \dd s , \for p, \, w > 0, \]

    if this density exists. Using an identity of Silverstein (see Bertoin [2, Theorem VI.20], or Silverstein [32, Theorem 6]), and the fact that the renewal measures of \(\xi \) are absolutely continuous, we find that the density \(u(p,\cdot )\) does exist, and

    \[ u(p,w) = \begin {cases} \displaystyle \int _{p-w}^p \hat v(z) v(w+z-p) \, \dd z , & 0 < w < p, \\ \displaystyle \int _0^p \hat v(z) v(w+z-p) \, \dd z , & w > p, \end {cases} \]

    where \(v\) and \(\hat v\) are the ascending and descending renewal densities from Proposition 5.4. For \(w > p\),

    \begin{align*} u(p,w) &= \frac {1}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \int _0^p (1-e^{-z})^{\alpha \rhohat -1} e^{(1-\alpha )z} (1-e^{p-w} e^{-z})^{\alpha \rho -1} \, \dd z \\ &= \frac {(1-e^{p-w})^{\alpha -1}}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \biggl ( \frac {1-e^{-p}}{1-e^{-w}}\biggr )^{\alpha \rhohat } \int _0^1 s^{\alpha \rhohat -1} \biggl (1- \frac {1-e^{-p}}{1-e^{-w}}s\biggr )^{-\alpha } \, \dd s, \end{align*} where we have used the substitution \(1 - \frac {e^{-z}-e^{-p}}{1-e^{-p}} = s(1-q+qs)^{-1}\) with \(q = \frac {e^{-p}-1}{e^{w-p}-1}\). Finally we conclude that

    \[ u(p,w) = \frac {(1-e^{p-w})^{\alpha -1}}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \int _0^{\frac {1-e^{-p}}{1-e^{-w}}} t^{\alpha \rhohat -1} (1-t)^{-\alpha } \, \dd t, \for w > p. \]

    The calculation for \(0 < w < p\) is very similar, and in summary we have

    \[ u(p,w) = \begin {cases} \dfrac {(e^{p-w}-1)^{\alpha -1}}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \displaystyle \int _0^{\frac {1-e^{-w}}{1-e^{-p}}} t^{\alpha \rho -1} (1-t)^{-\alpha } \dd t , & 0 < w < p, \\ \dfrac {(1-e^{p-w})^{\alpha -1}}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \displaystyle \int _0^{\frac {1-e^{-p}}{1-e^{-w}}} t^{\alpha \rhohat -1} (1-t)^{-\alpha } \dd t , & w > p. \end {cases} \]

    We can now start to calculate the killed potential for \(X\). Let

    \[ \bar u(b,z)\, \dd z = \stE _1 \int _0^{\tau _0^b} \Indic {X_t \in \dd z} \, \dd t , \for 0 < b < 1, \, z > b . \]

    Let us recall now the censoring method and the Lamperti transform described in Section 3. We defined \(\dd A_t = \Indic {X_t > 0} \, \dd t\), denoted by \(\gamma \) the right-inverse of \(A\), and defined \(Y_t = X_{\gamma (t)}\Indic {t < T_0}\) for \(t \ge 0\). Furthermore, from the Lamperti transform, \(\dd t = \exp (\alpha \xi _{S(t)}) \, \dd S(t)\), where \(S\) is the Lamperti time change. As before, we write \(T\) for the inverse time-change to \(S\). Finally, the measure \(\stP _x\) for the stable process \(X\) (and the pssMp \(Y\)) corresponds under the Lamperti transform to the measure \(\LevP _{\log x}\); in particular, \(\stP _1\) corresponds to \(\LevP _0\), and \(\stE _1\) to \(\LevE _0\).

    With this in mind, we make the following calculation.

    \begin{align*} \bar u(b,z) \, \dd z &= \stE _1 \int _0^{\tau _0^b(X)} \Indic {X_t \in \dd z} \, \dd A_t = \stE _1 \int _0^{\tau _0^b(Y)} \Indic {Y_t \in \dd z} \, \dd t \\ &= \LevE _0 \int _0^{T(S_a^-)} \Indic {\exp (\xi _{S(t)}) \in \dd z} \exp (\alpha \xi _{S(t)}) \, \dd S(t) \\ &= z^\alpha \LevE _0 \int _0^{S_a^-} \Indic {\exp (\xi _s) \in \dd z} \, \dd s = z^\alpha \LevE _{-a} \int _0^{S_0^-} \Indic {\exp (\xi _s + a) \in \dd z} \, \dd s , \end{align*} where \(a = \log b\), and, for clarity, we have written \(\tau _0^b(Z)\) for the hitting time of \((0,b)\) calculated for a process \(Z\). Hence,

    \[ \bar u(b,z) = z^{\alpha -1} u(\log {b^{-1}}, \log {b^{-1}z}), \for 0 < b < 1, \, z > b . \]

    Finally, a scaling argument yields the following. For \(x > 1\) and \(y > 1\),

    \begin{align*} &\!\! \stE _x \int _0^{\tau _{-1}^1} \Indic {X_t \in \dd y} \, \dd t / \dd y \\ &= (x+1)^{\alpha -1} \bar u\biggl ( \frac {2}{x+1}, \, \frac {y+1}{x+1} \biggr ) \\ &= (y+1)^{\alpha -1} u\biggl ( \log \frac {x+1}{2} , \, \log \frac {y+1}{2} \biggr ) \\ &= \begin {cases} \dfrac {(x-y)^{\alpha -1}}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \dint _0^{\frac {y-1}{y+1} \frac {x+1}{x-1}} t^{\alpha \rho -1} (1-t)^{-\alpha } \, \dd t, & 1 < y < x, \\ \dfrac {(y-x)^{\alpha -1}}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} \dint _0^{\frac {y+1}{y-1} \frac {x-1}{x+1}} t^{\alpha \rhohat -1} (1-t)^{-\alpha } \, \dd t, & y > x . \end {cases} \end{align*} The integral substitution \(t = \frac {s-1}{s+1}\) gives the form in the theorem. □
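    For the record, we sketch this last substitution in the case \(y > x\) (the case \(1 < y < x\) is analogous): with \(t = \frac {s-1}{s+1}\), one has \(1-t = \frac {2}{s+1}\) and \(\dd t = \frac {2}{(s+1)^2} \, \dd s\), so

    \[ \int _0^{\frac {y+1}{y-1} \frac {x-1}{x+1}} t^{\alpha \rhohat -1} (1-t)^{-\alpha } \, \dd t = 2^{1-\alpha } \int _1^{\frac {xy-1}{y-x}} (s-1)^{\alpha \rhohat -1} (s+1)^{\alpha \rho -1} \, \dd s , \]

    the upper limit transforming according to \(s = \frac {1+t}{1-t}\).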

We now turn to the problem of first passage upward before hitting a point. To tackle this problem, we will use the stable process conditioned to stay positive. This process has been studied by a number of authors; for a general account of conditioning to stay positive, see for example Chaumont and Doney [9]. If \(X\) is the standard \(\alpha \)-stable process defined in the introduction and

\[ \tau _0^- = \inf \{ t \ge 0 : X_t < 0 \} \]

is the first passage time below zero, then the process conditioned to stay positive, denoted \(\Xup \), with probability laws \((\stPup _x)_{x > 0}\), is defined as the Doob \(h\)-transform of the killed process \(\bigl ( X_t \Indic {t < \tau _0^-}, \, t \ge 0 \bigr )\) under the invariant function

\[ h(x) = x^{\alpha \rhohat } . \]

That is, if \(T\) is any a.s. finite stopping time, \(Z\) an \(\FF _T\)-measurable random variable, and \(x > 0\), then

\[ \stEup _x(Z) = \stE _x \biggl [ Z \frac {h(X_T)}{h(x)} , \, T < \tau _0^- \biggr ] . \]

In fact we will make use of this construction for the dual process \(\hat X\), with invariant function \(\hat h(x) = x^{\alpha \rho }\), and accordingly we will denote the conditioned process by \(\Xhatup \) and use \((\stPhatup _x)_{x > 0}\) for its probability laws. It is known that the process \(\Xhatup \) is a strong Markov process which drifts to \(+\infty \).

Caballero and Chaumont [6] show that the process \(\Xhatup \) is a pssMp, and so we can apply the Lamperti transform to it. We will denote the Lévy process associated to \(\Xhatup \) by \(\xihatup \) with probability laws \(( \LevPhatup _y)_{y >0 }\). The crucial observation here is that \(\Xhatup \) hits the point \(1\) if and only if its Lamperti transform, \(\hat \xi ^\uparrow \), hits the point \(0\).

We now have all the apparatus in place to begin the proof.

  • Proof of Theorem 1.5.  For each \(y \in \RR \), let \(\tau _y\) be the first hitting time of the point \(y\), and let \(\tau _y^+\) and \(\tau _y^-\) be the first hitting times of the sets \((y,\infty )\) and \((-\infty ,y)\), respectively. When \(\alpha \in (1,2)\), these are all a.s. finite stopping times for the \(\alpha \)-stable process \(X\) and its dual \(\hat X\). Then, when \(x \in (-\infty ,1)\),

    \begin{align} \label {X-xiup} \stP _x(\tau _0 < \tau _1^+) = \stP _{x-1}(\tau _{-1} < \tau _0^+) &= \stPhat _{1-x}(\tau _1 < \tau _0^-) \nonumber \\ &= \hat h(1-x) \stEhat _{1-x}\biggl [ \Indic {\tau _1 < \infty } \frac {\hat h(\hat X_{\tau _1})}{\hat h(1-x)} , \, \tau _1 < \tau _0^- \biggr ] \nonumber \\ &= (1-x)^{\alpha \rho } \stPhatup _{1-x}(\tau _1 < \infty ), \end{align} where we have used the definition of \(\stPhatup _\cdot \) at \(\tau _1\). (Note that, to unify notation, the various stopping times refer to the canonical process for each measure.)

    We now use facts coming from Bertoin [2, Proposition II.18 and Theorem II.19]. Provided that the potential measure \(U = \LevEhatup _0 \int _0^\infty \Indic {\xihatup _t \in \cdot } \, \dd t\) is absolutely continuous and there is a bounded continuous version of its density, say \(u\), the following holds:

    \begin{equation} \stPhatup _{1-x}(\tau _1 < \infty ) = \LevPhatup _{\log (1-x)}(\tau _0 < \infty ) = C u\bigl (-\log (1-x)\bigr ), \label {HP-potential} \end{equation}

    where \(C\) is the capacity of \(\{0\}\) for the process \(\xihatup \).

    Therefore, we have reduced our problem to that of finding a bounded, continuous version of the potential density of \(\xihatup \) under \(\LevPhatup _0\). Provided the renewal measures of \(\xihatup \) are absolutely continuous, it is readily deduced from Silverstein’s identity [2, Theorem VI.20] that a potential density \(u\) exists and is given by

    \[ u(y) = \begin {cases} k \int _0^\infty v(y+z) \hat v(z) \, \dd z, & y > 0 , \\ k \int _{-y}^\infty v(y+z) \hat v(z) \, \dd z, & y < 0, \end {cases} \]

    where \(v\) and \(\hat v\) are the ascending and descending renewal densities of the process \(\xihatup \), and \(k\) is the constant in the Wiener-Hopf factorisation (14) of \(\xihatup \).

    The work of Kyprianou, Pardo and Rivero [22] gives the Wiener-Hopf factorisation of \(\xihatup \), shows that the renewal measures are absolutely continuous and computes their densities, albeit for a different normalisation of the \(\alpha \)-stable process \(X\). In our normalisation, the renewal densities are given by

    \[ v(z) = \frac {1}{\Gamma (\alpha \rhohat )} (1-e^{-z})^{\alpha \rhohat -1} , \qquad \hat v(z) = \frac {1}{\Gamma (\alpha \rho )} e^{-z} (1-e^{-z})^{\alpha \rho -1}, \]

    and \(k = 1\). See, for example, the computations in [20], where the normalisation of the \(\alpha \)-stable process agrees with ours. It then follows, by calculations similar to those in the proof of Theorem 1.4, that

    \[ u(y) = \begin {cases} \frac {1}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} (1-e^{-y})^{\alpha -1} e^{\alpha \rho y} \int _0^{e^{-y}} t^{\alpha \rho -1} (1-t)^{-\alpha }\, \dd t, & y > 0, \\ \frac {1}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )} (1-e^y)^{\alpha -1} e^{(1-\alpha \rhohat )y} \int _0^{e^y} t^{\alpha \rhohat -1} (1-t)^{-\alpha } \, \dd t, & y < 0 . \end {cases} \]

    This \(u\) is the bounded continuous density which we seek, so by substituting into (22) and (21), we arrive at the hitting probability

    \begin{equation} \label {HP-nearly} \stP _x(\tau _0 < \tau _1^+) = \begin {cases} C^\prime x^{\alpha -1} \int _0^{1-x} t^{\alpha \rho -1} (1-t)^{-\alpha } \, \dd t , & 0 < x < 1, \\ C^\prime (-x)^{\alpha -1} \int _0^{(1-x)^{-1}} t^{\alpha \rhohat -1} (1-t)^{-\alpha }\, \dd t , & x < 0, \end {cases} \end{equation}

    where \(C^\prime = \frac {C}{\Gamma (\alpha \rho )\Gamma (\alpha \rhohat )}\). It only remains to determine the unknown constant here, which we will do by taking the limit \(x \upto 0\) in (23). First we manipulate the second expression above, by recognising that \(1 = t + (1-t)\) and integrating by parts. For \(x < 0\),

    \begin{align*} &\stP _x(\tau _0 < \tau _1^+) \\ &\quad = C^\prime (-x)^{\alpha -1} \Biggl [ \int _0^{(1-x)^{-1}} t^{\alpha \rhohat } (1-t)^{-\alpha } \, \dd t + \int _0^{(1-x)^{-1}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t \Biggr ] \\ &\quad = \frac {C^\prime }{\alpha -1} \Biggl [ (1-x)^{\alpha \rho -1} - (1-\alpha \rho ) (-x)^{\alpha -1} \int _0^{(1-x)^{-1}} t^{\alpha \rhohat -1} (1-t)^{1-\alpha } \, \dd t \Biggr ]. \end{align*} Now taking \(x \upto 0\), and noting that the left-hand side tends to \(1\) (the point \(0\) being regular for itself when \(\alpha \in (1,2)\)) while \((-x)^{\alpha -1} \to 0\), we find that \(C^\prime = \alpha -1\).

    Finally, we obtain the expression required by performing the integral substitution \(s = 1/(1-t)\) in (23). □
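    To spell out this final substitution in the branch \(0 < x < 1\) (the branch \(x < 0\) is analogous, with \(\rho \) and \(\rhohat \) interchanged): \(s = 1/(1-t)\) gives \(t = \frac {s-1}{s}\), \(1-t = \frac {1}{s}\) and \(\dd t = s^{-2} \, \dd s\), whence

    \[ \int _0^{1-x} t^{\alpha \rho -1} (1-t)^{-\alpha } \, \dd t = \int _1^{1/x} (s-1)^{\alpha \rho -1} s^{\alpha \rhohat -1} \, \dd s , \]

    so that, with the constant \(C^\prime = \alpha - 1\) already determined,

    \[ \stP _x(\tau _0 < \tau _1^+) = (\alpha -1) \, x^{\alpha -1} \int _1^{1/x} (s-1)^{\alpha \rho -1} s^{\alpha \rhohat -1} \, \dd s , \for 0 < x < 1 . \]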

Acknowledgements. The authors would like to thank the anonymous referees, whose comments have led to great improvements in the paper.

References.
  • [1]  Bertoin, J. (1993). Splitting at the infimum and excursions in half-lines for random walks and Lévy processes. Stochastic Process. Appl. 47 17–35. MR1232850

  • [2]  Bertoin, J. (1996). Lévy processes. Cambridge Tracts in Mathematics 121. Cambridge University Press, Cambridge. MR1406564

  • [3]  Blumenthal, R. M., Getoor, R. K. and Ray, D. B. (1961). On the distribution of first hits for the symmetric stable processes. Trans. Amer. Math. Soc. 99 540–554. MR0126885

  • [4]  Blumenthal, R. M. and Getoor, R. K. (1968). Markov processes and potential theory. Pure and Applied Mathematics, Vol. 29. Academic Press, New York. MR0264757

  • [5]  Bogdan, K., Burdzy, K. and Chen, Z.-Q. (2003). Censored stable processes. Probab. Theory Related Fields 127 89–152. MR2006232

  • [6]  Caballero, M. E. and Chaumont, L. (2006). Conditioned stable Lévy processes and the Lamperti representation. J. Appl. Probab. 43 967–983. MR2274630

  • [7]  Caballero, M. E., Pardo, J. C. and Pérez, J. L. (2010). On Lamperti stable processes. Probab. Math. Statist. 30 1–28. MR2792485

  • [8]  Caballero, M. E., Pardo, J. C. and Pérez, J. L. (2011). Explicit identities for Lévy processes associated to symmetric stable processes. Bernoulli 17 34–59. MR2797981

  • [9]  Chaumont, L. and Doney, R. A. (2005). On Lévy processes conditioned to stay positive. Electron. J. Probab. 10 948–961. MR2164035

  • [10]  Chaumont, L., Kyprianou, A. E. and Pardo, J. C. (2009). Some explicit identities associated with positive self-similar Markov processes. Stochastic Process. Appl. 119 980–1000. MR2499867

  • [11]  Chaumont, L., Panti, H. and Rivero, V. (8 November 2011). The Lamperti representation of real-valued self-similar Markov processes. Preprint, hal-00639336, version 1. URL http://hal.archives-ouvertes.fr/hal-00639336.

  • [12]  Doney, R. A. and Kyprianou, A. E. (2006). Overshoots and undershoots of Lévy processes. Ann. Appl. Probab. 16 91–106. MR2209337

  • [13]  Fitzsimmons, P. J. (2006). On the existence of recurrent extensions of self-similar Markov processes. Electron. Comm. Probab. 11 230–241. MR2266714

  • [14]  Getoor, R. K. (1961). First passage times for symmetric stable processes in space. Trans. Amer. Math. Soc. 101 75–90. MR0137148

  • [15]  Getoor, R. K. (1966). Continuous additive functionals of a Markov process with applications to processes with independent increments. J. Math. Anal. Appl. 13 132–153. MR0185663

  • [16]  Gnedin, A. V. (2010). Regeneration in random combinatorial structures. Probab. Surv. 7 105–156. MR2684164

  • [17]  Gradshteyn, I. S. and Ryzhik, I. M. (2007). Table of integrals, series, and products, Seventh ed. Elsevier/Academic Press, Amsterdam. Translated from the Russian. Translation edited and with a preface by Alan Jeffrey and Daniel Zwillinger. MR2360010

  • [18]  Kadankova, T. V. and Veraverbeke, N. (2007). On several two-boundary problems for a particular class of Lévy processes. J. Theoret. Probab. 20 1073–1085. MR2359069

  • [19]  Kuznetsov, A., Kyprianou, A. E. and Pardo, J. C. (2012). Meromorphic Lévy processes and their fluctuation identities. Ann. Appl. Probab. 22 1101–1135.

  • [20]  Kuznetsov, A. and Pardo, J. C. (2010). Fluctuations of stable processes and exponential functionals of hypergeometric Lévy processes. arXiv:1012.0817v1 [math.PR].

  • [21]  Kyprianou, A. E. (2006). Introductory lectures on fluctuations of Lévy processes with applications. Universitext. Springer-Verlag, Berlin. MR2250061

  • [22]  Kyprianou, A. E., Pardo, J. C. and Rivero, V. (2010). Exact and asymptotic \(n\)-tuple laws at first and last passage. Ann. Appl. Probab. 20 522–564. MR2650041

  • [23]  Kyprianou, A. E. and Patie, P. (2011). A Ciesielski-Taylor type identity for positive self-similar Markov processes. Ann. Inst. H. Poincaré Probab. Statist. 47 917–928. MR2848004

  • [24]  Kyprianou, A. E. and Rivero, V. (2008). Special, conjugate and complete scale functions for spectrally negative Lévy processes. Electron. J. Probab. 13 1672-1701. MR2448127

  • [25]  Lamperti, J. (1972). Semi-stable Markov processes. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 22 205–225. MR0307358

  • [26]  Port, S. C. (1967). Hitting times and potentials for recurrent stable processes. J. Analyse Math. 20 371–395. MR0217877

  • [27]  Rivero, V. (2005). Recurrent extensions of self-similar Markov processes and Cramér’s condition. Bernoulli 11 471–509. MR2146891

  • [28]  Rogers, L. C. G. and Williams, D. (2000). Diffusions, Markov processes, and martingales. Vol. 1. Cambridge Mathematical Library. Cambridge University Press, Cambridge. MR1796539

  • [29]  Rogozin, B. A. (1971). The distribution of the first ladder moment and height and fluctuation of a random walk. Theory Probab. Appl. 16 575–595.

  • [30]  Rogozin, B. A. (1972). The distribution of the first hit for stable and asymptotically stable walks on an interval. Theory Probab. Appl. 17 332–338.

  • [31]  Sato, K.-i. (1999). Lévy processes and infinitely divisible distributions. Cambridge Studies in Advanced Mathematics 68. Cambridge University Press, Cambridge. MR1739520

  • [32]  Silverstein, M. L. (1980). Classification of coharmonic and coinvariant functions for a Lévy process. Ann. Probab. 8 539–575. MR573292

  • [33]  Song, R. and Vondraček, Z. (2006). Potential theory of special subordinators and subordinate killed stable processes. J. Theoret. Probab. 19 817–847. MR2279605

  • [34]  Vuolle-Apiala, J. (1994). Itô excursion theory for self-similar Markov processes. Ann. Probab. 22 546–565. MR1288123

A. E. Kyprianou, A. R. Watson
Department of Mathematical Sciences
University of Bath
Bath, BA2 7AY
United Kingdom.
E-mail: a.kyprianou@bath.ac.uk
aw295@bath.ac.uk

  

J. C. Pardo
CIMAT A.C.
Calle Jalisco s/n
C.P.36240, Guanajuato, Gto.
Mexico.
E-mail: jcpardo@cimat.mx