0, \ell \in \ubar{\Lambda}_x. \end{equation} Hence $\Jcl_P = \ubar I$ on $[0,|\mu|]$. Moreover, recalling that $H_P(x)=\{h : I_C(h)= \ubar I(x),\linebreak P(h) = 2x\}$, \begin{equation}\label{eq: inf attained <} H_P(x)=\left\{t \mapsto x \ell_x t\right\}, \quad x \in \left(r_{min}, |\mu|\right]. \end{equation} Indeed, the facts that $\ubar I$ is strictly decreasing on $(r_{min}, |\mu|]$ and that Jensen's inequality~\eqref{eqn: Jensen direct} for the strictly convex rate function $I$ turns into an equality only on functions with a.e. constant derivative imply that the minimum in~\eqref{eq: rate function I_P} is attained only on functions $h \in AC_0[0,1]$ that satisfy $P(h)= 2 |h(1)|=2x$, that is $h(t)= x \ell t$ for some $\ell \in \Sbb^1$. The unique function $h$ of this form that satisfies the equality $I_C(h) = \ubar I(x)$ corresponds to the direction $\ell_x$. The equality in~\eqref{eq: LD per < E} for $x \in (r_{min}, |\mu|]$ now follows from the LDP for the perimeters $(P_n/(2n))_{n\,\ge\,1}$ proved in Part~\eqref{theo2.3.1}. In fact, we have $\Jcl_P = \ubar I$ on $(r_{min}, |\mu|]$. On this interval $\ubar I$ is decreasing and convex (see Lemma~\ref{lem: properties of I_}$\MK$\eqref{item: I_ unimodal}), hence continuous, and so the set $[0, x]$, which corresponds to the event $\{P_n\le 2xn\}$, is regular for the rate function $\Jcl_P$. The claim in~\eqref{eq: LD per < E} for $x = r_{min}$ holds trivially by $P_n \ge 2 r_{min} n$ a.s. Consider now the case $[|\mu|, \infty)$. Our main estimate follows from the inequality $I(v) \ge \conv \ubar I(|v|)$, $v \in \Rbb^2$, and Jensen's inequality applied with the convex function $\conv \ubar I$. For any $h \in AC_0[0,1]$, we have \begin{equation}\label{eq: I_C >=} \begin{aligned} I_C(h) &= \int_0^1 I(h'(t)) dt \ge \int_0^1 \conv \ubar I(|h'(t)|) dt \ge \conv \ubar I \left(\int_0^1 |h'(t) | dt\right) \\ &= \conv \ubar I \left(\Var(h) \right), \end{aligned} \end{equation} where $\Var(h)$ denotes the total variation, i.e.
the length, of a curve $h \in C_0[0,1]$. Now use the following well-known inequality (see Corollary~\ref{cor:length of curve} in the Appendix), which is even referred to as geometric ``folklore'': $\Var(h) \ge \frac12 P(h)$ for any $h \in C_0[0,1]$ of bounded variation. Since the function $\ubar I$ increases on $[|\mu|, r_{max}]$, so does its largest convex minorant $\conv \ubar I$. With the above, from~\eqref{eq: I_C >=} we get: for $x \ge |\mu|$, \begin{multline}\label{eq: I_C >= 2} \Jcl_P(x)=\min_{h\,:\,P(h)\,=\,2x} I_C(h) \ge \min_{h\,:\, \Var(h)\,\ge\,x} I_C(h) \\ \ge \min_{h\,:\,\Var(h)\,\ge\,x} \left(\conv \ubar I \left(\Var(h) \right) \right) \ge \min_{r\,\ge\,x} \left(\conv \ubar I \left(r \right) \right) = \conv \ubar I(x). \end{multline} Using~\eqref{eq: I_C(xlt)} for an upper bound, this gives $\conv \ubar I \leqslant \Jcl_P \leqslant \ubar I $ on $[|\mu|, \infty)$. We claim that if $\ubar I(x) = \conv \ubar I(x)$ for $x \ge |\mu|$, then \begin{equation}\label{eq: inf attained >} H_P(x)= \left\{t \mapsto x \ell t, \ell \in \ubar{\Lambda}_x \right\}. \end{equation} We first note that by $\conv \ubar I \leqslant \Jcl_P \leqslant \ubar I $ and the assumption $\ubar I(x) = \conv \ubar I(x)$, all inequalities in~\eqref{eq: I_C >= 2} are equalities. Then, since $\conv \ubar I$ is strictly increasing on $[|\mu|, r_{max})$, the infima in~\eqref{eq: I_C >= 2} are attained on the functions $h \in C_0[0,1]$ that satisfy $\Var(h) = \frac12 P(h)=x$. By Corollary~\ref{cor:length of curve} in the Appendix, such functions have the form $h(t) = |h(t)| \ell$ a.e. $t$ for some $\ell \in \Sbb^1$ and satisfy $\Var(h)=x$. Further, the second inequality in~\eqref{eq: I_C >=} is an equality iff $|h'(t)| \in [x_1, x_2]$ a.e. $t$, where $[x_1, x_2]$ is the inclusion-maximal interval that contains $x$ and on which the restriction of $\conv \ubar{I}$ is affine.
Finally, the first inequality in~\eqref{eq: I_C >=} is an equality for a function $h \in AC_0[0,1]$ that satisfies the conditions above iff \[ \left|h'(t)\right| \in \left\{y \in [x_1, x_2]: I(y \ell) = \conv \ubar{I}(y) \right\} =: L_x\text{ a.e. }t \] with the direction $\ell$ already fixed above. Since the rate function $I$ is strictly convex, so is $I(\cdot \ell)$, hence $L_x=\{x\}$. Thus we obtain that $|h'(t)| = x$ a.e. $t$ and by $I(x \ell) = \ubar I(x)$, we have $\ell \in \ubar{\Lambda}_x$. This finishes the proof of~\eqref{eq: inf attained >}. It remains to prove~\eqref{eq: LD per > E}. In general, for an $x \in [|\mu|, r_{max}]$ we cannot ensure regularity of the set $[x, \infty)$ (corresponding to the event $\{P_n \ge 2xn\}$) for the rate function $\Jcl_P$. The upper bound in~\eqref{eq: LD per > E} immediately follows from the LDP for the perimeters $(P_n/(2n))_{n\,\ge\,1}$ we proved in Part 1 and the inequality $\conv \ubar I \leqslant \Jcl_P$ (cf.~the upper bound in~\eqref{eq: LD general} and~\eqref{eq: I_C >= 2}, respectively). For the lower bound in~\eqref{eq: LD per > E}, we consider two cases. If $\ubar I$ is continuous at $x$, then we use the inequality $\Jcl_P \leqslant \ubar I $ and the LDP for the perimeters (cf.~the lower bound in~\eqref{eq: LD general}). If $\ubar I$ is discontinuous at $x$, then by Lemma~\ref{lem: properties of I_}$\MK$\eqref{item: I_ discontinuous}, the distribution of $X_1$ has atoms at the points of $x \ubar{\Lambda}_x$, which must have equal weights satisfying $\ubar I(x)= - \log \Pbb(X_1 = x \ell)$ for $\ell \in \ubar{\Lambda}_x$. Then \[ \Pbb(P_n \ge 2 x n) \ge \Pbb\left(S_k = k x \ell, \ k=1,\,\ldots,\, n \, \text{ for some } \ell \in \ubar{\Lambda}_x\right) = \#(\ubar{\Lambda}_x) e^{-n \ubar{I}(x)}, \] which gives the lower bound in~\eqref{eq: LD per > E}. The proof of~\eqref{eq: LD per > E} is now finished.
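As a quick numerical aside (a sketch, not needed for the proof above), the ``folklore'' inequality $\Var(h) \ge \frac12 P(h)$ is easy to test on a concrete path; the polygonal line below is a hypothetical example whose convex hull, together with $0$, is a triangle.

```python
import math

# Sanity check of the "folklore" inequality Var(h) >= P(h)/2 on a
# hypothetical polygonal line h through 0 -> (1, 0) -> (0.5, 1).
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

# total variation = length of the polygonal line
var = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# the convex hull of the path (together with 0) is the triangle on pts,
# so its perimeter is the sum of the three side lengths
per = sum(math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3))

print(var, per / 2)  # var exceeds per/2; equality would need a line segment
```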
\eqref{theo2.3.3} The claims follow from the general statement~\eqref{eq: LDP conditional} combined with~\eqref{eq: inf attained <}, \eqref{eq: inf attained >} and using that $\max_{0\,\le\,k\,\le \,n} | S_k/n - h(k/n) | \le \max_{0\,\le\,t\,\le\,1} |S_n(t) - h(t)|$ for any $h \in C_0[0,1]$.\qedhere \end{proof} \goodbreak \begin{proof}[Proof of Theorem~\ref{thm: LD area}.] Our argument is fully based on the ideas we developed in the proof of Theorem~\ref{thm: perimeter LD}.\ \\*[-1.3em] \eqref{theo2.11.1} Denote by $A(h)$ the area of the convex hull of a curve $h \in C_0[0,1]$, i.e. $A(h):= A(\conv(\im h))$. It follows from the Steiner formula~\eqref{eq: Steiner} that $A$ is a continuous functional on $C_0[0,1]$. From~\eqref{eq: conve hull}, one has \[ A(S_n(\cdot)/n)= A_n/n^2. \] This equality, Mogulskii's LDP for trajectories of random walks (see Section~\ref{Sec: basics on LDP}), and the contraction principle (\cite[Theorem~4.2.1]{DemboZeitouni}) for continuous mappings yield that the sequence $(A_n/n^2)_{n\,\ge\,1}$ satisfies an LDP in $\Rbb$ with speed $n$ and the tight rate function \begin{equation}\label{eq: rate function I_A} \Jcl_A(a)=\inf_{h\,\in\,C_0\,[0,\,1]\,: \, A(h)\,=\,a} I_C(h) = \min_{h\,\in\,AC_0\,[0,\,1]\,:\, A(h)\,=\,a} I_C(h), \end{equation} where, recall, $I_C$ is given by~\eqref{eq: I def}. This implies~\eqref{eq: rate func I_A}. We used that the lower semi-continuous non-negative function $I_C$ on $C_0[0,1]$ has compact sub-level sets and therefore it always attains its infimum over the closed set $\{A(h)=a\}$. Clearly, $\Jcl_A(0)=0$. Let us check that $\Jcl_A$ is strictly increasing on the set $\Dcl_{\Jcl_A}$. This assertion is trivial if this set is $\{0\}$, otherwise pick a positive $a \in \Dcl_{\Jcl_A}$. Then $\Jcl_A(a)=I_C(h)$ for some $h \in C_0[0,1]$ such that $A(h)=a$. Clearly, $h'$ is not constant a.e. on $[0,1]$ since otherwise $A(h)=0$.
Consider the function $h_s$ such that $h_s(t)=(t/s)h(s) $ for $t \in [0,s]$ and $h_s=h$ on $[s,1]$, where $s \in (0,1]$; put $h_0:=h$. The area $A(h_s)$ decreases in $s$ and satisfies $A(h_0)=A(h)$, $A(h_1)=0$. By Jensen's inequality, we also have $I_C(h_s) \le I_C(h)$. From strict convexity of $I$, this inequality is strict if $A(h_s) < A(h)$. Since $A(h_s)$ is continuous in $s \in [0,1]$, these inequalities imply that $\Jcl_A(a_1) < I_C(h) =\Jcl_A(a)$ for any $a_1 \in [0, a)$. Thus, the rate function $\Jcl_A$ is left-continuous on $\Dcl_{\Jcl_A}$ since it is lower semi-continuous and increasing. Then~\eqref{eq: LDP A} follows from (and is easily seen to be equivalent to) the LDP for the areas $(A_n/n^2)_{n \,\ge\,1}$. Finally, \eqref{eq: A shape} holds by the general result~\eqref{eq: LDP conditional}. \eqref{theo2.11.2} The isoperimetric inequality for convex hulls, \begin{equation}\label{eq: isoperimetric} A(h) \le \Var(h)^2 /(2 \pi), \end{equation} is valid for any function $h \in C_0[0,1]$ of bounded variation. This is Ulam's version of the classical Dido problem, solved by Moran~\cite{Moran}. We have $\ubar I = \conv \ubar I$ by convexity of $\ubar I$, which follows from rotational invariance of the distribution of $X_1$. Then by~\eqref{eq: I_C >=} and~\eqref{eq: rate function I_A}, for $ a \ge 0$ \[ \Jcl_A(a)= \min_{h\,\in\,C_0\,[0,\,1]\,:\,A(h)\,=\,a} I_C(h) \ge \min_{h\,:\,\Var(h)^2\,\ge\,2\,\pi\,a} I_C(h) \ge \min_{h\,:\,\Var(h)\,\ge\,\sqrt{2\,\pi\,a}} \ubar I (\Var(h)), \] hence $\Jcl_A(a) \ge \ubar I (\sqrt{2 \pi a}) $ by~\eqref{eq: isoperimetric}. These three inequalities are actually equalities, with the minima attained only on the functions that parametrize half-circles with the constant speed $\sqrt{2 \pi a }$, and thus~\eqref{eq: A shape invar} holds true. In fact, the value of $I_C$ on such a function $h$ is exactly $\ubar I (\sqrt{2 \pi a})$.
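As a numerical illustration (a sketch, not needed for the proof), a constant-speed parametrization of a half-circle enclosing area $a = 1$ has length $\sqrt{2\pi a}$ and attains equality in the isoperimetric inequality~\eqref{eq: isoperimetric}; the discretization parameters below are hypothetical.

```python
import math

# Numerical illustration: a constant-speed half-circle attains equality
# in the isoperimetric inequality A(h) = Var(h)^2 / (2*pi).
a = 1.0                              # target area of the convex hull
rho = math.sqrt(2 * a / math.pi)     # radius of a half-disc of area a
n = 20000                            # discretization resolution (assumed)
# polygonal approximation of the half-circle, started at the origin
pts = [(rho * math.cos(math.pi * k / n) - rho, rho * math.sin(math.pi * k / n))
       for k in range(n + 1)]
var = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))  # length of the curve
# shoelace area of the closed polygon (arc + closing chord) = hull area
area = abs(sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2
print(var, math.sqrt(2 * math.pi * a), area, var ** 2 / (2 * math.pi))
```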
Since $\ubar I$ strictly increases on $[0, r_{max}]$, it must be that $a=A(h) = \Var(h)^2 /(2 \pi)$ and $|h'(t)|$ is constant for a.e.~$t$, ensuring that the second inequality in~\eqref{eq: I_C >=} is an equality. Moreover, the isoperimetric inequality~\eqref{eq: isoperimetric} is an equality only on parametrizations of semi-circles; see Tilli~\cite{Tilli}. Finally, by Lemma~\ref{lem: properties of I_}$\MK$\eqref{item: I_ unimodal}, the rate function $\Jcl_A(a)$ is continuous on $[0, \infty)$, hence \eqref{eq: LDP A} is valid for every $a \ge 0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: LD Gaussian area}.] We need to find the rate function $\Jcl_A$ given by~\eqref{eq: rate func I_A}. For any $h \in AC_0[0,1]$, by Jensen's inequality we have \begin{align*} I_C(h)& = \frac12 \int_0^1 \left|h'(t)-\mu\right|^2 dt \\ &= \frac12 \int_0^1\left|h'(t)\right|^2 dt - h(1) \cdot \mu + \frac12 |\mu|^2 \ge \frac12 \Var(h)^2 - h(1) \cdot \mu + \frac12 |\mu|^2, \end{align*} where the inequality is an equality iff $|h'(t)|=\Var(h)$ for a.e. $t$. Hence, using that $\Var(h)$ is invariant under rotations of the image of $h$ about $0$, we get \begin{equation}\label{eq: inf Gaussian shifted} \begin{aligned} \Jcl_A(a) &= \min_{h\,\in\,C_0\,[0,\,1]\,:\,A(h)\,=\,a} I_C(h) \\ &= \min_{r\,\ge\,0} \left(- r |\mu| + \min_{\substack{h\,:\,A(h)\,=\,a,\\ h(1)\,=\,r\,\mu / |\mu|}} \frac12 \Var(h)^2 \right) + \frac12 |\mu|^2, \quad a \ge 0. \end{aligned} \end{equation} Assume $a>0$. It follows immediately from an approximation argument and the result by Pach~\cite{pach1978isoperimetric} for polygonal lines (see his Theorem~2 and the Remark just after it) that the above minimum over $h$ with the fixed endpoint $h(1)$ is attained only on parametrizations $h$ of circular arcs with $A(h)=a$. Denote by $R$ the radius of such an arc and by $2 \varphi$ its angle, where $R>0$ and $0 \le \varphi \le \pi$.
Then $\Var(h)\linebreak= 2 \varphi R$, $\sin \varphi = r/(2 R)$, and $A(h)= \varphi R^2 - \frac12 r R \cos \varphi$ in both cases $0 \le \varphi \le \pi/2$ and $\pi/2 \le \varphi \le \pi$. Due to the fact that $\varphi / \sin \varphi$ is strictly increasing on $[0, \pi]$, the mapping $(\varphi, R) \mapsto (r, V)$ is a bijection between the sets $[0, \pi) \times (0,\infty)$ and $\{(r, V) \in [0, \infty) \times (0, \infty): r \le V \}$. Hence~\eqref{eq: inf Gaussian shifted} reduces to \begin{equation}\label{eq: minimization} \Jcl_A(a) =\frac12 |\mu|^2 + 2 \min_{\substack{0\,\le\,\varphi \,\le\,\pi,\,R\,\ge\,0\,: \\ R^2\,\left(\varphi - \frac12 \sin 2 \varphi\right)\,=\,a}} \left(\varphi^2 R^2 - |\mu| R \sin \varphi \right). \end{equation} Note that $R(\varphi)=\sqrt{\text{\small{$a/(\varphi - \frac12 \sin 2 \varphi)$}}}$ satisfies $\frac{\partial R}{\partial \varphi} = -a^{-1} R^3 \sin^2 \varphi$. The values of the function $\varphi^2 R(\varphi)^2 - |\mu| R(\varphi) \sin \varphi$ at $0$, $\pi/2$, $\pi$ are respectively $+\infty$, at most $\pi a/2$, and $\pi a$, hence this function attains its minimum at a critical point inside $(0, \pi)$ satisfying \[ 2 \varphi R^2 + 2 \varphi^2 R \frac{\partial R}{\partial \varphi} = |\mu| \frac{\partial R}{\partial \varphi} \sin \varphi + |\mu| R \cos \varphi. \] Dividing by $R^4$, substituting the expression for $\frac{\partial R}{\partial \varphi}$, and using the constraint $R^2 \left(\varphi - \frac12 \sin 2 \varphi\right) = a$ gives \[ \frac{2 \varphi}{a}\left(\varphi - \sin \varphi \cos \varphi - \varphi \sin^2 \varphi\right) = \frac{|\mu|}{a R} \left(- \sin^3 \varphi + \left(\varphi - \sin \varphi \cos \varphi\right)\cos \varphi\right). \] Then $2 \varphi(\varphi \cos^2 \varphi - \sin \varphi \cos \varphi) = \frac{|\mu|}{R} (- \sin \varphi + \varphi \cos \varphi)$, and using that $\varphi \neq \tan \varphi $ on $(0, \pi/2)$, \begin{equation}\label{eq: radius} R = \frac{|\mu|}{2 \varphi \cos \varphi}, \end{equation} which is possible only when $\varphi \in (0, \pi/2)$.
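The computation above can be sanity-checked numerically; the following sketch (with hypothetical sample values $|\mu| = 1.3$ and $\varphi = 0.9$) verifies the derivative formula for $\partial R/\partial \varphi$ by finite differences and confirms that the radius~\eqref{eq: radius} indeed yields a critical point of $\varphi^2 R(\varphi)^2 - |\mu| R(\varphi) \sin \varphi$.

```python
import math

# Numerical sanity check (hypothetical sample values, not part of the proof):
# verify dR/dphi = -R^3 sin^2(phi)/a and that the radius from eq. (radius)
# is a critical point of the reduced objective phi^2 R^2 - |mu| R sin(phi).
mu, phi = 1.3, 0.9                            # assumed values, phi in (0, pi/2)
R = mu / (2 * phi * math.cos(phi))            # candidate radius
a = R ** 2 * (phi - 0.5 * math.sin(2 * phi))  # matching area constraint

def R_of(p):
    # R(phi) determined by the constraint R^2 (phi - sin(2 phi)/2) = a
    return math.sqrt(a / (p - 0.5 * math.sin(2 * p)))

def F(p):
    # reduced objective from the minimization over circular arcs
    r = R_of(p)
    return p * p * r * r - mu * r * math.sin(p)

eps = 1e-6
dR = (R_of(phi + eps) - R_of(phi - eps)) / (2 * eps)  # finite difference
dF = (F(phi + eps) - F(phi - eps)) / (2 * eps)
print(dR, -R ** 3 * math.sin(phi) ** 2 / a)  # the two values agree
print(dF)                                    # ~ 0: phi is a critical point
```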
Substituting~\eqref{eq: radius} into the constraint $R^2\left(\varphi - \frac12 \sin 2 \varphi\right) = a$ gives \begin{equation}\label{eq: a=} \frac{a}{|\mu|^2} = \frac{2\varphi - \sin 2 \varphi}{8 \varphi^2 \cos^2 \varphi}. \end{equation} It is easy to check that this equation has only one solution $\varphi \in [0, \pi/2)$ for every $a\ge 0$. In fact, the right-hand side of~\eqref{eq: a=} equals zero at $\varphi =0$ and $+\infty$ at $\varphi = \pi/2$, and its derivative \[ \frac{1}{2\varphi^3 \cos^3\varphi} \left( \cos^2 \varphi \sin \varphi + \varphi^2\sin \varphi - \varphi \cos \varphi \right) \] is positive on $ (0, \pi/2)$ by \begin{multline*} \cos^2 \varphi \sin \varphi + \varphi^2\sin \varphi - \varphi \cos \varphi> \cos^2 \varphi \sin \varphi + \sin^3 \varphi - \varphi \cos \varphi\\ =\sin \varphi -\varphi \cos \varphi= \cos \varphi (\tan \varphi - \varphi)>0. \end{multline*} Substituting~\eqref{eq: radius} into~\eqref{eq: minimization} and using~\eqref{eq: a=}, we obtain \[ \Jcl_A(a) = \inf_{h\,\in\,C_0\,[0,\,1]\,:\,A(h)\,=\,a} I_C(h) = \frac{|\mu|^2}{2} \left(\frac{1}{\cos^2 \varphi} - \frac{2 \tan \varphi}{\varphi} + 1 \right) = 4 \varphi a -\frac12 |\mu|^2 \tan^2 \varphi. \] Clearly, this function is continuous in $a$, hence~\eqref{eq: LDP A} is valid for every $a \ge 0$. The infimum is attained only at either of the two $\mu$-axially symmetric curves \[ R \big(\sin(2 \varphi t - \varphi) + \sin \varphi,\pm \cos(2 \varphi t - \varphi) \mp \cos \varphi \big), \] where $\varphi$ and $R$ are determined by~\eqref{eq: a=} and~\eqref{eq: radius}, respectively. This yields~\eqref{eq: A shape shifted}. \end{proof} \subsection{The LDP's in continuous time} \label{sec: continuous time proof} Here we obtain LDP's for convex hulls of L\'evy processes by reduction to random walks. \begin{proof}[Proof of Theorem~\ref{thm: LD Levy}] First consider the perimeter $\sfP_T$ of the convex hull $\sfC_T= \conv(\{S_t\}_{0\,\le\,t\,\le\,T}) $ of the L\'evy process $(S_t)_{t\,\ge\,0}$.
We shall compare it with the perimeter $P_{[T]}$ of the convex hull $C_{[T]}=\conv(0, S_1,\, \ldots,\,S_{[T]})$ of the random walk $(S_t)_{t\,\in\,\N\,\cup\, \{0\}}$. It follows from Cauchy's formula~\eqref{eq: Cauchy} that \[ 0 \le \left(\sfP_T - P_{[T]}\right)/(2 \pi) \le \max_{k\,\in\, \{0,\,1,\,\ldots,\,[T]\}} \sup_{k\,\le\,t\,\le\,k+1}\left|S_t - S_k\right| =:d_T, \] where $d_T$ is an upper bound for the Hausdorff distance between $\sfC_T$ and $C_{[T]}$. Let us estimate the probabilities of large deviations of $d_T/T$. By stationarity of increments of $(S_t)_{t\,\ge\,0}$, for every $\varepsilon>0$ we have \[ \Pbb \left(d_T\ge \varepsilon T \right) \le ([T] +1) \Pbb \left(\sup_{0\,\le\,t\,\le\,1} |S_t | \ge \varepsilon T \right). \] Put $\tilde S_t :=S_t - t \mu$ for $t \ge 0$ (recall that $S_1 = X_1$) and let $\tilde S_t^{(1)}$ and $\tilde S_t^{(2)}$ be the coordinates of $\tilde S_t$ in any orthonormal basis of $\Rbb^2$. Note that \[ \sup_{0\,\le\,t\,\le\,1} |S_t | \le |\mu| + \sup_{0\,\le\,t\,\le \,1} |\tilde S_t | \le |\mu| + \max_{i,\,j\,\in\,\{1,\,2\}} \sup_{0\,\le\,t\,\le\,1} \left((-1)^i \tilde S_t^{(j)} \right). \] Denote $a:= \varepsilon T- |\mu|$. Then for any $u>0$ we get \begin{align*} \Pbb \left(\sup_{0\,\le\,t\,\le\,1} |S_t | \ge \varepsilon T \right) &\le \sum_{i,\,j\,=\,1}^2 \Pbb \left(\sup_{0\,\le\,t\, \le\,1} \left((-1)^i \tilde S_t^{(j)} \right) \ge a \right) \\ &= \sum_{i,\,j\,=\,1}^2 \Pbb \left(\sup_{0\,\le\,t\,\le\,1} e^{u \,(-1)^i\,\tilde S_t^{(j)}} \ge e^{u a} \right). \end{align*} Since $(\tilde S_t)_{t\,\ge\,0}$ is a zero-mean L\'evy process in $\Rbb^2$, each of the four stochastic processes $((-1)^i \tilde S_t^{(j)})_{t\,\ge\,0}$ is a right-continuous real-valued martingale. Then $ (e^{u (-1)^i \tilde S_t^{(j)}})_{t \,\ge\,0}$ are right-continuous positive sub-martingales, because $x \mapsto e^{u x}$ is a positive convex function of $ x \in \Rbb$.
Hence, applying Doob's maximal inequality (Revuz and Yor~\cite[Theorem~1.7, Chapter~II]{RevuzYor}), we obtain \begin{multline*} \Pbb \left(\sup_{0\,\le\,t\,\le\,1} |S_t | \ge \varepsilon T \right) \le \sum_{i,\,j\,=\,1}^2 e^{-ua } \E e^{u (-1)^i \tilde S_1^{(j)}}\\ = \sum_{i,\,j\,=\,1}^2 \exp \left\{- \left(ua - \log \E e^{u\,(-1)^i\,\tilde S_1^{(j)}} \right) \right\}. \end{multline*} Now, if $a>0$ (note that $\E \tilde S_1^{(j)} = 0$), then optimizing the last expression over $u >0$ yields \[ \Pbb \left(d_T \ge \varepsilon T \right) \le ([T] +1) \sum_{i,\, j\,=\,1}^2 \exp \left\{- I_{i, j} \left(\varepsilon T - |\mu| \right) \right\}, \] where $I_{i,\,j}$ denotes the rate function of $(-1)^i \tilde S_1^{(j)}$. Since the Laplace transform of $S_1$ is finite on $\Rbb^2$ by assumption, the Laplace transform of each of the random variables $(-1)^i \tilde S_1^{(j)}$ is finite on $\Rbb$. This implies $\lim_{u\,\to\,\infty} I_{i,\,j}(u)/u = \infty$; see~Rockafellar~\cite[Theorems~8.5 and 13.3]{Rockafellar} or Vysotsky~\cite[Eqs.~(5.4) and~(5.5)]{VysotskyNote}. Therefore, for every $\varepsilon >0$, we have \begin{equation}\label{eq: d_T exp zero} \lim_{T\,\to\,\infty} \frac{1}{T} \log \Pbb \left(d_T/T> \varepsilon \right) = -\infty, \end{equation} which means that the family of random variables $(d_T/T)_{T >0} $ is \emph{exponentially equivalent} to $0$ as $T \to \infty$ in the sense of~\cite[Definition~4.2.10]{DemboZeitouni}. Finally, let us use that \begin{align*} \left|\sfP_T/T - P_{[T]}/[T] \right| &\le \left|\sfP_T/T - P_{[T]}/T \right| + P_{[T]}\left(1/[T]-1/T\right)\\ &\le 2 \pi d_T/T + \left(P_{[T]}/[T]\right)/T, \end{align*} where the r.h.s.\ is exponentially equivalent to $0$ as $T \to \infty$ by~\eqref{eq: d_T exp zero} and the fact that $ (P_n/n)_{n\,\in\,\N} $ satisfies an LDP in $\Rbb$ with a tight rate function (by Theorem~\ref{thm: perimeter LD}).
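As an aside, the optimization over $u$ above is the standard Chernoff computation: the resulting exponent is the Legendre transform of the cumulant generating function, and it grows super-linearly in its argument whenever the Laplace transform is finite everywhere. A minimal numerical sketch (assuming, for illustration only, a standard normal variable $Z$, for which $\log \E e^{uZ} = u^2/2$ and the rate function is $a^2/2$):

```python
# Minimal sketch of the Chernoff optimization: I(a) = sup_{u >= 0} (u*a - cgf(u)),
# evaluated on a grid, for the cumulant generating function u^2/2 of a
# standard normal (an assumed example; the paper's setting is more general).
def rate(a, cgf=lambda u: u * u / 2, u_max=50.0, n=50000):
    return max(a * (u_max * k / n) - cgf(u_max * k / n) for k in range(n + 1))

print(rate(2.0))                           # the Legendre transform: a^2/2 = 2
print(rate(4.0) / 4.0, rate(8.0) / 8.0)    # I(u)/u grows without bound
```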
Therefore, the sequences $\sfP_T/(2T) $ and $ P_{[T]}/(2[T]) $ are exponentially equivalent as $T \to \infty$, hence they satisfy the same LDP by~\cite[Theorem~4.2.13]{DemboZeitouni}, as claimed. As for the areas, the Steiner formula~\eqref{eq: Steiner} yields \[ 0 \le \left(\sfA_T - A_{[T]}\right) \le P_{[T] } d_T + \pi d_T^2, \] and it follows by the same argument as above that $\sfA_T/T^2 $ and $ A_{[T]}/T^2 $ are exponentially equivalent as $T \to \infty$ (use~\eqref{eq: d_T exp zero} and the facts that $ (P_n/n)_{n\,\in\,\N} $ and $ (A_n/n^2)_{n\,\in\,\N} $ satisfy LDPs with tight rate functions). Then it follows from Theorem~\ref{thm: LD area} and~\cite[Theorem~4.2.13]{DemboZeitouni} that $(\sfA_T/T^2)_{T\,\ge\,1} $ and $ (A_{[T]}/[T]^2)_{T\,\ge\,1} $ satisfy the same LDP, as claimed. \end{proof} \subsection{The LDP's under the Cram\'er moment assumption} \label{sec: LDP Cramer} Here we partially extend our main Theorems~\ref{thm: perimeter LD} and~\ref{thm: LD area} under the weaker assumption $0 \in \intr \Dcl_{\Lcl}$. We will use the contraction principle by~Vysotsky~\cite{VysotskyNote}. Denote by $BV[0,1]=BV([0,1];\Rbb^2)$ the set of right-continuous functions of bounded variation from $[0,1]$ to $\Rbb^2$. Denote by $A(h)$ and $P(h)$ respectively the area and the perimeter of $\conv(h([0,1]) \cup \{ 0 \}) $ of an $h \in BV[0,1]$. This extends the definitions given in Section~\ref{Sec: main proofs} for $h \in C_0[0,1]$. Consider the functional \begin{equation}\label{eq: I BV} I_{BV}(h):= \sup_{\bft\,\subset\,(0,1]\,: \, \# \bft\,<\,\infty } I_C(h^{\bft}), \quad h \in BV[0,1], \end{equation} where $h^{\bft}$ denotes the continuous function on $[0,1]$ defined by linear interpolation between its values at $\bft \cup \{0,1\}$ that are given by $h^{\bft}(s) := h(s)$ for $s \in \bft \cup \{1\}$ and $h^{\bft}(0):=0$. 
This functional satisfies $I_{BV}=I_C$ on $AC_0[0,1]$; see~\cite[Theorem~5.1]{VysotskyNote}, which gives an explicit and transparent formula for $I_{BV}(h)$ in terms of the Lebesgue decomposition of $h$. \begin{prop}\label{prop: Cramer} Assume that $X_1$ is a random vector in the plane such that $0 \in \intr \Dcl_{\Lcl}$. Then the random variables $(P_n/(2n))_{n\,\ge\,1}$ and $(A_n/n^2)_{n\,\ge\,1}$ satisfy the LDP's in $\Rbb$ with speed $n$ and the respective tight rate functions $\tilde{\Jcl}_P $ and $\tilde{\Jcl}_A$ given by \begin{equation}\label{eq: rate func Cramer} \tilde{\Jcl}_P(x):=\cl \inf_{\substack{h\,\in\,BV\,[0,\,1]\,: \\ P(h)\,=\,2x}} I_{BV}(h), \quad \tilde{\Jcl}_A(x):=\cl \inf_{\substack{h\,\in\,BV\,[0,\,1]\,: \\ A(h)\,=\,x}} I_{BV}(h), \quad x \ge 0. \end{equation} These rate functions increase on $[|\mu|, \infty)$ and $[0, \infty)$, respectively. We always have $\tilde{\Jcl}_P = \ubar I$ on $[0, |\mu|]$. Moreover, $\tilde{\Jcl}_P = \ubar I$ if $\ubar I$ is convex. Also, we have $\tilde{\Jcl}_A(x) = \ubar I (\sqrt{2 \pi x})$ for $x \ge 0$ if the distribution of $X_1$ is rotationally invariant. \end{prop} Note that the monotonicity properties of $\tilde{\Jcl}_P$ and $\tilde{\Jcl}_A$ imply that the lower semi-continuous regularizations $\cl$ in~\eqref{eq: rate func Cramer} may change the values of the infima only at the discontinuity points. \begin{proof} Let us equip $BV[0,1]$ with the metric $\rho$ equal to the Hausdorff distance between the \emph{completed graphs} of functions, defined by $\Gamma h:= \{(t, x): 0 \le t \le 1, x\linebreak \in [h(t-), h(t)] \}$ for $h \in BV[0,1]$, where $h(0-):=0$. Note that $\Gamma h$ is a compact subset of $[0,1] \times \Rbb^2$ and it uniquely defines $h$, i.e. $\Gamma h_1=\Gamma h_2$ for $h_1, h_2 \in BV[0,1]$ implies $h_1=h_2$.
The total variation of an $h \in BV[0,1]$, given by $\Var(h)\linebreak:= \sup_{\bft \,\subset\,(0,\,1]\,: \, \# \bft\,<\,\infty} \Var(h^{\bft})$, is simply the length of the spatial coordinate of any continuous bijective parametrization of~$\Gamma h$. It follows from Steiner's and Cauchy's formulas~\eqref{eq: Steiner} and~\eqref{eq: Cauchy} that the functionals $A$ and $P$ are continuous in the metric $\rho$ and moreover, they are \emph{uniformly} continuous on the sets $\{h \in BV[0,1]: \Var(h)\le R\}$ for every $R>0$. Therefore, the contraction principle for the trajectories $S_n(\cdot)$ in $BV[0,1]$, given by~\cite[Theorem~3.3]{VysotskyNote} (which uses a metric stronger than $\rho$, see~\cite[Eqs.~(2.6) and~(2.7)]{VysotskyNote}), yields the LDPs stated with the respective rate functions given in~\eqref{eq: rate func Cramer}. The rest of the proof is identical to the proofs of the corresponding parts of Theorems~\ref{thm: perimeter LD} and~\ref{thm: LD area}. We comment only on the differences. The monotonicity properties of $\tilde{\Jcl}_P$ and $\tilde{\Jcl}_A$ follow from equalities~\eqref{eq: I BV} and~\eqref{eq: rate func Cramer}. We get only non-strict monotonicity since we are not claiming that the infima in~\eqref{eq: rate func Cramer} are always attained, as opposed to the main case $\Dcl_{\Lcl}=\Rbb^2$. Furthermore, by~\eqref{eq: I BV} and Jensen's inequality, we have $I_{BV}(h) \ge I(h(1))$ for any $h \in BV[0,1]$. Moreover, if $\ubar I$ is convex, we have $I_{BV}(h) \ge \ubar I(\Var(h))$. This follows from~\eqref{eq: I_C >=} and~\eqref{eq: I BV} using lower semi-continuity of $\ubar I$ (Lemma~\ref{lem: properties of I_}$\MK$\eqref{item: I_ unimodal}) if we choose an increasing sequence $({\bft}_n)_{n\,\ge\,1}$ of finite subsets of $(0,1]$ such that $I_C(h^{{\bft}_n}) \to I_{BV}(h)$ and $\Var(h^{{\bft}_n}) \to \Var(h)$ as $n \to \infty$.
The two inequalities above for $I_{BV}(h)$ yield, as in the proof of Theorem~\ref{thm: perimeter LD}, that $\inf_{h\,:\,P\,(h)\,=\,2x} I_{BV}(h) \ge \ubar I(x)$ for any $x \in [0, |\mu|]$ and also for $x \ge |\mu|$ if $\ubar I$ is convex. Hence $\tilde{\Jcl}_P(x) \ge \ubar I(x)$ for such $x$ since $\ubar I$ is lower semi-continuous. On the other hand, we have \[ \tilde{\Jcl}_P(x) \le \inf_{\substack{h\,\in\,BV\,[0,\,1]\,: \,P(h)\,=\,2x }} I_{BV}(h) \le \inf_{\substack{h\,\in\, AC_0\,[0,\,1]\,: \\ P(h)\,=\,2x }} I_{BV}(h) = \inf_{\substack{h\,\in \,AC_0\,[0,\,1]\,:\,\\ P(h)\,=\,2x }} I_C(h) \le \ubar I(x), \] where we used that $I_{BV} = I_C$ on $AC_0[0,1]$. This yields the claims on $\tilde{\Jcl}_P $. Similarly, if $\ubar I$ is convex, which is surely the case when the distribution of $X_1$ is rotationally invariant, then we have $I_{BV}(h) \ge \ubar I(\Var(h))$ for $h \in BV[0,1]$, hence $\inf_{h\,:\,A(h)\,=\,a} I_{BV}(h) \ge \ubar I (\sqrt{2 \pi a})$ for $a \ge 0$ by the same argument as in the proof of Theorem~\ref{thm: LD area}. Hence $\tilde{\Jcl}_A(a) \ge \ubar I (\sqrt{2 \pi a})$ for $a \ge 0$. On the other hand, for rotationally invariant distributions of $X_1$ we have $\tilde{\Jcl}_A(a) \le \ubar I (\sqrt{2 \pi a})$, arguing as above for $\tilde{\Jcl}_P(x) \le \ubar I(x)$. This yields the claim on~$\tilde{\Jcl}_A $. \end{proof} \appendix \section{} \paragraph{Perimeters} Throughout the paper, by the \emph{perimeter} $P(C)$ of a non-empty convex set $C$ on the plane we mean the length of its boundary unless $C$ is a line segment, in which case $P(C)$ is its doubled length. Recall that a continuous curve in $\Rbb^d$ is \emph{rectifiable} if it has finite length (equivalently, it has bounded variation). The following simple proposition is proved in our separate note~\cite{AkopyanVysotskyGeometry}, which was initially motivated by the questions concerning the perimeter of the convex hulls considered in the present paper.
For the reader's convenience, we present the result here. Its main use is in the corollary, which not only gives the ``folklore'' inequality for the half-perimeter but also specifies all instances when the equality is attained. \begin{prop}\label{prop:length of curve} Let $\gamma$ be a rectifiable curve in $\Rbb^2$, and let $\Gamma$ denote its convex hull. Then \[ \length \gamma \geq \per \Gamma - \diam \Gamma. \] \end{prop} \goodbreak \begin{coro}\label{cor:length of curve} It holds that \[ \length \gamma \geq \frac12 \per \Gamma, \] and the equality is attained only if $\gamma$ parametrizes a line segment. \end{coro} \begin{rema}\label{rem: mean width} These statements remain valid if we replace $\Rbb^2$ by $\Rbb^d$ (with any $d \ge 2$) and $\per \Gamma$ by $\tfrac{d v_d}{v_{d-1}} W(\Gamma)$, where $v_d$ denotes the volume of the unit ball in $\Rbb^d$ and \begin{equation}\label{eq: mean width} W(\Gamma):= \frac{1}{|\Sbb^{d-1}|} \int_{\Sbb^{d-1}} w_\ell(\Gamma) d \ell \end{equation} is the \emph{mean width} of $\Gamma$, with $w_\ell (\Gamma)$ being the width of $\Gamma$ in the direction $\ell$, i.e.\ the length of the projection of $\Gamma$ on the line passing through the origin in the direction $\ell$. The normalizing factor corresponds to the mean width $\tfrac{2 v_{d-1}}{d v_d}$ of a unit segment in $\Rbb^d$. \end{rema} It is easy to prove the remark using {\it Crofton's formula} (Schneider and Weil~\cite[Eq.~(5.32)]{SchneiderWeil}) \[ \length \gamma = \frac{1}{v_{d-1}}\;\iint \limits_{\Sbb^{d-1} \times\, \Rbb_+}n_{\gamma}(\ell, r) d\ell dr, \] where $n_\gamma(\ell, r)$ denotes the number of intersections of $\gamma$ with the hyperplane perpendicular to the direction $\ell$ at distance $r$ from the origin. Indeed, consider the closed curve $\gamma'$ obtained by joining the end points of $\gamma$ by a line segment. Almost every hyperplane intersecting $\Gamma$ intersects $\gamma'$ in at least two points since $\conv(\gamma')=\Gamma$. It remains to use that $|\Sbb^{d-1}| = d v_d$.
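As a quick numerical aside (an illustration only, under the assumption that the convex body is the unit square, whose width in direction $\theta$ is $|\cos\theta| + |\sin\theta|$), Cauchy's formula recovers the perimeter from the widths:

```python
import math

# Riemann-sum check of Cauchy's formula on an assumed example, the unit
# square [0,1]^2: half of the integral of the width over all directions
# should recover the perimeter, which equals 4.
n = 100000
dtheta = 2 * math.pi / n
widths = (abs(math.cos(k * dtheta)) + abs(math.sin(k * dtheta)) for k in range(n))
half_integral = sum(widths) * dtheta / 2
print(half_integral)  # ~ 4.0
```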
Note that Crofton's formula implies \emph{Cauchy's formula} for the perimeter of the planar convex set $\Gamma$: \begin{equation}\label{eq: Cauchy} \per \Gamma = \frac12 \int_{\Sbb^1} w_\ell(\Gamma) d\ell. \end{equation} \paragraph{Areas} Let $C \subset \Rbb^2$ be a non-empty bounded convex set and let $B \subset \Rbb^2$ be the closed unit ball centred at the origin. \emph{Steiner's formula} (\cite[Eq.~(14.5)]{SchneiderWeil}) asserts that for every $r>0$, \begin{equation}\label{eq: Steiner} A(C+r B) = A(C) + P(C) r + \pi r^2, \end{equation} where `$+$' stands for Minkowski addition of sets. \paragraph{Measurability} Let us show that the perimeters and areas $(P_n)_{n\,\in\,\N}, (A_n)_{n\,\in\,\N}$, $(\sfP_T)_{T >0}$, $(\sfA_T)_{T >0}$ of the convex hulls, introduced in Sections~\ref{Sec: intro} and~\ref{sec: continuous time}, are measurable. It follows from~\eqref{eq: Cauchy} and~\eqref{eq: Steiner} that for every $n \in \N$, the mappings \[ (x_1,\,\ldots,\,x_n) \mapsto P(\conv(0, x_1,\,\ldots,\,x_n)) \, \text{ and } \, (x_1,\,\ldots,\,x_n) \mapsto A(\conv(0, x_1,\, \ldots,\,x_n)) \] are continuous from $\Rbb^{2 \times n} $ to $\Rbb$. Hence $P_n$ and $A_n$ are random variables. Furthermore, for any $T>0$ and a dense subset $\{t_k\}_{k \in \N}$ of $[0, T]$ that includes $T$, \begin{align*} \cl \sfC_T &= \cl (\conv(\{S_t\}_{0\,\le\,t\,\le\,T}))= \conv (\cl (\{S_t\}_{0\,\le\,t\,\le\,T})) \\ &= \conv (\cl (\{S_{t_k}\}_{k\,\in\,\N}))\;\:\, = \cl (\conv (\{S_{t_k}\}_{k\,\in\,\N})) \quad \text{a.s.,} \end{align*} where the second and the fourth equalities hold true by~\cite[Theorem~17.2]{Rockafellar}, which applies because the trajectories of a L\'evy process are bounded a.s.\ on any interval, and in the third equality we used that the trajectories are right-continuous and have left limits a.s. Then $\cl \sfC_T= \cl (\cup_{k=1}^\infty \conv (S_{t_1},\,\ldots,\,S_{t_k}))$ by Carath\'eodory's theorem (\cite[Theorem~17.1]{Rockafellar}).
Hence, since the union on the r.h.s.\ is a convex set, we have \[ \sfA_T=A(\sfC_T)= A \left(\mathop\cup_{k=1}^\infty \conv \left(S_{t_1},\,\ldots,\,S_{t_k}\right)\right) = \lim_{k\,\to\,\infty}A\left(\conv \left(S_{t_1},\,\ldots,\,S_{t_k}\right)\right), \] and by the above, $\sfA_T$ is measurable as a limit of measurable functions. Also, for any $\ell \in \Sbb^1$, \[ w_\ell (\sfC_T)= w_\ell \left(\mathop\cup_{k=1}^\infty \conv \left(S_{t_1},\,\ldots,\,S_{t_k}\right)\right)= \lim_{k\,\to\, \infty} w_\ell \left(\conv \left(S_{t_1},\,\ldots,\,S_{t_k}\right)\right), \] which yields $\sfP_T = \lim_{k\,\to\,\infty}P(\conv (S_{t_1},\,\ldots,\,S_{t_k}))$ by~\eqref{eq: Cauchy} and the monotone convergence theorem. Hence $\sfP_T $ is measurable as a limit of measurable functions. \subsection*{Acknowledgements} We are grateful to Andrew Wade for bringing the perimeter problem to our attention, and to Endre Makai for referring us to the paper~\cite{pach1978isoperimetric} by J\'anos Pach. We wish to thank Fedor Petrov for showing us a simple proof of Proposition~\ref{prop: generalization}$\MK$\eqref{item: conv I_ =}. We are indebted to the anonymous referees for their comments and the suggestion to include continuous time results. \bibliography{AHL_Akopyan} \end{document}