September 24, 2023


[This article was first published on YoungStatS, and kindly contributed to R-bloggers.]


Exchangeable arrays have been studied since the late 70s (Aldous (1983), Kallenberg (2005)). Eagleson & Weber (1978) and Silverman (1976) established the strong law of large numbers and the central limit theorem for such arrays. Because non-sparse networks and multiway clustering give rise to exchangeable arrays, these arrays have attracted recent attention in statistics and econometrics (Davezies, D'Haultfœuille, and Guyonvarch (2018), Davezies, D'Haultfœuille, and Guyonvarch (2021), Menzel (2018)). We focus on non-sparse networks below and present results on the asymptotic normality of several nonlinear estimators. We also show the general validity of a bootstrap scheme adapted to such data.

Non-sparse networks, dyadic data and exchangeability


Dyadic data are random variables (or vectors) \(Y_{i,j}\) indexed by two units \(i\) and \(j\) from the same population. For example, \(Y_{i,j}\) can be the exports from country \(i\) to country \(j\). Another example comes from digital networks, where \(Y_{i,j}\) can be the number of messages sent from \(i\) to \(j\). It is useful to represent this kind of data as arrays:


In this set-up, the \(n(n-1)\) variables are potentially dependent, because \(Y_{i,j}\) may be correlated with \(Y_{i',j'}\) whenever \(\{i,j\}\cap\{i',j'\}\neq \emptyset\). In the first example, exports from \(i\) to \(j\) are correlated with exports from \(j\) to \(i\), but also with the other exports of \(i\) or \(j\), and even with their imports. While departing from iid sampling, the following assumptions allow for such correlations: \[\begin{align*}
\textbf{Joint exchangeability: } & (Y_{i,j})_{(i,j)\in \mathbb{N}^{\ast 2}, i\neq j}\text{ has the same distribution as }(Y_{\pi(i),\pi(j)})_{(i,j)\in \mathbb{N}^{\ast 2}, i\neq j}, \\
& \text{ for any permutation }\pi \text{ of }\mathbb{N}^{\ast}, \\
\textbf{Dissociation: } & (Y_{i,j})_{(i,j)\in \{1,\dots,k\}^2, i\neq j}\text{ and } (Y_{i,j})_{(i,j)\in \{k+1,k+2,\dots\}^2, i\neq j} \text{ are independent,} \\ & \text{for any }k \in \mathbb{N}^{\ast}.
\end{align*}\]


These assumptions generalize "iidness": if \((X_i)_{i\geq 1}\) is iid, then the array defined by \(Y_{i,j}=X_i\) for any \(i\neq j\) is jointly exchangeable and dissociated.
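Such arrays are easy to simulate. The sketch below (in Python with NumPy; the construction and function name are ours, chosen for illustration, not taken from the text) builds a jointly exchangeable, dissociated array from vertex-level and pair-level iid shocks:

```python
import numpy as np

def dyadic_array(n, rng):
    """Generate a jointly exchangeable, dissociated n x n dyadic array.

    Illustrative construction (an assumption of this sketch):
    Y[i, j] = U_i + U_j + e_ij with iid uniform shocks. Variables
    indexed by disjoint sets of vertices depend on disjoint shocks,
    hence dissociation; relabeling vertices permutes iid shocks,
    hence joint exchangeability.
    """
    u = rng.uniform(size=n)          # vertex-level shocks U_i
    e = rng.uniform(size=(n, n))     # pair-level shocks e_ij
    y = u[:, None] + u[None, :] + e
    np.fill_diagonal(y, np.nan)      # Y_{i,i} is left undefined
    return y

rng = np.random.default_rng(0)
n = 6
y = dyadic_array(n, rng)

# Joint exchangeability: relabeling vertices by a permutation pi maps
# (Y_{i,j}) to (Y_{pi(i),pi(j)}), which has the same distribution.
pi = rng.permutation(n)
y_relabeled = y[np.ix_(pi, pi)]
```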

Simple Law of Large Numbers (LLN) and Central Limit Theorem (CLT)

The following results generalize the classical LLN and CLT for iid data to jointly exchangeable and dissociated arrays:

These results actually hold for "multiadic" data, i.e., data indexed by \(k\)-tuples instead of pairs. This has useful consequences. If the variables \((Y_{i,j})_{i\neq j}\) are jointly exchangeable and dissociated, then so are the variables \(Z_{i,j,k}=(Y_{i,j}+Y_{j,i})(Y_{i,k}+Y_{k,i})'\) (for \(i\), \(j\) and \(k\) all distinct). Then, by the LLN above, if \(\mathbb{E}\left(|Y_{1,2}|^2\right)<\infty\), \[\frac{1}{n(n-1)(n-2)}\sum_{\substack{1\leq i,j,k\leq n\\ i,j,k\text{ distinct}}}Z_{i,j,k}\xrightarrow{a.s.}\mathbb{E}\left((Y_{1,2}+Y_{2,1})(Y_{1,3}+Y_{3,1})'\right).\]
Next, if we let \(\overline{Y}_i=\frac{1}{n-1}\sum_{\substack{1\leq j \leq n\\ j\neq i}}(Y_{i,j}+Y_{j,i})\) and \(\overline{Y}=\frac{1}{n}\sum_{i=1}^{n}\overline{Y}_{i}\), we obtain \[\widehat{V}=\frac{1}{n}\sum_{1\leq i \leq n}(\overline{Y}_i-\overline{Y})(\overline{Y}_i-\overline{Y})'\xrightarrow{a.s.} V.\]
Therefore, tests of the hypothesis \(\mathbb{E}(Y_{1,2})=\theta_0\) based on t-tests or F-tests using \(\widehat{V}\) are asymptotically valid, as long as \(V\) is non-singular.
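For scalar dyadic data, the mean, \(\widehat{V}\), and the resulting t-statistic take a few lines of code. The sketch below (Python with NumPy; the function name and simulated design are ours) assumes \(Y\) is stored as an \(n\times n\) array whose diagonal is ignored:

```python
import numpy as np

def dyadic_t_stat(y, theta0=0.0):
    """t-statistic for H0: E(Y_{1,2}) = theta0, scalar dyadic data.

    Uses the estimator V-hat built from
    Ybar_i = (1/(n-1)) * sum_{j != i} (Y_ij + Y_ji),
    as in the text; the diagonal of y is ignored.
    """
    n = y.shape[0]
    mask = ~np.eye(n, dtype=bool)
    theta_hat = y[mask].mean()                 # mean over the n(n-1) pairs
    s = np.where(mask, y, 0.0)
    ybar_i = (s.sum(axis=1) + s.sum(axis=0)) / (n - 1)
    ybar = ybar_i.mean()                       # equals 2 * theta_hat
    v_hat = np.mean((ybar_i - ybar) ** 2)      # V-hat from the text
    return np.sqrt(n) * (theta_hat - theta0) / np.sqrt(v_hat)

# Simulated dyadic data with E(Y_{1,2}) = 0
rng = np.random.default_rng(0)
n = 40
u = rng.normal(size=n)
y = u[:, None] + u[None, :] + rng.normal(size=(n, n))
t = dyadic_t_stat(y, theta0=0.0)   # approximately N(0, 1) under H0
```

Note the \(\sqrt{n}\) (not \(\sqrt{n(n-1)}\)) scaling: the effective sample size is the number of vertices, not the number of pairs.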

Uniform LLN and CLT

The previous results are nevertheless insufficient in many contexts, in particular for nonlinear (e.g., M- or Z-) estimators. A common way to deal with such problems is to turn the "simple" LLN and CLT into versions that are uniform over suitable classes of functions. Specifically, for \(\mathcal{F}\) a class of bounded functions with values in \(\mathbb{R}^k\), let \(\mathbb{P}_n(f)=\frac{1}{n(n-1)}\sum_{\substack{1\leq i,j\leq n\\ i\neq j}}f(Y_{i,j})\) and \(\mathbb{G}_n(f)=\sqrt{n}\left(\mathbb{P}_n(f)-\mathbb{E}(f(Y_{1,2}))\right)\). The class \(\mathcal{F}\) is called Glivenko-Cantelli if \[\begin{align}
\sup_{f\in \mathcal{F}}\left|\mathbb{P}_n(f)-\mathbb{E}(f(Y_{1,2}))\right|\xrightarrow{a.s.} 0.\label{eq1}\tag{1}
\end{align}\] Similarly, the class \(\mathcal{F}\) is Donsker for the distribution of \((Y_{i,j})_{i\neq j\geq 1}\) if \[\begin{align}\mathbb{G}_n\stackrel{d}{\longrightarrow}\mathbb{G},\label{eq2}\tag{2}\end{align}\]
where \(\mathbb{G}\) is a Gaussian process indexed on \(\mathcal{F}\).


Results \(\eqref{eq1}\) and \(\eqref{eq2}\) have been shown for iid data under various conditions. We consider two standard ones involving the so-called covering numbers of \(\mathcal{F}\). First, we introduce additional notation. For any \(\eta > 0\) and any seminorm \(||\cdot||\) on \(\mathcal{F}\), \(N(\eta, \mathcal{F}, ||\cdot||)\) denotes the minimal number of \(||\cdot||\)-closed balls of radius \(\eta\) with centers in \(\mathcal{F}\) needed to cover \(\mathcal{F}\). The seminorms we consider are \(|f|_{\mu,r} = \left(\int |f|^rd\mu\right)^{1/r}\), for any \(r \geq 1\) and any probability measure \(\mu\). Next, an envelope of \(\mathcal{F}\) is a measurable function \(F\) satisfying \(F(u) \geq \sup_{f\in \mathcal{F}} |f(u)|\). Let \(\mathcal{Q}\) denote the set of probability measures with finite support. Finally, \(P\) denotes the distribution of \(Y_{1,2}\).

As for iid data, the conditions we consider to ensure \(\eqref{eq1}\) and \(\eqref{eq2}\) are the following:

Condition \(CG(\mathcal{F})\): \(\forall \eta>0,\ \sup_{Q\in \mathcal{Q}}N\left(\eta|F|_{Q,1},\mathcal{F},|\cdot|_{Q,1}\right)<\infty\), with \(|F|_{P,1}<\infty\).

Condition \(D(\mathcal{F})\): \(\int_0^{+\infty}\sup_{Q\in \mathcal{Q}}\sqrt{\log N\left(\eta|F|_{Q,2},\mathcal{F},|\cdot|_{Q,2}\right)}\,d\eta<\infty\), with \(|F|_{P,2}<\infty\).

Remarkably, these results extend directly to jointly exchangeable and dissociated arrays:

If Condition \(CG(\mathcal{F})\) holds, then \(\eqref{eq1}\) holds.

If Condition \(D(\mathcal{F})\) holds, then \(\eqref{eq2}\) holds, where \(\mathbb{G}\) is a Gaussian process with covariance kernel \(K\) defined by \[K(f_1,f_2)=\text{Cov}\left(f_1(Y_{1,2})+f_1(Y_{2,1}),f_2(Y_{1,3})+f_2(Y_{3,1})\right).\]

Uniform LLNs and CLTs also hold under the standard conditions on bracketing numbers instead of the covering numbers mentioned above.

The take-away: the main asymptotic results used to establish the properties of nonlinear estimators with iid data also hold with jointly exchangeable and dissociated arrays.

Therefore, estimators that are asymptotically normal with iid data are also asymptotically normal with jointly exchangeable and dissociated arrays. The difference lies only in their rate of convergence and their asymptotic variance \(V\). If \(V\) is non-singular, inference can be based on a consistent estimator of \(V\), as explained above. But a simple bootstrap scheme can also be used.


Bootstrapping exchangeable arrays

The key message here is that even though the parameters of interest depend on the distribution of the "edges", one must bootstrap the vertices!
In particular, let \((1^{\ast}, 2^{\ast},\dots,n^{\ast})\) be an iid sample drawn uniformly from \(\{1,\dots,n\}\), and consider the bootstrap sample \((Y_{i^{\ast},j^{\ast}})_{i\neq j,\, i^{\ast}\neq j^{\ast}}\).

To illustrate the bootstrap scheme, imagine a 5-vertex network. If the bootstrap sample of vertices is \((1^{\ast},2^{\ast},3^{\ast},4^{\ast},5^{\ast})=(2,1,2,5,4)\), the corresponding bootstrapped network is:

Figure 1: \((1^{\ast},2^{\ast},3^{\ast},4^{\ast},5^{\ast})=(2,1,2,5,4)\)

The bootstrapped mean is \(\mathbb{P}_n^{\ast}(f)=\frac{1}{n(n-1)}\sum_{1\leq i,j\leq n}f(Y_{i^{\ast},j^{\ast}})\mathbb{1}_{\{i^{\ast}\neq j^{\ast}\}}\), while the bootstrapped empirical process is \(\mathbb{G}_n^{\ast}(f)=\sqrt{n}\left(\mathbb{P}_n^{\ast}(f)-\mathbb{P}_n(f)\right)\).

As with iid data, if Condition \(D(\mathcal{F})\) holds, then the above bootstrap scheme is consistent. In particular, conditional on \((Y_{i,j})_{i\neq j}\) and almost surely, \[\begin{align}\mathbb{G}_n^{\ast}\stackrel{d}{\longrightarrow}\mathbb{G},\end{align}\]
where \(\mathbb{G}\) is the same Gaussian process as above. This ensures the validity of the bootstrap in many nonlinear contexts.
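A minimal sketch of this vertex bootstrap in Python/NumPy (with \(f\) the identity; the function name and simulated data are ours): draw \((1^{\ast},\dots,n^{\ast})\), form \(Y_{i^{\ast},j^{\ast}}\), and average over pairs with \(i^{\ast}\neq j^{\ast}\).

```python
import numpy as np

def vertex_bootstrap_mean(y, rng):
    """One draw of P_n*(f), with f the identity, from an n x n dyadic array.

    Vertices are resampled iid uniformly from {0, ..., n-1}; pairs with
    i* = j* are dropped, as by the indicator in the formula for P_n*(f).
    """
    n = y.shape[0]
    star = rng.integers(n, size=n)           # bootstrap vertices (1*, ..., n*)
    yb = y[np.ix_(star, star)]               # Y_{i*, j*}
    keep = star[:, None] != star[None, :]    # indicator {i* != j*}
    return yb[keep].sum() / (n * (n - 1))

# Bootstrap standard error of the dyadic mean on simulated data
rng = np.random.default_rng(0)
n = 30
u = rng.normal(size=n)
y = u[:, None] + u[None, :] + rng.normal(size=(n, n))
boot = np.array([vertex_bootstrap_mean(y, rng) for _ in range(500)])
se_boot = boot.std()
```

The distribution of \(\sqrt{n}\left(\mathbb{P}_n^{\ast}(f)-\mathbb{P}_n(f)\right)\) across draws then approximates that of \(\sqrt{n}\left(\mathbb{P}_n(f)-\mathbb{E}(f(Y_{1,2}))\right)\).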


Aldous, D. J. 1983. Exchangeability and Related Topics. Springer.

Davezies, Laurent, Xavier D'Haultfœuille, and Yannick Guyonvarch. 2018. "Asymptotic Results Under Multiway Clustering." arXiv e-prints, arXiv:1807.07925.

———. 2021. "Empirical Process Results for Exchangeable Arrays." Annals of Statistics.

Eagleson, G. K., and N. C. Weber. 1978. "Limit Theorems for Weakly Exchangeable Arrays." Mathematical Proceedings of the Cambridge Philosophical Society.

Kallenberg, O. 2005. Probabilistic Symmetries and Invariance Principles. Springer.

Menzel, Konrad. 2018. "Bootstrap with Clustering in Two or More Dimensions." Working paper.

Silverman, B. W. 1976. "Limit Theorems for Dissociated Random Variables." Advances in Applied Probability.


