1. Introduction
The origin of free probability theory can be traced back to Voiculescu’s works around 1985, and one of the key discoveries is the so-called asymptotic freeness of independent random matrices with Gaussian entries [Reference Voiculescu21]. This phenomenon extends beyond the Gaussian models and applies to various other models as well. Amongst them are non-Gaussian Wigner matrices [Reference Dykema5], independent Haar unitary random matrices [Reference Voiculescu22], random permutation matrices [Reference Nica13], etc.
It is worth noting that all the results mentioned above assume independence between the random matrices, which results in the phenomenon of asymptotic freeness. A natural question arising from this perspective is whether there are fundamentally different routes to asymptotic freeness. A positive answer is provided by the partial transposition [Reference Mingo and Popa10], an operation that plays a crucial role in quantum information theory (QIT). Indeed, partial transposition is central to the study of entanglement of quantum states and quantum channels [Reference Choi2, Reference Horodecki, Horodecki and Horodecki6, Reference Peres15, Reference Woronowicz23], to computing transmission rates of information [Reference Peter16, Reference Smith and John19], to the PPT2 conjecture [Reference Chen, Yang and Tang1, Reference Christandl3, Reference Christandl, Müller-Hermes and Wolf4, Reference Kennedy, Manor and Paulsen7, Reference Rahaman, Jaques and Paulsen17], and so forth.
This paper focuses on partial transposes of Wishart random matrices, which arise naturally in QIT since normalized Wishart matrices are standard models for random quantum states [Reference Michael8, Reference Samuel18, Reference Sommers and Zyczkowski20, Reference Zyczkowski and Sommers24]. An important recent discovery is the asymptotic freeness between the partial transposes of a Wishart matrix in the bipartite situation [Reference Mingo and Popa10]. Let
$d_1,d_2$ and p be natural numbers, and let
$G_{d_1d_2,p}$ be a
$d_1d_2\times p$ random matrix whose entries are independent complex Gaussian random variables with mean 0 and variance 1. Then the Wishart matrix
$W_{d_1d_2,p}$ is given by
$W_{d_1d_2,p}=\frac{1}{d_1d_2}\,G_{d_1d_2,p}\,G_{d_1d_2,p}^{*}\in M_{d_1d_2}(\mathbb{C}).$
Let us denote by $T_d$, or simply by T, the transpose map
$A\mapsto A^t$ on
$M_d(\mathbb{C})$ if there is no possibility of confusion. Then the partial transposes of
$W_{d_1d_2,p}\in M_{d_1d_2}(\mathbb{C})\cong M_{d_1}(\mathbb{C})\otimes M_{d_2}(\mathbb{C})$ in the bipartite situation are given by

One of the main results of [Reference Mingo and Popa10] is that the family

is asymptotically free under the assumption
$\lim d_1=\infty=\lim d_2$ with
$\displaystyle \lim \frac{p}{d_1d_2}=c\in (0,\infty)$. From the QIT perspective, it is natural to consider a multipartite scenario of quantum communication. Indeed, the bipartite setting is the standard framework to model interactions between two parties, and it is standard to use a multi-fold tensor product to describe possible interactions between multiple parties. In the general n-partite situation, we have $2^n$ types of partial transposes of

given by

where
$\sigma=(\sigma_1,\sigma_2,\cdots,\sigma_n)$ is an arbitrary element of
$\left\{0,1\right\}^n$. This paper focuses on two research questions for these partial transposes
$W^{\sigma}_{d_1\cdots d_n,p}$. The first main question is as follows.
Question 1. Is the family
$\left\{W^{\sigma}_{d_1\cdots d_n,p}\right\}_{\sigma\in \left\{0,1\right\}^n}$ of partial transposes asymptotically free in the general n-partite situation assuming
$ \lim d_j=\infty$ for all
$j=1,2,\cdots,n$ with
$\lim \frac{p}{d_1\cdots d_n}=c\in (0,\infty)$? What about almost sure asymptotic freeness?
Note that a partial positive answer to Question 1 can be obtained from a recent paper [Reference Mingo and Popa11]. Indeed, [Reference Mingo and Popa11, Corollary 4.15] provides an asymptotically free family consisting of 2n partial transposes out of the $2^n$ choices. We establish the positive answer in full generality in Theorem 2.5, where we prove almost sure asymptotic freeness for the whole family of partial transposes
$\left\{W^{\sigma}_{d_1\cdots d_n,p}\right\}_{\sigma\in \left\{0,1\right\}^n}$. An important advantage of this shift to the multipartite setting is that the number of asymptotically free partial transposes
$W^{\sigma}_{d_1\cdots d_n,p}$ grows without bound, which makes it possible to discuss the following problem.
Question 2. If the partial transposes are asymptotically free, then is it possible to establish a natural analogue of the central limit theorem?
To do this, in Section 3, we consider an arbitrary sequence of finite length

and denote by
$\displaystyle \mu(\mathbf{d})= \min_{j\in [n]} d_{j}\geq 2$ and
$p=p(\mathbf{d})$. An important difference from Section 2 is that the length
$n=n(\mathbf{d})$ varies depending on d. The main object in Section 3 is the following average after centring

for certain subsets
$B_\textbf{d}\subseteq \left\{0,1\right\}^{n(\textbf{d})}$. In Theorem 3.6, we prove that
• if
$\lim |B_\textbf{d}|=\infty$ and
• if
${\displaystyle \lim }~ |B_\textbf{d}|^m \left ( \frac{1}{\mu(\bf d)}+\left | \frac{p(\textbf{d})}{d_1d_2\cdots d_n}-c\right | \right )=0$ for all natural numbers m,
then
$(s_\textbf{d})_\textbf{d}$ converges in moments to the semicircular element with mean 0 and variance c, i.e.

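For later use, recall that the moments of a semicircular element with mean 0 and variance c are given by the standard formula counting non-crossing pairings, so the convergence above amounts to
\begin{align*}
\lim_{\mathbf{d}} (\mathbb{E}\otimes \text{tr})(s_{\mathbf{d}}^{m})
 \;=\; \sum_{\pi \in NC_2(m)} c^{\sharp(\pi)}
 \;=\;\begin{cases}
 \dfrac{c^{m/2}}{\frac{m}{2}+1}\dbinom{m}{m/2}, & m \text{ even},\\[4pt]
 0, & m \text{ odd},
 \end{cases}
\end{align*}
where $NC_2(m)$ denotes the set of non-crossing pairings of $[m]$; this is essentially the form in which the statement is verified in Theorem 3.6.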
2. Asymptotic freeness of partial transposes
Let us begin by generalizing some notation and terminology from [Reference Mingo and Popa10] to the multipartite setting. Let p and
$d_1, d_2, \cdots, d_{n}$ be natural numbers with
$n\geq 2$, and let

be a
$d_1 d_2\cdots d_{n}\times p$ random matrix whose entries are independent complex Gaussian random variables with mean 0 and variance 1. We denote by
$[d]=\left\{1,2,\cdots,d\right\}$ for any natural number d, and
$[d_1d_2\cdots d_n]=[d_1]\times [d_2]\times \cdots \times [d_n]$ for simplicity. Using a canonical linear isomorphism

let us write
$G=\displaystyle \sum_{\textbf{i}\in [d_1d_2\cdots d_{n-1}]}e_\textbf{i}\otimes G_\textbf{i}$ and
$\displaystyle G_\textbf{i}=\sum_{x=1}^{d_{n}}\sum_{y=1}^p g^{(\bf i)}_{x,y}e_{x,y}\in M_{d_{n},p}(\mathbb{C})$. Then the Wishart matrix
$W=\frac{1}{d_1 \cdots d_{n}}GG^*\in M_{d_1d_2\cdots d_{n}}(\mathbb{C})$ is given by

Note that we should consider $2^n$ types of partial transpositions of W in the n-partite situation. For any
$\sigma=(\sigma_1, \sigma_2, \cdots, \sigma_{n}) \in \{0, 1 \}^{n}$, we define the associated partial transposition

where $T_i$ is the transpose operator on each
$M_{d_{i}}(\mathbb{C})$.
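To make the construction concrete, the following is a minimal numerical sketch (ours, not part of the argument) of W and of the partial transposes $W^{\sigma}$ for arbitrary $\sigma\in\{0,1\}^n$, using the normalization $W=\frac{1}{d_1\cdots d_n}GG^*$ from above; the helper names are hypothetical, and the partial transposition is implemented by swapping the row and column indices of the selected tensor factors.

```python
import numpy as np

def wishart(dims, p, rng):
    """W = (1/(d_1*...*d_n)) * G G^*, where G is a (d_1*...*d_n) x p matrix
    with i.i.d. standard complex Gaussian entries (mean 0, variance 1)."""
    D = int(np.prod(dims))
    G = (rng.standard_normal((D, p)) + 1j * rng.standard_normal((D, p))) / np.sqrt(2)
    return G @ G.conj().T / D

def partial_transpose(W, dims, sigma):
    """Apply the transpose on the i-th tensor factor whenever sigma[i] = 1."""
    n = len(dims)
    T = W.reshape(*dims, *dims)              # axes (i_1,...,i_n, j_1,...,j_n)
    axes = list(range(2 * n))
    for i, s in enumerate(sigma):
        if s == 1:                           # swap row and column index of factor i
            axes[i], axes[n + i] = axes[n + i], axes[i]
    D = int(np.prod(dims))
    return T.transpose(axes).reshape(D, D)

rng = np.random.default_rng(0)
dims, p = (2, 3, 4), 24
W = wishart(dims, p, rng)
W_sigma = {s: partial_transpose(W, dims, s) for s in
           [(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)]}
# Sanity check: applying the same partial transposition twice returns W.
assert np.allclose(partial_transpose(W_sigma[(0, 1, 1)], dims, (0, 1, 1)), W)
```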
To compute (non-commutative) joint moments of the partial transposes, let us consider a
$\mathbb{Z}_2$-valued m × n matrix
$\epsilon=(\epsilon_{ij})_{i\in [m], j \in [n]}$. Then ϵ determines m row sequences
$\epsilon_i=(\epsilon_{ij})_{j=1}^{n}\in \left\{0,1\right\}^{n}$, and their associated partial transposes are given by

2.1. Joint moments of partial transposes
In this section, we discuss the k-th moments
$\mathbb{E}(X_{\epsilon}^k)$ of the following random variable

Here,
$\text{tr}=\frac{1}{d}\text{Tr}$ is the normalized trace on
$M_d(\mathbb{C})$ and
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ is a
$\mathbb{Z}_2$-valued m × n matrix with
$\epsilon_i=(\epsilon_{ij})_{j=1}^n\in \left\{0,1\right\}^n$. It is not obvious at this point that $X_{\epsilon}$ is a real-valued random variable, but this will be explained in Appendix A.
Recall that [Reference Mingo and Popa10, Theorem 3.7] covers the case
$(n,k)=(2,1)$, and our focus is on the general cases of (n, k). For any natural numbers k and m, let us introduce some elementary permutations on

as follows. Recall that the following permutations


on
$[\pm m]$ were introduced in [Reference Mingo and Popa10] to prove asymptotic freeness of partial transposes in the bipartite situation. We define their natural extensions
$\Delta^{(k)}$ and
$\Gamma^{(k)}$ on
$[\pm km]\cong [k]\times [\pm m]$ as the product maps


Then, it is immediate to see that their cycle decompositions on
$[\pm km]$ are given by


While the m row sequences
$\epsilon_1,\cdots,\epsilon_m$ of
$(\epsilon_{ij})_{i\in [m],j\in [n]}$ were used to describe multiple partial transposes
$W^{\epsilon_1},W^{\epsilon_2},\cdots,W^{\epsilon_m}$, let us use the j-th column
$\epsilon_j'=(\epsilon_{ij})_{i\in [m]}\in \left\{0,1\right\}^m$ to define a permutation
$\mathcal{E}_j$ on
$[\pm m]$ by

Additionally,
$\mathcal{E}_j$ extends to a permutation

given by
$\mathcal{E}^{(k)}_j(s,s') = (s,\mathcal{E}_j(s'))$.
Now, we are ready to provide an explicit formula for the following k-th moments

generalizing [Reference Mingo and Popa10, Theorem 3.7] in full generality, under the following notation.
Notation 2.1 Note that any permutation
$\sigma\in S_m$ is associated with a partition π of
$[m]$ via its cycle decomposition. We denote by
$\sharp(\sigma)$ the number of blocks of π, and by
$\pi \vee \pi'$ the supremum of two partitions π and
$\pi'$. When we regard
$\sigma\in S_m$ as a permutation on
$[\pm m]$, the extension is taken to be the identity on
$[-m]=\left\{-m,\cdots,-1\right\}$.
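Notation 2.1 can be made concrete with a short computational sketch (ours, not from the paper): $\sharp(\sigma)$ is the number of cycles of σ, and the join $\pi\vee\pi'$ can be computed by a union–find pass over the blocks of both partitions.

```python
from itertools import chain

def cycles(perm):
    """Cycle decomposition of a permutation given as a dict {x: perm(x)}."""
    seen, cycs = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        cycs.append(cyc)
    return cycs

def join(partition1, partition2):
    """Supremum of two partitions (given as lists of blocks) via union-find."""
    parent = {x: x for x in chain.from_iterable(partition1 + partition2)}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for block in partition1 + partition2:
        block = sorted(block)
        for y in block[1:]:
            parent[find(y)] = find(block[0])
    blocks = {}
    for x in parent:
        blocks.setdefault(find(x), []).append(x)
    return list(blocks.values())

# sharp(sigma) for sigma = (1 2 3)(4) in S_4:
sigma = {1: 2, 2: 3, 3: 1, 4: 4}
print(len(cycles(sigma)))                                # 2 blocks
print(join([[1, 2], [3], [4]], [[1], [2, 3], [4]]))      # [[1, 2, 3], [4]]
# Regarding sigma as a permutation of [-m] u [m] that fixes [-m]
# simply adds m singleton blocks to the associated partition.
```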
Our proof of the following theorem is systematic but requires heavy notation, so we present it separately in Appendix A.
Theorem 2.2 Let
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ be a
$\mathbb{Z}_2$-valued m × n matrix with
$\epsilon_i=(\epsilon_{ij})_{j=1}^{n}\in \left\{0,1\right\}^{n}$ for all
$i\in [m]$, and let
$X_{\epsilon}=\text{tr}(W^{\epsilon_1}W^{\epsilon_2}\cdots W^{\epsilon_m})$. Then, for any natural number k, we have

where the exponent
$f_{k,j}(\epsilon,\sigma)$ is given by

for all
$\sigma\in S_{km}$ and
$j\in [n]$.
Recall that we have

for any pairings
$\pi_{1}, \pi_{2} \in \mathcal{P}_2(n)$ by [Reference Mingo and Popa9, Lemma 2], and both the permutations
$\mathcal{E}^{(k)}_{j} \Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \mathcal{E}^{(k)}_{j}$ and
$\sigma \Delta^{(k)} \sigma^{-1}$ are indeed pairings. Thus, our main focus from now on is to analyze



2.2. Almost sure asymptotic freeness in the multipartite setting
To establish the almost sure asymptotic freeness, our main technical question is how to compute the exponents
$f_{k,j}(\epsilon,\sigma)$. Let us write
$f_j=f_{1,j}$ for simplicity if there is no possibility of confusion. Recall that the case
$(n,k)=(2,1)$ was studied in [Reference Mingo and Popa10] for the bipartite situation. To consider the general cases of (n, k), it is necessary to develop a new framework to study the general situation
$k\geq 2$.
Let us consider the following family of sets

for general k, where
$A_j= [jm]\setminus [(j-1)m]=\left\{(j-1)m+1,\cdots, jm\right\}$. We also denote by
$\langle \mathcal{A}_k \rangle := \{\cup_{A \in \mathcal{S}}A : \mathcal{S} \subseteq \mathcal{A}_k \}$. Then the main theorem of this section is stated as follows.
Theorem 2.3 Let
$\sigma \in S_{km}$ and let
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ be a
$\mathbb{Z}_2$-valued m × n matrix.
(1) Assume that
$k\geq 2$ and there exist non-empty disjoint subsets
$C_1\in \langle\mathcal{A}_k\rangle$ and
$C_2\in \langle\mathcal{A}_k\rangle$ such that
$\sigma(C_1)=C_1$,
$\sigma(C_2)=C_2$ and
$[km]=C_1\cup C_2$. Let
$|C_1|=k_1m$ and
$|C_2|=k_2m$ and consider bijective increasing functions
$c_1 : [k_1m] \rightarrow C_1$ and
$c_2 : [k_2m] \rightarrow C_2$. Then we have
(2.24)\begin{align} &\sharp\left (\mathcal{E}^{(k)}_{j} \Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \mathcal{E}^{(k)}_{j} \vee \sigma \Delta^{(k)} \sigma^{-1}\right ) \end{align}
(2.25)\begin{align} &=\sum_{i=1,2}\sharp\left (\mathcal{E}^{(k_i)}_{j} \Gamma^{(k_i)} \Delta^{(k_i)} (\Gamma^{(k_i)})^{-1} \mathcal{E}^{(k_i)}_{j} \vee (c_i ^{-1} \sigma c_i) \Delta^{(k_i)} (c_i ^{-1} \sigma c_i)^{-1}\right ) \end{align}
for all
$j\in [n]$. In particular, we have
(2.26)\begin{equation} f_{k,j}(\epsilon,\sigma)=f_{k_1,j}(\epsilon,c_1^{-1}\sigma c_1)+f_{k_2,j}(\epsilon,c_2^{-1}\sigma c_2). \end{equation}
(2) Assume that
$k\geq 2$ and there are no non-empty disjoint subsets
$C_1\in \langle \mathcal{A}_k\rangle$ and
$C_2\in \langle\mathcal{A}_k\rangle$ such that
$\sigma(C_1)=C_1$,
$\sigma(C_2)=C_2$ and
$[km]=C_1\cup C_2$. Then we have
(2.27)\begin{equation} f_{k,j}(\epsilon,\sigma)\leq 2-2k\leq -2 \end{equation}
for all
$j\in [n]$.
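To illustrate the dichotomy in the simplest case k = 2 (with $\mathcal{A}_2=\{A_1,A_2\}$, as the notation suggests): the only non-empty disjoint sets $C_1,C_2\in\langle\mathcal{A}_2\rangle$ with $C_1\cup C_2=[2m]$ are $A_1=[m]$ and $A_2=[2m]\setminus[m]$, so part (1) applies exactly to those $\sigma\in S_{2m}$ with $\sigma([m])=[m]$, and part (2) to all remaining σ; this is precisely the splitting used in the proof of Theorem 2.5 below.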
A proof of Theorem 2.3 will be presented in Subsection 2.3 below. In this subsection, let us focus on how this result is applied to prove almost sure asymptotic freeness of the partial transposes
$\left\{W^{\sigma}\right\}_{\sigma\in \left\{0,1\right\}^n}$. To proceed, let us recall an important lemma from [Reference Mingo and Popa10]. For a function
$x:[m] \rightarrow Y$, let us denote by $\text{ker}(x)$ the partition of $[m]$ whose blocks are the non-empty preimages $x^{-1}(y)$, $y\in Y$.
In particular,
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ can be understood as a function
$i\in [m]\mapsto \epsilon_i\in \left\{0,1\right\}^n$, so
$\text{ker}(\epsilon)$ is a partition of
$[m]$.
Lemma 2.4. Let
$\sigma \in S_{m}$ and let
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ be a
$\mathbb{Z}_2$-valued m × n matrix with
$\epsilon_j'=(\epsilon_{ij})_{i\in [m]}\in \left\{0,1\right\}^m$.
(1) Then
$f_j(\epsilon,\sigma)=f_{1,j}(\epsilon,\sigma) \lt 0$ holds unless
$\epsilon_j'$ is constant on the cycles of σ.
(2) If
$\epsilon_j'$ is constant on the cycles of σ, then
$f_j(\epsilon,\sigma) \le 0$, with equality precisely when the associated partition of σ is non-crossing.
In particular, if
$f_j(\epsilon,\sigma)\equiv 0$ for all
$j\in [n]$ and if π is the associated partition of σ, then π is non-crossing and
$\text{ker}(\epsilon)\geq \pi$ holds, i.e. each block of π is contained in a block of
$\text{ker}(\epsilon)$.
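Both conditions appearing in Lemma 2.4 are straightforward to test computationally; here is a small sketch (ours, with hypothetical helper names) that checks whether a partition is non-crossing and whether a column $\epsilon_j'$ is constant on the cycles of a permutation.

```python
from itertools import combinations

def is_noncrossing(blocks):
    """A partition is non-crossing iff no a < b < c < d has a, c in one block
    and b, d in a different block."""
    for B1, B2 in combinations(blocks, 2):
        for a, c in combinations(sorted(B1), 2):
            for b, d in combinations(sorted(B2), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def constant_on_cycles(eps_col, perm):
    """Check that i -> eps_col[i] is constant on every cycle of the permutation
    perm (a dict {i: perm(i)} on [m]); this is the hypothesis of Lemma 2.4 (2)."""
    seen = set()
    for start in perm:
        if start in seen:
            continue
        x, value = start, eps_col[start]
        while x not in seen:
            seen.add(x)
            if eps_col[x] != value:
                return False
            x = perm[x]
    return True

# {1,3} and {2,4} cross, while {1,2} and {3,4} do not:
print(is_noncrossing([{1, 3}, {2, 4}]))   # False
print(is_noncrossing([{1, 2}, {3, 4}]))   # True
```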
Then, applying Theorem 2.3 with Lemma 2.4, we reach the following almost sure asymptotic freeness for the general cases of (n, k). Let us denote by
$\displaystyle \mu(d_1,\cdots,d_n)=\min_{1\leq j\leq n}d_j\geq 2$.
Theorem 2.5 Let n be a fixed natural number. If
$\displaystyle \lim \mu(d_1,\cdots,d_n) =\infty$ and
$\displaystyle \lim \frac{p}{d_{1} \cdots d_{n}} = c\in (0,\infty)$, then the family
$\displaystyle \{W^\sigma \}_{\sigma \in \{0, 1 \}^{n}}$ of the partial transposes is almost surely asymptotically free.
Proof. As the first step, let us prove asymptotic freeness by showing that all the mixed cumulants vanish as in [Reference Mingo and Popa10]. Recall that the joint moment
$(\mathbb{E}\otimes \text{tr})(W^{\epsilon_1}W^{\epsilon_2}\cdots W^{\epsilon_m})$ is given by

for any
$\mathbb{Z}_2$-valued m × n matrix
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ by Theorem 2.2. By Lemma 2.4, we have
$\displaystyle \prod_{j=1}^n d_j^{f_{j}(\epsilon,\sigma)}\leq \mu(\textbf{d})^{\sum_{j=1}^n f_j(\epsilon,\sigma)}$ and

where NC(m) is the set of all non-crossing partitions of
$[m]$ and π is the associated partition of
$\sigma\in S_m$. For each partition π of
$[m]$, let us denote by
$S(m,\pi)$ the set of all permutations
$\sigma\in S_m$ whose associated partition is π. Then (2.30) implies

Let
$V_1,V_2,\cdots,V_r$ be the disjoint block decomposition of
$\pi\in NC(m)$, and write

for any subset
$T=\left\{t_1,t_2,\cdots,t_l\right\}\subseteq [m]$ with
$t_1 \lt t_2 \lt \cdots \lt t_l$. Then (2.31) can be written as

A crucial step here is to note that

coincides with the free cumulant

Thus, the above (2.32) tells us that all mixed cumulants vanish, and this fact allows us to conclude that the given family
$\left\{W^{\sigma}\right\}$ is asymptotically free by [Reference Mingo and Speicher12, Theorem 16] or [Reference Nica and Speicher14, Theorem 11.20].
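For the reader's convenience, the moment–cumulant relation behind this step is the standard one from free probability: for elements $a_1,\ldots,a_m$ of a non-commutative probability space $(\mathcal{A},\varphi)$,
\begin{align*}
\varphi(a_1 a_2\cdots a_m)\;=\;\sum_{\pi\in NC(m)}\ \prod_{V=\{v_1<v_2<\cdots<v_l\}\in\pi}\kappa_l\big(a_{v_1},a_{v_2},\ldots,a_{v_l}\big),
\end{align*}
where the $\kappa_l$ are the free cumulants, and freeness of a family is equivalent to the vanishing of all mixed free cumulants [Reference Nica and Speicher14].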
Now, our second step is to prove

to establish almost sure asymptotic freeness. Note that the following identity

is a direct consequence of Theorem 2.3 (1). Indeed, for any
$\sigma \in S_{2m}$ such that
$\sigma([m])=[m]$ and
$j\in[n]$, we have

and
$f_{2,j}(\epsilon,\sigma)=f_{1,j}(\epsilon,c_1^{-1}\sigma c_1)+f_{1,j}(\epsilon,c_2^{-1}\sigma c_2)$ by Theorem 2.3 (1). Here,
$c_{1}:[m]\rightarrow[m]$ is the identity map and
$c_{2}:[m]\rightarrow[2m]\backslash[m]$ is given by
$c_{2}(i)=m+i$. Furthermore, since
$\{\sigma \in S_{2m}:\sigma([m])=[m] \}$ is naturally identified with
$S_{m} \times S_{m}$ via
$\sigma \mapsto (c_{1}^{-1} \sigma c_{1}, c_{2}^{-1} \sigma c_{2})$, we can see that



Then Theorem 2.2 and Theorem 2.3 (2) tell us that

with
$f_{2,j}(\epsilon,\sigma)\leq -2$ for all
$j\in [n]$. Finally, since
$\frac{p}{d_1d_2\cdots d_n}$ has a uniform upper bound M > 1 by the assumption, we can conclude that

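For completeness, we recall the standard mechanism by which such a variance estimate yields almost sure convergence: assuming, as usual, that the dimensions run through a sequence, if random variables $Y_N$ satisfy $\sum_N \mathrm{Var}(Y_N)<\infty$, then for every $\varepsilon>0$,
\begin{align*}
\sum_{N}\mathbb{P}\big(|Y_N-\mathbb{E}(Y_N)|>\varepsilon\big)\;\leq\;\frac{1}{\varepsilon^2}\sum_{N}\mathrm{Var}(Y_N)\;<\;\infty,
\end{align*}
so $Y_N-\mathbb{E}(Y_N)\rightarrow 0$ almost surely by the Borel–Cantelli lemma; this is the argument applied here to the random variables $X_{\epsilon}$.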
2.3. Proof of Theorem 2.3
Let
$\sigma\in S_{km}$ and let
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ be a
$\mathbb{Z}_2$-valued m × n matrix. Let us begin with a proof of the first part of Theorem 2.3.
Theorem 2.3 (1) Let
$\sigma \in S_{km}$ with
$k\geq 2$ and suppose that there exist non-empty disjoint subsets
$C_1\in \langle\mathcal{A}_k\rangle$ and
$C_2\in \langle\mathcal{A}_k\rangle$ such that
$\sigma(C_1)=C_1$,
$\sigma(C_2)=C_2$ and
$[km]=C_1\cup C_2$. Let
$|C_1|=k_1m$ and
$|C_2|=k_2m$ and consider bijective increasing functions
$c_1 : [k_1m] \rightarrow C_1$ and
$c_2 : [k_2m] \rightarrow C_2$. Then we have


for all
$j\in [n]$. In particular, we have

for all
$j\in [n]$.
Proof. Note that we have


for all
$j\in [n]$ thanks to (2.19), and the given condition
$\sigma(C_1)=C_1$ and
$\sigma(C_2)=C_2$ implies

for both cases i = 1 and i = 2. Thus, we reach the following conclusion




Additionally, the last conclusion is immediate since every cycle of σ is contained in either $C_1$ or $C_2$, so that
$\sharp(\sigma)=\sharp(c_1^{-1}\sigma c_1)+\sharp(c_2^{-1}\sigma c_2)$, and since
$k(m+1)=k_1(m+1)+k_2(m+1)$.
From now on, let us suppose that
$k\geq 2$ and there do not exist non-empty disjoint
$C_1\in \langle \mathcal{A}_k\rangle$ and
$C_2\in \langle\mathcal{A}_k\rangle$ such that
$\sigma(C_1)=C_1$,
$\sigma(C_2)=C_2$ and
$[km]=C_1\cup C_2$. In this case, we can construct a sequence of elements
$(x_i)_{i\in [k-1]}$ and a bijective function
$\tau:[k]\rightarrow [k]$ such that
•
$x_i\in A_{\tau(1)}\cup \cdots \cup A_{\tau(i)}$ for all
$i \in [k-1]$,
• $\sigma(x_{i})\in A_{\tau(i+1)}$ for all $i \in [k-1]$.
Indeed, if at some step $i\in [k-1]$ no element of $A_{\tau(1)}\cup \cdots \cup A_{\tau(i)}$ were mapped by σ outside of this union, then this union and its complement would be non-empty disjoint σ-invariant elements of $\langle\mathcal{A}_k\rangle$ whose union is $[km]$, contradicting the assumption.
For two disjoint subsets S and T of
$[\pm km]$ and a bijective function ϕ on
$[\pm km]$, let us write
$S\sim_{\phi}T$ if there exists an element
$x\in S$ such that
$\phi(x)\in T$ or an element
$y\in T$ such that
$\phi(y)\in S$. Note that
$\sim_{\phi}$ is a symmetric relation.
Lemma 2.6. With the above notation, there exist subsets
$V_{j,1},V_{j,2},\cdots,V_{j,k}$ of
$[\pm km]$ such that
•
$V_{j,i}$ is one of
$A_{\tau(i)}$ and
$-A_{\tau(i)}$ for all
$i\in [k]$,
•
$\displaystyle \bigcup_{t\in [i]} V_{j,t}\sim V_{j,i+1}$ for all
$i\in [k-1]$ by
$\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}$.
Proof. Let us use the above sequence
$x_1,x_2,\cdots,x_{k-1}$ to construct the subsets
$V_{j,1},V_{j,2},\cdots,V_{j,k}$. Let us start with
$V_{j,1}=A_{\tau(1)}$. Then, the following table of direct calculations tells us how to decide
$V_{j,i+1}$ from
$V_{j,1}, \cdots, V_{j,i}$.

Indeed, if
$x_{i}\in A_{\tau(i)}$ and
$V_{j,i}=A_{\tau(i)}$ (resp.
$V_{j,i}=-A_{\tau(i)}$), then we take
$V_{j,i+1}=A_{\tau(i+1)}$ (resp.
$V_{j,i+1}=-A_{\tau(i+1)}$) in the first or the fourth cases and take
$V_{j,i+1}=-A_{\tau(i+1)}$ (resp.
$V_{j,i+1}=A_{\tau(i+1)}$) in the second or the third cases.
Under the notations above, let us denote by
$W_{j,i}=-V_{j,i}$,
$V_j=\bigcup_{i\in [k]} V_{j,i}$ and
$W_j=\bigcup_{i\in [k]} W_{j,i}$. Then we are ready to prove the second part of Theorem 2.3. Our strategy is to adapt the proof of [Reference Mingo and Popa10, Lemma 4.3] and to divide the general situation into the case where
$V_j\sim W_j$ and the other case where
$V_j\nsim W_j$ by
$\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}$.
Theorem 2.3 (2). Suppose that
$k \ge 2$ and there do not exist non-empty disjoint
$C_1\in \langle \mathcal{A}_k\rangle$ and
$C_2\in \langle\mathcal{A}_k\rangle$ such that
$\sigma(C_1)=C_1$,
$\sigma(C_2)=C_2$ and
$[km]=C_1\cup C_2$. Then we have

for any
$\mathbb{Z}_2$-valued m × n matrices
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$.
Proof. (Case 1:
$V_j\sim W_j$ by
$\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}$) In this case, the subgroup generated by
$\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}$ and
$\Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \Delta^{(k)}$ acts on
$[\pm km]$ transitively by Lemma 2.6 and the given assumption, so there exists a non-negative integer g satisfying


See [Reference Mingo and Speicher12] for more details about the existence of g, and the second equality comes from direct calculations. Then we have






where we used
$\mathcal{E}^{(k)}_j \Delta^{(k)}=\Delta^{(k)} \mathcal{E}^{(k)}_j$ at the third equality. Thus, we obtain


(Case 2:
$V_j\nsim W_j$ by
$\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}$) In this case, we have

and the subgroup generated by
$\tau_{1,V_j}=\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}\Bigr|_{V_j}$ and
$\tau_{2,V_j}=\Gamma^{(k)}\Delta^{(k)}(\Gamma^{(k)})^{-1}\Delta^{(k)}\Bigr|_{V_j}$ acts on $V_j$ transitively by Lemma 2.6. As in the case
$V_j\sim W_j$, there exists a non-negative integer g satisfying


implying

On the other hand, we define
$\tau_{1,W_{j}}=\mathcal{E}^{(k)}_{j} \Delta^{(k)} \sigma \Delta^{(k)} \sigma^{-1} \mathcal{E}^{(k)}_{j}\Bigr|_{W_j}$ and
$\tau_{2,W_j}=\Gamma^{(k)}\Delta^{(k)}(\Gamma^{(k)})^{-1}\Delta^{(k)}\Bigr|_{W_j}$ similarly. Then there exists a non-negative integer
$g'$ satisfying

Thus, we obtain






which leads us to the following conclusion

3. A central limit theorem for partial transposes
In this section, let us consider an arbitrary sequence of finite length

and denote by
$\displaystyle \mu(\mathbf{d})= \min_{j\in [n]} d_{j}\geq 2$ and by
$n=n(\mathbf{d})$ the length of d. A key distinction from Section 2 is that
$n=n(\textbf{d})$ varies depending on d. Let
$p=p(\mathbf{d})$ be a natural number-valued function of d and consider the multipartite Wishart matrices
$W_\textbf{d}=W_{d_{1} \cdots d_{n},p}$. Recall that there exist partial transposes

In this section, the following product

will play a crucial role.
To establish a central limit theorem, for each d, we take a subset
$B_{\mathbf{d}}\subseteq \left\{0,1\right\}^{n({\mathbf{d}})}$ satisfying the following conditions
•
$\displaystyle \lim |B_\textbf{d}|^m \left ( \frac{1}{\mu(\bf d)}+\left | \frac{p(\textbf{d})}{d_1d_2\cdots d_n}-c\right | \right )=0$ for all
$m\in \mathbb{N}$,
•
$\lim |B_\textbf{d}|=\infty$.
Note that the above conditions imply

so we obtain
$\displaystyle \lim \frac{p(\bf d)}{d_1d_2\cdots d_n} =c$ and
$\displaystyle \lim \mu(\textbf{d})=\infty$.
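To see that these conditions can be met, here is one illustrative choice (ours, not taken from the paper): take $d_1=\cdots=d_n=d$ with $n=n(\mathbf{d})=d$ and $p(\mathbf{d})=\lceil c\, d^{\,d}\rceil$, and let $B_{\mathbf{d}}\subseteq\{0,1\}^{d}$ be any subset with $|B_{\mathbf{d}}|=\lfloor \log d\rfloor$. Then $\mu(\mathbf{d})=d$, $\left|\frac{p(\mathbf{d})}{d_1\cdots d_n}-c\right|\leq d^{-d}$, and
\begin{align*}
|B_{\mathbf{d}}|^m\left(\frac{1}{\mu(\mathbf{d})}+\left|\frac{p(\mathbf{d})}{d_1\cdots d_n}-c\right|\right)\;\leq\;\frac{2(\log d)^m}{d}\;\longrightarrow\;0
\end{align*}
for every fixed m, while $|B_{\mathbf{d}}|\rightarrow\infty$.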
Now, we consider a family
$\left\{a_{\textbf{d},\epsilon}\right\}_{\epsilon \in B_{\textbf{d}}}$ consisting of the centred partial transposes

The main result of this section is the convergence in moments of the following random matrices

to the semicircular element of mean 0 and variance c. To compute the limit of the m-th moments
$\displaystyle \lim(\mathbb{E}\otimes \text{tr})(s_{\mathbf{d}}^m)$, for any natural number m and an arbitrary function
$x:[m]\rightarrow B_{\mathbf{d}}$, let us denote by

Our basic strategy is to recover the arguments in [Reference Mingo and Speicher12, Section 2.1] with a detailed analysis of asymptotic bounds. Let us begin with the following lemma.
Lemma 3.1. For arbitrary functions
$\epsilon:[m]\rightarrow B_{\mathbf{d}}$ and
$\epsilon':[m]\rightarrow B_{\mathbf{d}}$ such that
$\ker(\epsilon)=\ker(\epsilon')$, we have


Proof. Let us begin with the following formula

and write
$l=|E|$ for simplicity. Consider
$\epsilon|_{E}$ as a function from
$[l] = [|E|]$ into
$B_{\mathbf{d}}$. Then, since


for each
$E \subseteq [m]$ by Theorem 2.2, we have


Note that the given condition
$\ker(\epsilon)=\ker(\epsilon')$ implies

Indeed, any restriction
$\epsilon|_E$ can be considered a function which is of the form

for all
$i\in [l]$ where
$E_z=(\epsilon|_E)^{-1}(z)=\left\{i\in [l]: \epsilon|_{E}(i)=z\right\}$. Then the given assumption
$D(\textbf{d},\epsilon|_{E},\tau)=1$ implies that the partition of τ is non-crossing by Lemma 2.4 (2), and
$\epsilon|_E$ is constant on each cycle of τ, i.e. each $E_z$ is a union of cycles of τ by Lemma 2.4 (1). Thus, we can write

where
$\displaystyle E_z=\bigcup_{p}\mathfrak{c}_{z,p}$ and
$\mathfrak{c}_{z,p}$’s are the disjoint cycles of τ.
On the other hand,
$\epsilon'|_E$ is also written as

where
$E_w'=(\epsilon'|_E)^{-1}(w)=\left\{i\in [l]: \epsilon'|_{E}(i)=w\right\}$, and the given condition
$\text{ker}(\epsilon)=\text{ker}(\epsilon')$ forces
$E_w'$ to be equal to one of the $E_z$, which is a union of cycles
$\mathfrak{c}_{z,p}$ of τ. Hence,
$\epsilon'|_E$ is a linear combination of characteristic functions on cycles of τ. In other words,
$\epsilon'|_E$ is constant on each cycle of τ. This leads us to conclude that
$D(\textbf{d},\epsilon'|_{E},\tau)=1$ by Lemma 2.4 since the partition of τ is non-crossing.
Let us return to (3.13). Using the above conclusion

we obtain


Furthermore, since




we can conclude that




The above Lemma 3.1 allows us to rely only on
$\text{ker}(\epsilon)$, whose structure falls into the following four distinct cases (a small classification sketch follows the list below):
• (Case A) $\text{ker}(\epsilon)$ contains a singleton set.
• (Case B) $\text{ker}(\epsilon)$ does not contain a singleton set, and $\text{ker}(\epsilon)$ is not a pairing.
• (Case C)
$\text{ker}(\epsilon)$ is a pairing and there exists
$i\in [m-1]$ such that
$\left\{i,i+1\right\}\in \text{ker}(\epsilon)$
• (Case D)
$\text{ker}(\epsilon)$ is a pairing, and
$\epsilon(i)\neq \epsilon(i+1)$ for all
$i\in [m-1]$.
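The classification can be phrased algorithmically; the following small sketch (ours, with ϵ given as a tuple of length m of hashable values) classifies ϵ into the four cases above.

```python
from collections import defaultdict

def classify(eps):
    """Classify eps = (eps(1), ..., eps(m)) into Cases A-D of Section 3."""
    m = len(eps)
    blocks = defaultdict(list)
    for i, value in enumerate(eps, start=1):
        blocks[value].append(i)                 # ker(eps): level sets of eps
    sizes = [len(b) for b in blocks.values()]
    if 1 in sizes:
        return "A"                              # ker(eps) contains a singleton
    if any(size != 2 for size in sizes):
        return "B"                              # no singleton, but not a pairing
    if any(eps[i] == eps[i + 1] for i in range(m - 1)):
        return "C"                              # a pairing with adjacent equal values
    return "D"                                  # a pairing with eps(i) != eps(i+1)

print(classify(("x", "y", "y")))                # A  (position 1 forms a singleton block)
print(classify(("x", "x", "y", "y")))           # C
print(classify(("x", "y", "x", "y")))           # D
```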
Our strategy is to prove Lemma 3.3, Lemma 3.4 and Lemma 3.5 to cover (Case A), (Case C) and (Case D), respectively, and the following technical Lemma 3.2 is an important ingredient in establishing Lemma 3.3 and Lemma 3.4.
Lemma 3.2. (1) For any
$\tau\in S_l$, let us denote by
$\tau_1=\tau\circ (l+1)\in S_{l+1}$. Then we have
(3.29)\begin{equation} \sharp(\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E} \vee \tau \Delta \tau^{-1}) = \sharp(\mathcal{E}^{'} \Gamma^{'} \Delta^{'} (\Gamma^{'})^{-1} \mathcal{E}^{'} \vee \tau_1 \Delta^{'} \tau_1^{-1}). \end{equation}
Here,
$\mathcal{E}$, Δ, Γ are acting on
$[\pm l]$, whereas
$\mathcal{E}'$,
$\Delta'$,
$\Gamma'$ are the analogous permutations on
$[\pm (l+1)]$.
(2) For any
$\tau\in S_l$, let us denote by
$\tau_2=\tau\circ (l+1,l+2)\in S_{l+2}$. Then we have
(3.30)\begin{equation} \sharp(\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E} \vee \tau \Delta \tau^{-1})+1 = \sharp(\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime} \vee \tau_2 \Delta^{\prime\prime} \tau_2^{-1}) \end{equation}
if
$\text{sgn}(\mathcal{E}^{\prime\prime}(l+1))=\text{sgn}(\mathcal{E}^{\prime\prime}(l+2))$. Here,
$\mathcal{E}^{\prime\prime},\Delta^{\prime\prime},\Gamma^{\prime\prime}$ are the analogous permutations on
$[\pm (l+2)]$.
Proof. (1) First of all, it is straightforward to check the following facts:
(A)
$\tau_1 \Delta' \tau_1^{-1}=\tau \Delta \tau^{-1} \circ (-(l+1),l+1)$,
(B)
$(-\mathcal{E}(l),\mathcal{E}(1))$ is one of the disjoint cycles in
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}$,
(C)
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}=\mathcal{E}^{'} \Gamma^{'} \Delta^{'} (\Gamma^{'})^{-1} \mathcal{E}^{'}$ on
$[\pm l]\setminus \left\{-\mathcal{E}(l),\mathcal{E}(1)\right\}$,
(D)
$(-\mathcal{E}'(l+1),\mathcal{E}(1))$ and
$(\mathcal{E}'(l+1),-\mathcal{E}(l))$ are disjoint cycles of
$\mathcal{E}' \Gamma' \Delta' (\Gamma')^{-1} \mathcal{E}'$. In particular, we have
(3.31)\begin{equation} \mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}= \mathcal{E}' \Gamma' \Delta' (\Gamma')^{-1} \mathcal{E}'\circ \tau_1 \Delta' \tau_1^{-1} \circ \mathcal{E}' \Gamma' \Delta' (\Gamma')^{-1} \mathcal{E}'. \end{equation}
on
$\left\{-\mathcal{E}(l),\mathcal{E}(1)\right\}$.
Let us suppose that
$\left\{B_1,B_2,\cdots,B_N\right\}$ is the disjoint decomposition of blocks of
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E} \vee \tau \Delta \tau^{-1}$, and we may assume that

since (B) implies
$-\mathcal{E}(l)\sim \mathcal{E}(1)$ by
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}$. Here, we may assume that T is disjoint from
$\left\{\mathcal{E}(1),-\mathcal{E}(l)\right\}$.
On the other hand, we now claim that

is the disjoint decomposition of blocks of
$\mathcal{E}^{'} \Gamma^{'} \Delta^{'} (\Gamma^{'})^{-1} \mathcal{E}^{'} \vee \tau_1 \Delta^{'} \tau_1^{-1}$. Indeed, (A) and (C) explain why
$B_2,\cdots,B_N$ are the disjoint blocks, so the only remaining part is to prove that

is a disjoint block of
$\mathcal{E}^{'} \Gamma^{'} \Delta^{'} (\Gamma^{'})^{-1} \mathcal{E}^{'} \vee \tau_1 \Delta^{'} \tau_1^{-1}$. Firstly, let us prove that any elements
$x,x'$ in T are connected by
$\mathcal{E}^{'} \Gamma^{'} \Delta^{'} (\Gamma^{'})^{-1} \mathcal{E}^{'}$ and
$\tau_1 \Delta^{'} \tau_1^{-1}$. Our assumption provides a sequence
$(x_i)_{i=0}^t$ such that
$x_0=x\in T$,
$x_t=x'\in T$ and
$x_{i}=(\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E})(x_{i-1})$ or
$x_{i}=(\tau \Delta \tau^{-1})(x_{i-1})$ for each
$i\in [t]$. If
$(x_i)_{i=0}^t\subseteq T$, then all the actions of
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}$ and
$\tau \Delta \tau^{-1}$ coincide with the actions of
$\mathcal{E}' \Gamma' \Delta' (\Gamma')^{-1} \mathcal{E}'$ and
$\tau_1 \Delta' \tau_1^{-1}$ by (A) and (C), so the conclusion follows immediately. Now, if we suppose that
$x_{i}=(\tau \Delta \tau^{-1})(x_{i-1})\in \left\{\mathcal{E}(1),-\mathcal{E}(l)\right\}$ at the i-th step, then we may assume that the next two elements are given by

Furthermore, the action
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}$ at the
$(i+1)$-th step can be replaced by

as noted in (D), and the action
$\tau \Delta \tau^{-1}$ coincides with
$\tau_1 \Delta' \tau_1^{-1}$ by (A). Thus, we can conclude that
$x_0=x$ and
$x_t=x'$ are connected by the pairings
$\mathcal{E}' \Gamma' \Delta' (\Gamma')^{-1} \mathcal{E}'$ and
$\tau_1 \Delta' \tau_1^{-1}$. For example, if
$x_i=-\mathcal{E}(l)$, then the original sequence
$\cdots, x_{i-1}, x_i, x_{i+1}, x_{i+2},\cdots$ corresponds to the blue-green-blue paths, and the green path from $x_i$ to
$x_{i+1}$ is replaced by the three red paths in the following figure:

Furthermore, (A) and (D) tell us that all elements of
$\left\{\mathcal{E}(1),-\mathcal{E}(l)\right\}\cup \left\{\pm \mathcal{E}'(l+1) \right\}$ are connected by
$\mathcal{E}^{'} \Gamma^{'} \Delta^{'} (\Gamma^{'})^{-1} \mathcal{E}^{'}$ and
$\tau_1 \Delta^{'} \tau_1^{-1}$. Lastly, if we assume there is no element of T connected to
$\left\{\mathcal{E}(1),-\mathcal{E}(l)\right\}\cup \left\{\pm \mathcal{E}'(l+1) \right\}$, then it implies that T is one of the disjoint blocks of
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E} \vee \tau \Delta \tau^{-1}$, which contradicts the fact that T is a strict subset of $B_1$.
(2) In this case, it is straightforward to check the following facts
(A)
$\tau_2 \Delta^{\prime\prime} \tau_2^{-1} = \tau \Delta \tau^{-1} \circ (-(l+2),l+1)\circ (-(l+1),l+2)$,
(B)
$(-\mathcal{E}(l),\mathcal{E}(1))$ is one of the disjoint cycles in
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}$,
(C)
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}=\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime}$ on
$[\pm l]\setminus \left\{-\mathcal{E}(l),\mathcal{E}(1)\right\}$,
(D)
$(\mathcal{E}(1),-\mathcal{E}^{\prime\prime}(l+2))$,
$(\mathcal{E}^{\prime\prime}(l+1),-\mathcal{E}(l))$ are cycles of
$\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime}$, and
$(-\mathcal{E}^{\prime\prime}(l+2),\mathcal{E}^{\prime\prime}(l+1))$ is a cycle of
$\tau_2\Delta^{\prime\prime}\tau_2^{-1}$. In particular, we have
(3.37)\begin{equation} \mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}=\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime} \circ \tau_2 \Delta^{\prime\prime} \tau_2^{-1}\circ \mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime} \end{equation}
on
$\left\{\mathcal{E}(1),-\mathcal{E}(l)\right\}$.
As in the proof of (1), let us suppose that
$\left\{B_1,B_2,\cdots,B_N\right\}$ is the disjoint block decomposition of
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E} \vee \tau \Delta \tau^{-1}$, and we may assume that

and T is disjoint from
$\left\{\mathcal{E}(1),-\mathcal{E}(l)\right\}$ since (B) implies
$-\mathcal{E}(l)\sim \mathcal{E}(1)$ by
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E}$.
We now claim that there exist precisely N + 1 disjoint blocks of
$\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime} \vee \tau_2 \Delta^{\prime\prime} \tau_2^{-1}$. Indeed, (A) and (C) imply that
$B_2,B_3,\cdots,B_N$ are N − 1 disjoint blocks and it is immediate to check that
$\left (\mathcal{E}^{\prime\prime}(l+2),-\mathcal{E}^{\prime\prime}(l+1) \right )$ is a cycle of both
$\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime}$ and
$\tau_2 \Delta^{\prime\prime} \tau_2^{-1}$. Thus, the only remaining part is to prove that all elements in


are connected by
$\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime}$ and
$\tau_2 \Delta^{\prime\prime} \tau_2^{-1}$. Firstly, all elements in T are connected by (A), (C), (D), and all elements in
$\left\{\mathcal{E}(1),-\mathcal{E}^{\prime\prime}(l+2)\right\}\cup \left\{\mathcal{E}^{\prime\prime}(l+1),-\mathcal{E}(l)\right\}$ are also connected by
$\mathcal{E}^{\prime\prime} \Gamma^{\prime\prime} \Delta^{\prime\prime} (\Gamma^{\prime\prime})^{-1} \mathcal{E}^{\prime\prime}$ and
$\tau_2 \Delta^{\prime\prime} \tau_2^{-1}$ thanks to (D) as in the proof of (1). Then, if we assume that there is no element of T connected to

then T should be one of the disjoint blocks of
$\mathcal{E} \Gamma \Delta \Gamma^{-1} \mathcal{E} \vee \tau \Delta \tau^{-1}$. This contradicts the fact that T is a strict subset of $B_1$.
Now, let us present an estimate of
$(\mathbb{E}\otimes\text{tr})(a_{\textbf{d},m,\epsilon})$ for (Case A).
Lemma 3.3. Let
$\epsilon:[m]\rightarrow B_{\mathbf{d}}$ be a function and suppose that
$\text{ker}(\epsilon)$ contains a singleton set. Then we have

Proof. If m = 1 and
$\epsilon\in B_\textbf{d}$, then

by Theorem 2.2, so the desired inequality (3.43) follows immediately.
From now on, let
$m\geq 2$ and we may assume
$\epsilon(i)\neq \epsilon(m)$ for all
$i\in [m-1]$ thanks to the given assumption and the traciality of
$\mathbb{E}\otimes \text{tr}$. Let us begin with the following formula


and write
$l=|E|$ for simplicity. Note that


for each
$E \subseteq [m-1]$ by Theorem 2.2. Now, let us understand
$\sigma\in S_{l+1}$ as a permutation acting on
$[l+1]\cong E\cup \left\{m\right\}$. Then the image of the map
$\tau\mapsto \tau_1=\tau\circ (l+1)$ consists of the permutations
$\sigma\in S_{l+1}$ satisfying
$\sigma(l+1)=l+1$. Furthermore, Lemma 3.2 provides the following identity



and we obtain



In particular, for
$\sigma\in S_{l+1}$ with
$\sigma(l+1)\neq l+1$, the given assumption
$\left\{m\right\}\in \text{ker}(\epsilon)$ implies that
$\epsilon(t)\neq \epsilon(m)$ for any
$t\in E\subseteq [m-1]$, i.e. there exists
$j\in [n]$ such that
$\epsilon(t)_{j} \ne \epsilon(m)_{j}$. This means that
$[\epsilon|_{E \cup \{m \}}(\cdot)]_j$ is not constant on the cycle containing l + 1, so we should have

for some j and
$D(\textbf{d},\epsilon|_{E \cup \{m \}},\sigma)\leq \mu(\mathbf{d})^{-1}$ by Lemma 2.4 (1). Then, combining all the discussions above with the standard triangle inequality, we obtain



Hence, we reach the following conclusion



An estimate of
$(\mathbb{E}\otimes\text{tr})(a_{\textbf{d},m,\epsilon})$ for (Case C) is provided in the following lemma.
Lemma 3.4. Let
$\epsilon:[m]\rightarrow B_{\mathbf{d}}$ be a function with
$m\geq 2$.
(1) Let
$m=2$ and suppose that
$\epsilon:[2]\rightarrow B_{\mathbf{d}}$ satisfies
$\epsilon(1)=\epsilon(2)$. Then we have
(3.61)\begin{align} &\left | (\mathbb{E}\otimes \text{tr})(a_{\mathbf{d},2,\epsilon}) - c \right |\leq \left | \frac{p(\mathbf{d})}{d_1\cdots d_n}-c \right |+ \left | \frac{p(\mathbf{d})}{d_1\cdots d_n}-c \right |^2. \end{align}
(2) Let
$m\geq 3$ and suppose that there exists
$i\in [m-1]$ such that
$\left\{i,i+1\right\}\in \text{ker}(\epsilon)$, and let us understand
$\epsilon|_{[m]\setminus \left\{i,i+1\right\}}$ as a function from
$[m-2]$ into
$B_\mathbf{d}$. Then we have
(3.62)\begin{align} & \left | (\mathbb{E}\otimes \text{tr})(a_{\mathbf{d},m,\epsilon}) - c\cdot (\mathbb{E}\otimes \text{tr})(a_{\mathbf{d},m-2,\epsilon|_{[m]\setminus \left\{i,i+1\right\}}}) \right | \end{align}
(3.63)\begin{align} & \leq 2^{m}(1+c)^m (2+c) \left (\frac{1}{\mu(\mathbf{d})}+\left | \frac{p(\mathbf{d})}{d_{1} \cdots d_{n}}-c \right | \right ) m! \sum_{s=0}^m \left ( \frac{p(\mathbf{d})}{d_{1} \cdots d_{n}} \right )^s. \end{align}
Proof. For the case m = 2, note that Theorem 2.2 implies


since
$\epsilon(1)=\epsilon(2)$. Thus, we obtain


implying the desired conclusion.
From now on, let us focus on the cases
$m\geq 3$. As in the proof of Lemma 3.3, we may assume
$\left\{m-1,m\right\}\in \text{ker}(\epsilon)$ using the traciality of
$\mathbb{E}\otimes \text{tr}$, and we have the following identity



Let us write
$l=|E|$. Then similar arguments from the proof of Lemma 3.3 give us the following two identities:




Here, both the second sums of (3.74) and (3.76) are dominated by

and the first sum of (3.76) is dominated by
$\displaystyle \frac{m!}{\mu(\mathbf{d})} \sum_{s=0}^{m}\left ( \frac{p(\textbf{d})}{d_{1} \cdots d_{n}} \right )^s $ as in the proof of Lemma 3.3. Thus, it is straightforward to check that


Recall that the image of the map
$v\in S_l\mapsto v_2=v\circ (l+1,l+2)\in S_{l+2}$ consists of the permutations having
$(l+1,l+2)$ as one of their disjoint cycles, and that




by Lemma 3.2 (2). Let us write
$c\notin \rho$ if c is not a disjoint cycle of
$\rho\in S_{l+2}$. Then we have


From the conditions
$(l+2)\notin \rho$ and
$(l+1,l+2)\notin \rho$, there exists
$b_0\in [l]$ such that
$\rho(b_0)=l+1$ or
$\rho(b_0)=l+2$. Note that

from the given assumption, so there exists
$j\in [n]$ such that

This means that
$[\epsilon|_{E \cup \{m-1, m \}}(\cdot)]_j$ is not constant on the cycle containing $b_0$ in ρ, implying
${D(\textbf{d},\epsilon|_{E \cup \{m-1, m \}},\rho)}\leq \mu(\mathbf{d})^{-1}$ for such ρ by Lemma 2.4 (1). Thus, we obtain


Finally, combining (3.79) and (3.89), we can conclude that



As the last ingredient for the main conclusion, let us present an estimate of
$(\mathbb{E}\otimes\text{tr})(a_{\textbf{d},m,\epsilon})$ for (Case D) in the following lemma.
Lemma 3.5. Let
$\epsilon:[m]\rightarrow B_{\mathbf{d}}$ be a function and suppose that
$\epsilon(i)\neq \epsilon(i+1)$ for all
$i\in [m-1]$. Then we have


Proof. Note that $\textbf{d}$, $n(\textbf{d})$, $p(\textbf{d})$, m and ϵ are fixed, and we have


where
$l=|E|$. Let us consider
$\textbf{d}'=(d_1',\cdots,d_n')$ satisfying
$\displaystyle \lim \mu(\textbf{d}')=\infty$ and
$\displaystyle \lim \frac{p(\textbf{d}')}{d_1'\cdots d_n'}=c$. Note that
$n=n(\textbf{d})$ does not rely on the choice of
$\textbf{d}'$. Thus, we have


by the asymptotic freeness of
$\left\{W_{\textbf{d}'}^{\sigma}\right\}_{\sigma\in \left\{0,1\right\}^n}$ (Theorem 2.5). Thus,


and the standard triangle inequality tells us


Furthermore, the binomial theorem implies

and it is immediate to see that


for all permutations
$\sigma\in S_l$. Thus, we can conclude that




and this implies the desired conclusion



Finally, we are ready to establish a central limit theorem for partial transposes by applying Lemma 3.3, Lemma 3.4, and Lemma 3.5.
Theorem 3.6. Let
$p=p(\mathbf{d})$ and
$n=n(\mathbf{d})$ be
$\mathbb{N}$-valued functions of d, and consider a sequence of subsets
$B_{\mathbf{d}}\subseteq \left \{0,1\right\}^{n(\mathbf{d})}$. If

for all natural numbers m and
$\lim |B_\textbf{d}|=\infty$, then the following random matrices

converge in moments to the semicircular element with mean 0 and variance c, i.e. we have

Proof. It is enough to prove that

where
$NC_2(m)$ is the set of all non-crossing pairings on
$[m]$. Note that, for any m and d, we have


For each d and
$\pi\in P(m)$, let us take a representative function
$x_{\mathbf{d},\pi}:[m]\rightarrow B_\mathbf{d}$ satisfying
$\text{ker}(x_{\mathbf{d},\pi})=\pi$. Then we have



by Lemma 3.1, where
$k_{\mathbf{d},\pi}=|B_{\mathbf{d}}|\cdot (|B_{\mathbf{d}}|-1)\cdots (|B_{\mathbf{d}}|-\sharp(\pi)+1)$. Thus, the given condition
$\displaystyle |B_{\mathbf{d}}|^m=o(\mu(\mathbf{d}))$ implies

As the first step, let us prove that

for the following situation

Indeed, Lemma 3.3 provides the following estimate


and the given conditions imply


From now on, it is enough to suppose that the partition π does not contain a singleton set, implying
$\sharp(\pi) \le \frac{m}{2}$. Furthermore, if we suppose that π is in (Case B), i.e.
$\sharp(\pi) \lt \frac{m}{2}$, then it is straightforward to see that



Here,
$\displaystyle \limsup (\mathbb{E} \otimes \text{tr}) (a_{\mathbf{d},m,x_{\mathbf{d},\pi}} )\leq m!\cdot 2^m\left ( 1+c \right )^m \lt \infty$ is clear from (3.96).
Now, let us focus on the cases where π is a pairing, i.e. all blocks of π have size 2. In this case, we have
$\sharp(\pi) = \frac{m}{2}$ and the representative function
$x_{\mathbf{d},\pi}:[m]\rightarrow B_{\mathbf{d}}$ should be one of the following two cases:
• (Case C)
$\text{ker}(x_{\mathbf{d},\pi})$ is a pairing, and there exists
$i\in [m-1]$ such that
$\left\{i,i+1\right\}\in \text{ker}(x_{\mathbf{d},\pi})$
• (Case D)
$\text{ker}(x_{\mathbf{d},\pi})$ is a pairing, and
$x_{\mathbf{d},\pi}(i)\neq x_{\mathbf{d},\pi}(i+1)$ for all
$i\in [m-1]$.
If π is in (Case D), i.e.
$\text{ker}(x_{\mathbf{d},\pi})$ is a pairing satisfying

for all
$i\in [m-1]$, then we have




by Lemma 3.5. Now, for the last situation (Case C), there exists
$i_{0}\in [m-1]$ such that
$x_{\mathbf{d},\pi}(i_0)=x_{\mathbf{d},\pi}(i_0+1)$ and Lemma 3.4 implies



Note that the restricted function
$x_{\mathbf{d},\pi}|_{[m]\setminus \left\{i_0,i_0+1\right\}}$ defines a new pairing on
$[m-2]$, which again falls into (Case C) or (Case D). Thus, we can repeat the above arguments, leading us to conclude that

Now, combining all the above discussions, we obtain


Acknowledgements
The author Sang-Gyun Youn expresses gratitude to Professor James A. Mingo for the discussion at Queen’s University, which was the starting point of this project. The authors were supported by Samsung Science and Technology Foundation under Project Number SSTF-BA2002-01 and by the National Research Foundation of Korea (NRF) grants funded by the Ministry of Science and ICT (MSIT) (No. 2020R1C1C1A01009681, No. RS-2024-004139 and No. RS-2025-00561391).
Appendix A. Proof of Theorem 2.2
Let us begin by generalizing [Reference Mingo and Popa10, Lemma 3.2] to the multipartite situation. More precisely, let us explain how to write the random variable

as a polynomial of Gaussian variables for arbitrary
$\mathbb{Z}_2$-valued m × n matrices
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$. Note that any
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ can be decomposed into
$\epsilon'_{[n-1]}=(\epsilon_{ij})_{i\in [m],j\in [n-1]}$ and
$\epsilon'_n=(\epsilon_{in})_{i\in [m]}$. Then
$\epsilon'_{[n-1]}$ and
$\epsilon'_n$ define the associated functions

where
$\mathcal{E}_{[n-1]}$ is given by
$ \mathcal{E}_{[n-1]}(j,x) = (j,\mathcal{E}_j(x))$.
Notation A.1 We denote by
$A(\epsilon)$ the set of all functions
$\iota:[n-1]\times [\pm m]\rightarrow \cup_{j=1}^{n-1} [d_j]$ satisfying
(1)
$\iota(j,\cdot)\in [d_j]$ for each
$j\in [n-1]$,
(2)
$ \iota=\iota\circ\mathcal{E}_{[n-1]}\left (\text{id}_{n-1}\times \Gamma\Delta\Gamma^{-1}\right ) \mathcal{E}_{[n-1]}$,
and by
$B(\epsilon)$ the set of all functions
$q:[\pm m]\rightarrow [d_{n}]$ satisfying

Using the notations above, we explain how to express

as a polynomial of Gaussian variables in the following lemma, which directly generalizes [Reference Mingo and Popa10, Lemma 3.2] to the multipartite setting.
Lemma A.2. Let
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ be a
$\mathbb{Z}_2$-valued
$m\times n$ matrix with
$\epsilon_i=(\epsilon_{ij})_{j=1}^{n}\in \left\{0,1\right\}^{n}$ for all
$i\in [m]$, and let
$X_{\epsilon}=\text{tr}(W^{\epsilon_1}W^{\epsilon_2}\cdots W^{\epsilon_m})$. Then we have

Proof. As the first step, in order to focus on an entrywise expression of

let us introduce two functions
$\textbf{k}=\textbf{k}_\textbf{i, j}$ and ηi as follows.
• Let us identify the following set
(A.5)\begin{align} F(n-1)&=[d_{n-1}]\times \cdots \times [d_1]\times [d_1]\times \cdots \times [d_{n-1}] \end{align}
(A.6)\begin{align} &\cong [d_1d_2\cdots d_{n-1}]\times [d_1d_2\cdots d_{n-1}] \end{align}
with the set of all functions
$\textbf{k}:[\pm (n-1)]\rightarrow \cup_{j=1}^{n-1} [d_j]$ satisfying that
$\textbf{k}(\pm t)\in [d_t]$ for all
$t \in [n-1]$. More specifically, each pair
$(\textbf{i},\textbf{j})\in [d_1d_2\cdots d_{n-1}]\times [d_1d_2\cdots d_{n-1}]$ is associated with
(A.7)\begin{align} \textbf{k}&=(\textbf{k}(-(n-1)),\cdots,\textbf{k}(-1),\textbf{k}(1),\cdots,\textbf{k}(n-1)) \end{align}
(A.8)\begin{align} &=(\textbf{j}_{n-1},\cdots,\textbf{j}_{1},\textbf{i}_1,\cdots,\textbf{i}_{n-1}) \in F(n-1). \end{align}
In this case, let us write
$\textbf{k}^+=\textbf{i}$ and
$\textbf{k}^-=\textbf{j}$ as functions from
$[n-1]$ into
$\cup_{j=1}^{n-1}[d_j]$.
• For each
$i\in [m]$, we define
$\eta_{i}:[\pm (n-1)]\rightarrow [\pm (n-1)]$ by
(A.9)\begin{equation} \eta_i(x) = (-1)^{\epsilon_{i|x|}}\cdot x. \end{equation}
Then
$\eta_{i}\circ\eta_{i}$ is the identity function on
$[\pm (n-1)]$, and we have
(A.10)\begin{align} T^{\epsilon_{ij}}(e_{\textbf{k}(j),\textbf{k}(-j)})&=\left\{\begin{array}{ll} e_{\textbf{k}(j),\textbf{k}(-j)}&\text{if }\epsilon_{ij}=0\\ e_{\textbf{k}(-j),\textbf{k}(j)}&\text{if }\epsilon_{ij}=1 \end{array} \right . \end{align}
(A.11)\begin{align}&= e_{(\textbf{k}\circ \eta_i)(j),(\textbf{k}\circ \eta_i)(-j)} , \end{align}
where T is the transpose operator.
Using the notations above, we obtain

by (A.11). Furthermore, since
$\textbf{k}\mapsto \textbf{k}\circ \eta_i$ is a bijective function on
$F(n-1)$ and
$\eta_i\circ \eta_i=\text{id}_{[\pm (n-1)]}$, we have

Thus, the random variable
$\left( d_{1} \cdots d_{n} \right)^{m+1}X_{\epsilon}$ is written as

and we have

To deal with the multiple functions $\textbf{k}_1, \textbf{k}_2, \cdots, \textbf{k}_m$ simultaneously, let us define
$K_0:[m]\times [\pm (n-1)]\rightarrow \cup_{j=1}^{n-1}[d_j]$ by

and its natural extension K on
$[\pm m]\times [\pm (n-1)]$ given by

Then it is straightforward to see that (A.15) is given by


where
$\Gamma=(1,2,\cdots, m)\in S_m$. Now, let us define
$F_0(m,n)$ as the set of
$K_0=(\textbf{k}_1,\textbf{k}_2,\cdots,\textbf{k}_m)\in F(n-1)^m$ satisfying the condition

Then the expression (A.14) is simplified to

On the other hand, any
$(\textbf{k}_1,\textbf{k}_2,\cdots,\textbf{k}_m)\in F(n-1)^m$ is associated with a function
$\iota: [n-1]\times [\pm m] \rightarrow \cup_{j=1}^{n-1} [d_j]$ given by

Indeed, the above condition (A.20) is equivalent to
$\iota\in A(\epsilon)$, i.e.

on
$[n-1]\times [\pm m]$, and the restricted functions
$\iota_y=\iota(\cdot, y)$ and
$\iota_{-y}=\iota(\cdot, -y)$ satisfy


for all
$j\in [n-1]$ and
$y\in [m]$. Thus, combining (A.14) and (A.21), we have

Note that




for any
$y\in [m]$, and that the non-trivial terms of the trace of
$\displaystyle \prod_{y=1}^m T^{\epsilon_{yn}}(G_{\iota_y}G_{\iota_{-y}}^*)$ arise only from the cases where we have

Furthermore, (A.31) is also equivalent to
$q\in B(\epsilon)$, i.e.
$q=r\circ\mathcal{E}_{n}:[\pm m]\rightarrow [d_{n}]$ satisfies

Finally, combining all the discussions above, we obtain

which leads us to the following conclusion

A non-trivial fact from Lemma A.2 is that $X_{\epsilon}$ is a real-valued random variable and, moreover, the explicit expression (A.4) can be applied to compute the following k-th moments

Let k be an arbitrary natural number and let
$\epsilon=(\epsilon_{ij})_{i\in [m],j\in [n]}$ be a
$\mathbb{Z}_2$-valued m × n matrix with
$\epsilon_i=(\epsilon_{ij})_{j\in [n]} \in \left\{0,1\right\}^n$. Let us apply the explicit formula (A.4) of

to find a suitable expression of the k-th powers

where we need to consider the following multiple choices of functions:

To deal with all these functions simultaneously, let us introduce the following three multivariate functions:
•
$\textbf{I}: [k]\times [\pm m]\rightarrow [d_1d_2\cdots d_{n-1}]$ given by
(A.39)\begin{equation} \textbf{I}(s,s')=(I_1(s,s'),I_2(s,s'),\cdots,I_{n-1}(s,s')). \end{equation}
and each
$I_j:[k]\times [\pm m]\rightarrow [d_j]$ is given by
(A.40)\begin{equation} I_j(s,s')=\iota^{(s)}(j,s')\in [d_j]. \end{equation}
Then all
$\iota^{(1)}$,
$\iota^{(2)}$, ⋯,
$\iota^{(k)}$ are in
$A(\epsilon)$ if and only if
(A.41)\begin{equation} I_{j}=I_{j}\circ \mathcal{E}^{(k)}_{j} \Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \mathcal{E}^{(k)}_{j} \end{equation}
for all
$j\in[n-1]$. We denote by
$A(\epsilon,k)$ the set of such functions I.
•
$\textbf{Q}:[k]\times [\pm m]\rightarrow [d_{n}]$ given by
(A.42)\begin{equation} \textbf{Q}(s,s')=q^{(s)}(s'). \end{equation}
Then all
$q^{(1)}$,
$q^{(2)}$, ⋯,
$q^{(k)}$ are in
$B(\epsilon)$ if and only if
(A.43)\begin{equation} \textbf{Q}=\textbf{Q}\circ \mathcal{E}^{(k)}_{n} \Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \mathcal{E}^{(k)}_{n}. \end{equation}
Let us denote by
$B(\epsilon,k)$ the set of such functions Q.
•
$\textbf{T}:[k]\times [m]\rightarrow [p]$ given by
(A.44)\begin{equation} \textbf{T}(s,s')=t^{(s)}(s'). \end{equation}
Note that, for each
$(s,s')\in [k]\times [\pm m]$, the above
$\textbf{I}(s,s')$ can be considered a function from
$[n-1]$ into
$\cup_{j=1}^{n-1}[d_j]$ satisfying

Then all our discussions are summarized in the following form:


Now, let us present a proof of Theorem 2.2 using the above notations. Our arguments are analogous to the proof of [Reference Mingo and Popa10, Theorem 3.7].
Proof of Theorem 2.2
By (A.46), we have


For any
$\mathbf{I}\in A(\epsilon,k)$,
$\mathbf{Q}\in B(\epsilon,k)$, and
$\mathbf{T}:[k]\times[m]\rightarrow[p]$, let us write


for all
$(s,s')\in [k]\times [m]\cong [km]$, for simplicity. Then we have


where the second equality comes from the Wick formula. Let us denote by
$C(\sigma)$ the set of all triples
$(\mathbf{I},\mathbf{Q},\mathbf{T})$ satisfying
$\beta=\alpha\circ \sigma$ for
$\sigma\in S_{km}$. Then we have

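Here, the Wick formula refers to the standard moment formula for i.i.d. standard complex Gaussian variables: writing (as the preceding display suggests) α and β for the maps collecting the indices of the factors g and $\overline{g}$ respectively, one has
\begin{align*}
\mathbb{E}\big(g_{\alpha(1)}\cdots g_{\alpha(N)}\,\overline{g_{\beta(1)}}\cdots\overline{g_{\beta(N)}}\big)
\;=\;\#\{\sigma\in S_N : \beta=\alpha\circ\sigma\}
\;=\;\sum_{\sigma\in S_N}\,\prod_{t=1}^{N}\delta_{\beta(t),\,\alpha(\sigma(t))},
\end{align*}
which is the origin of the sets $C(\sigma)$ introduced above.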
Using the natural identification
$[\pm km]\cong [k]\times[\pm m]$, we can regard maps from
$[k]\times[\pm m]$ as maps from
$[\pm km]$. Also bijections on
$[k]\times[\pm m]$ can be regarded as permutations on
$[\pm km]$. Then
$(\mathbf{I}, \mathbf{Q}, \mathbf{T})\in C(\sigma)$ if and only if the following conditions hold:
(A)
$I_{j}=I_{j}\circ\sigma\Delta^{(k)}\sigma^{-1}$ for all
$j\in [n-1]$,
(B)
$\mathbf{Q}=\mathbf{Q}\circ\sigma\Delta^{(k)}\sigma^{-1}$.
(C)
$\mathbf{T}=\mathbf{T}\circ\sigma$.
Since the conditions
$\mathbf{I}\in A(\epsilon,k)$ and
$\mathbf{Q}\in B(\epsilon,k)$ should be taken into account,
$|C(\sigma)|$ is equal to the number of triples
$(\mathbf{I}, \mathbf{Q}, \mathbf{T})$ consisting of general functions
$\textbf{I}:[k]\times [\pm m]\rightarrow [d_1d_2\cdots d_{n-1}]$,
$\textbf{Q}:[k]\times [\pm m]\rightarrow [d_n]$,
$\textbf{T}:[k]\times [m]\rightarrow [p]$ satisfying
(A’)
$I_{j} =I_{j}\circ\mathcal{E}^{(k)}_{j} \Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \mathcal{E}^{(k)}_{j}=I_{j}\circ\sigma\Delta^{(k)}\sigma^{-1}$ for all
$j\in [n-1]$,
(B’)
$\mathbf{Q} =\mathbf{Q}\circ\mathcal{E}^{(k)}_{n} \Gamma^{(k)} \Delta^{(k)} (\Gamma^{(k)})^{-1} \mathcal{E}^{(k)}_{n}=\mathbf{Q}\circ\sigma\Delta^{(k)}\sigma^{-1}$.
(C’)
$\mathbf{T}=\mathbf{T}\circ\sigma$.
Thus the number of such triples
$(\mathbf{I}, \mathbf{Q}, \mathbf{T})$ is given by

which leads us to the following identity


This implies the following desired conclusion
