pxlogpx

C. Z

Entanglement fidelity and entropy exchange.

B. Schumacher, Sending entanglement through noisy quantum channels, Phys. Rev. A 54 (4), October 1996.

Two intrinsic quantities for a given quantum channel, the entanglement fidelity and the entropy exchange, are defined in this paper.

Gas station in a circle

There is a highway in the shape of a circle, with N gas stations randomly distributed along it. The total amount of gas in all the stations is exactly enough for a car to drive around the circle once. Prove that there is always a station from which a car starting with an empty tank can complete the full circle.
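The standard argument picks the station just after the point where the running net fuel is lowest. A minimal Python sketch of this argument, with randomly generated station data (all numbers are made up for illustration):

import random
from itertools import accumulate

# Sketch of the standard argument: look at the running net fuel
# P_k = sum_{i<=k} (gas_i - cost_i) and start at the station right after
# the point where P_k is smallest.  Station data below are random test values.
N = 10
cost = [random.uniform(1.0, 5.0) for _ in range(N)]   # gas needed to reach the next station
gas = [random.uniform(0.1, 1.0) for _ in range(N)]
scale = sum(cost) / sum(gas)
gas = [g * scale for g in gas]                        # enforce: total gas == total distance

prefix = list(accumulate(g - c for g, c in zip(gas, cost)))
start = (prefix.index(min(prefix)) + 1) % N           # station just after the worst point

tank = 0.0
for k in range(N):                                    # drive one full lap from `start`
    i = (start + k) % N
    tank += gas[i] - cost[i]
    assert tank >= -1e-9, "tank went negative"
print("valid starting station:", start)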

Information

I would like to understand the concept of information in terms of a system, which could be classical or quantum. When a system is unknown to you, it clearly contains some information. It is easy to get confused about what it means for a system to be known to you. The confusion comes from treating a mixture as a state: for a mixed state defined by a density operator, we do know the state, which in some sense means the system is known to us. Yet we do not say that this system carries no information, since its von Neumann entropy is not zero. The problem comes from the definition of a state. If we believe that a physically realized state is always pure, e.g. a spin always shows a definite value after measurement, then the information of a mixed state should be understood as the von Neumann entropy difference between the mixed state and the pure state it collapses to after a measurement.
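As a small illustration of that entropy difference, here is a short Python sketch (the helper function and test states are my own choices) computing the von Neumann entropy of a pure qubit and of a maximally mixed qubit; the mixed state carries one bit of entropy, which is exactly what disappears once it collapses to a pure state:

import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log(evals)) / np.log(base))

pure = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
mixed = np.eye(2, dtype=complex) / 2               # maximally mixed qubit

print(von_neumann_entropy(pure))    # 0 bits: nothing is learned on measurement
print(von_neumann_entropy(mixed))   # 1 bit: entropy lost when it collapses to a pure state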

Group and Symplectic geometry, FT analysis

About time-domain and frequency-domain analysis.

A group is a set that satisfies the following conditions:
1, Closure under group multiplication.
2, Associativity: A(BC)=(AB)C.
3, Existence of a unit element.
4, Existence of an inverse element for every group element.

Permutation group P_n
Theorem 1 (rearrangement theorem): For any fixed group element B, the map A\rightarrow AB is a bijection of the group onto itself, i.e. each row (and column) of the group multiplication table is a permutation of the group elements.
Lemma 1: For any function f on the group, \sum_{A\in G}f(A)=\sum_{A\in G}f(AB),\forall B\in G.
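A quick computational check of the rearrangement theorem and of Lemma 1, using the permutation group on three symbols (the group and the test function are chosen only for illustration):

from itertools import permutations

# Elements of S_3 as tuples; composition (p o q)(i) = p[q[i]].
S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

# Rearrangement theorem: for fixed B, A -> A o B is a bijection of the group,
# i.e. every row of the multiplication table is a permutation of the elements.
for B in S3:
    row = [compose(A, B) for A in S3]
    assert sorted(row) == sorted(S3)

# Hence sum_A f(A) = sum_A f(A o B) for any function f and any B.
f = lambda p: sum(i * p[i] for i in range(3))     # an arbitrary test function
for B in S3:
    assert sum(f(A) for A in S3) == sum(f(compose(A, B)) for A in S3)
print("rearrangement theorem verified for S_3")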

A group of prime order has no subgroups other than the trivial ones, and every group of prime order is cyclic.

Schur lemma: any matrix that commutes with all matrices of an irreducible representation is a multiple of the identity, and any matrix intertwining two inequivalent irreducible representations is zero.

SU(2) and SO(3), O(3) group


Symplectic space: a symplectic space (E,\omega) is a vector space E equipped with a symplectic form \omega, i.e. a non-degenerate antisymmetric bilinear form (skew product).

The standard symplectic space is (R_z^{2n},\sigma) with \sigma(z^\prime,z)=z^{\prime T}\Omega z, where \Omega=\begin{pmatrix}0 & I_n\\ -I_n & 0\end{pmatrix} is the standard symplectic matrix.

Symplectic, line width and decay rate

In optomechanics, people talk about the decay rate and the line width. Intuitively, the connection comes from the energy-time uncertainty relation \Delta E\Delta t\ge \hbar/2, which means the energy width must be \Delta E\sim \frac{\hbar}{2\Delta t}\sim \hbar\kappa, i.e. a frequency width of order \kappa.

Now let’s suppose we have a state which decays according to |\psi(t)|^2\propto \exp(-\kappa t). Thus the state looks like \psi(t)\propto \exp(-\frac{i}{\hbar}E_i t)\exp(-\kappa t/2). Now we take its Fourier transform
\psi(\omega)\propto\int_0^\infty\psi(t)\exp(i \omega t)dt\\ \propto\int_0^\infty\exp(-i\omega_i t+i\omega t-\kappa t/2)dt \\ \propto\frac{i}{\omega-\omega_i+i\kappa/2},
which gives the probability of frequency distribution
I(\omega)\propto\frac{1}{(\omega-\omega_i)^2+\kappa^2/4}. This is the familiar Lorentzian form; one sees that the decay rate \kappa sets the width of the peak (it is precisely the full width at half maximum).
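A quick numerical sanity check of this relation (the parameter values below are arbitrary): sample the decaying state, Fourier transform it, and read off the full width at half maximum of the resulting peak.

import numpy as np

# Check numerically that a state decaying at rate kappa has a spectral peak
# of FWHM kappa.  All numbers below are arbitrary test values.
kappa, omega_i = 2.0, 50.0
dt, N = 1e-3, 2**18
t = np.arange(N) * dt
psi = np.exp(-1j * omega_i * t - 0.5 * kappa * t)

spec = np.abs(np.fft.fft(psi))**2
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)

# Full width at half maximum of the peak (its position depends on the FFT
# sign convention; the width does not).
half = spec.max() / 2
band = omega[spec >= half]
width = band.max() - band.min()
print(f"FWHM = {width:.3f}, kappa = {kappa}")   # should agree to within a few percent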

Classically, the Poisson brackets are preserved under canonical transformations. The transformation matrix is the Jacobian matrix, which is symplectic: J=SJS^T. Now suppose we have a symplectic transformation S: q=(x,p)\rightarrow Q=(X,P), and we are given [q_i,q_j]=J_{ij}; we want to prove [Q_i,Q_j]=J_{ij}. So
[Q_i,Q_j]=[(Sq)_i, (Sq)_j]=\sum_m\sum_n S_{im}S_{jn}[q_m,q_n]\\ =\sum_m\sum_n S_{im}S_{jn}J_{mn}=(SJS^T)_{ij}=J_{ij}
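A numerical check of the symplectic condition for a simple example (a phase-space rotation, i.e. harmonic-oscillator evolution, chosen for illustration):

import numpy as np

# Check numerically that a linear canonical transformation preserves the
# Poisson-bracket matrix, i.e. S J S^T = J, for one degree of freedom.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # bracket matrix for q = (x, p)

theta = 0.7
S = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # rotation in the (x, p) plane

assert np.allclose(S @ J @ S.T, J)   # symplectic condition
print("S J S^T = J holds")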

Close a graph with one line

This problem is related to Euler paths. Each wall can be crossed only once, and three of the rooms have an odd number of walls (odd degree), so there are more than two odd vertices and the puzzle cannot be solved.

Euler path: for a connected graph, the necessary and sufficient condition for drawing the whole graph in one stroke is that there are exactly zero or two vertices of odd degree. Zero odd vertices gives an Euler circuit.

e.g. the Seven Bridges of Königsberg, the five-room puzzle, the three-utilities puzzle …
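As a check, one can count wall crossings for the five-room puzzle; the adjacency list below encodes one standard drawing of the puzzle (rooms A, B, C on top, D, E on the bottom), with the outside region counted as a vertex:

from collections import Counter

# Each wall segment is one edge of the multigraph; O denotes the outside region.
walls = [
    ("A", "B"), ("B", "C"),                          # walls between top rooms
    ("D", "E"),                                      # wall between bottom rooms
    ("A", "D"), ("B", "D"), ("B", "E"), ("C", "E"),  # walls between top and bottom rooms
    ("O", "A"), ("O", "B"), ("O", "C"),              # top exterior walls
    ("O", "D"), ("O", "E"),                          # bottom exterior walls
    ("O", "A"), ("O", "D"),                          # left exterior walls
    ("O", "C"), ("O", "E"),                          # right exterior walls
]

degree = Counter()
for u, v in walls:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(degree)                 # B, D, E (and O) have odd degree
print("odd vertices:", odd)   # more than two odd vertices  ->  no Euler path exists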

Tensor products of Matrices

1), The image of a product vector under a tensor product of maps is the tensor product of the images. Suppose we have linear maps between vector spaces T:V_1\rightarrow W_1 and R:V_2\rightarrow W_2; then there is a linear map T\otimes R:V_1\otimes V_2\rightarrow W_1\otimes W_2, defined by (T\otimes R)(v_1\otimes v_2)=T(v_1)\otimes R(v_2), \forall v_1\in V_1, v_2\in V_2. The same definition can be extended to Hilbert spaces.

2), Now let’s consider linear maps between Hilbert spaces T:\mathscr{H}\rightarrow\mathscr{H} and R:C^n\rightarrow C^n; then we have T\otimes R:\mathscr{H}\otimes C^n\rightarrow\mathscr{H}\otimes C^n. We want to ask the following questions.
a), What is the natural identification of a typical element of \mathscr{H}\otimes C^n?
If we take the canonical orthonormal basis \{e_1,e_2,...,e_n\} for C^n, then every vector u\in\mathscr{H}\otimes C^n has a unique representation u=\sum_{i=1}^n h_i\otimes e_i, where h_i\in\mathscr{H}. One can show that ||u||^2=\sum_{i=1}^n ||h_i||^2=||(h_1,...,h_n)||^2, which indicates the isomorphism \mathscr{H}\otimes C^n\simeq\mathscr{H}\oplus...\oplus\mathscr{H}=\oplus^n_1\mathscr{H} with the natural identification \sum_{i=1}^n h_i\otimes e_i\simeq (h_1,...,h_n)^t.
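A small numpy sketch of this identification (the dimensions and the placement of the C^n factor first in np.kron are illustrative conventions):

import numpy as np

# Identification of H (x) C^n with the direct sum of n copies of H:
# sum_i h_i (x) e_i  <->  the stacked column (h_1, ..., h_n)^t.
# (np.kron puts its first argument's index "outside", so the C^n factor is
# written first below to make the components stack into blocks.)
d, n = 3, 2                                   # dim H = 3, n = 2, arbitrary choices
rng = np.random.default_rng(0)
h = [rng.standard_normal(d) + 1j * rng.standard_normal(d) for _ in range(n)]
e = np.eye(n)

u = sum(np.kron(e[i], h[i]) for i in range(n))        # element of H (x) C^n
stacked = np.concatenate(h)                           # (h_1, ..., h_n)^t

assert np.allclose(u, stacked)
assert np.isclose(np.linalg.norm(u)**2, sum(np.linalg.norm(hi)**2 for hi in h))
print("||u||^2 = sum_i ||h_i||^2 verified")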

 
b), What is the identification of a linear map in \mathscr{L}(\mathscr{H}\otimes C^n)?
Given A_{ij}\in\mathscr{L}(\mathscr{H}) for 1\le i,j\le n, we can consider A=(A_{ij})\in M_n(\mathscr{L}(\mathscr{H})) as an operator defined by
A(h_1,...,h_n)^t=(\sum_{j=1}^n A_{1j}h_j,...,\sum_{j=1}^n A_{nj}h_j)^t\in\oplus_1^n\mathscr{H}; therefore we obtain a map M_n(\mathscr{L}(\mathscr{H}))\rightarrow\mathscr{L}(\mathscr{H}\otimes C^n) in a natural way.
Conversely, every linear map on \mathscr{H}\otimes C^n has such a matrix representation, which gives the identification \mathscr{L}(\mathscr{H}\otimes C^n)\simeq M_n(\mathscr{L}(\mathscr{H})) via A\simeq(A_{ij}).
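Continuing the sketch above, a numpy check that the block matrix (A_{ij}) and the operator \sum_{ij}E_{ij}\otimes A_{ij} coincide (again with illustrative dimensions and the same factor ordering):

import numpy as np

# Identification of L(H (x) C^n) with n x n block matrices of operators on H.
d, n = 3, 2
rng = np.random.default_rng(1)
A = [[rng.standard_normal((d, d)) for j in range(n)] for i in range(n)]

blocks = np.block(A)                                   # (A_ij) as one (n*d) x (n*d) matrix
E = lambda i, j: np.outer(np.eye(n)[i], np.eye(n)[j])  # matrix unit E_ij in M_n
op = sum(np.kron(E(i, j), A[i][j]) for i in range(n) for j in range(n))

assert np.allclose(blocks, op)
print("L(H (x) C^n) = M_n(L(H)) block identification verified")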

Proof: A map defined by transpose is positive, but not 2-positive.

For any \rho\in\mathscr{L}(\mathscr{H}), where \mathscr{H} is a p-dimensional Hilbert space, the map defined by transpose, \Phi:\rho\rightarrow\rho^T, is positive but not 2-positive.

Proof:
1), Show \Phi is positive. We have to show that, for X=(x_{ij})\in M_p with X\ge 0, we also have X^t=(x_{ji})\ge 0. For any \lambda\in C^p,
\left<\lambda|X^t|\lambda \right>=\sum_{i,j=1}^p\lambda_i^\ast x_{ij}^t\lambda_j=\sum_{i,j=1}^p\lambda_i^\ast x_{ji}\lambda_j=\sum_{i,j=1}^p\lambda_j^\ast x_{ij}\lambda_i=\left<\lambda^\ast|X|\lambda^\ast \right>\ge 0.

2), Show that \Phi is not 2-positive. It suffices to find one element E\in\mathscr{L}(\mathscr{H}\otimes C^2)\simeq M_2(\mathscr{L}(\mathscr{H})) that is positive while \Phi^2(E):=(\Phi(E_{ij})), the entrywise application of \Phi to the blocks, is not positive.

Actually, choose E=\begin{bmatrix} E_{11}&E_{12} \\ E_{21}&E_{22} \end{bmatrix} \in\mathscr{L}(\mathscr{H}\otimes C^2), where the E_{ij} are the canonical matrix units of \mathscr{L}(\mathscr{H}) (take p=2). It is easy to see that \forall (h_1,h_2)^t\in\mathscr{H}\otimes C^2, \left<\begin{bmatrix}h_1\\h_2\end{bmatrix},E\begin{bmatrix}h_1\\h_2\end{bmatrix}\right>\ge 0, so E is positive. However, \Phi^2(E) is not positive, since it has a negative eigenvalue: \Phi^2(E)\begin{bmatrix} e_2\\-e_1\end{bmatrix}=-\begin{bmatrix}e_2\\-e_1\end{bmatrix}
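A direct numerical confirmation of this counterexample, with the matrix units written out explicitly:

import numpy as np

# E = (E_ij) with E_ij the 2x2 matrix units is positive, but applying the
# transpose to each block produces a matrix with a negative eigenvalue,
# so the transpose map is not 2-positive.
Eij = lambda i, j: np.outer(np.eye(2)[i], np.eye(2)[j])   # matrix units of M_2

E = np.block([[Eij(0, 0), Eij(0, 1)],
              [Eij(1, 0), Eij(1, 1)]])                    # positive (a rank-one projector up to normalization)
phi2_E = np.block([[Eij(0, 0).T, Eij(0, 1).T],
                   [Eij(1, 0).T, Eij(1, 1).T]])           # blockwise transpose = Phi^2(E)

print(np.linalg.eigvalsh(E))        # all >= 0
print(np.linalg.eigvalsh(phi2_E))   # contains -1  ->  Phi is not 2-positive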

Distinguishing states

Definition: A set of states \{\left|\psi_i\right>\} defined on a Hilbert space H_s is called distinguishable if there exists a set of measurement operators \{M_i\} such that ||M_i\left|\psi_j\right>||=\delta_{ij} for all i and j.
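A toy numpy illustration of the definition (the states and measurement operators below are my own choice):

import numpy as np

# The orthogonal states |0> and |1> are distinguished by the projective
# measurement {M_0 = |0><0|, M_1 = |1><1|}.
psi = [np.array([1, 0]), np.array([0, 1])]
M = [np.outer(p, p.conj()) for p in psi]

for i in range(2):
    for j in range(2):
        norm = np.linalg.norm(M[i] @ psi[j])
        assert np.isclose(norm, 1.0 if i == j else 0.0)   # ||M_i |psi_j>|| = delta_ij
print("{|0>, |1>} is distinguishable")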

Covariance

Covariance is defined, for two random variables, by the formula
Cov(X,Y)=E[(X-E(X))(Y-E(Y))]=E(XY)-E(X)E(Y)

If we have quantum variables (operators that may not commute), the definition becomes the symmetrized form
V_{XY}=Cov(X,Y)=\frac{1}{2}E[\{X-E(X),Y-E(Y)\}]. If we have a set of random variables, say x=\{x_1,x_2,...,x_n\}, a covariance matrix can be defined, V=(V_{ij}), where V_{ij}=\frac{1}{2}E[\{\hat{x}_i-E(x_i),\hat{x}_j-E(x_j)\}]

Now the interesting thing is that the covariance matrix is positive semidefinite.

Proof:
Note that V=\frac{1}{2}E\left[(\hat{x}-E(x))(\hat{x}-E(x))^T+\left((\hat{x}-E(x))(\hat{x}-E(x))^T\right)^T\right], where the transpose acts on the vector indices only, not on the operators.

Given any column vector u, we have
u^TVu=\frac{1}{2}E\left[u^T(\hat{x}-E(x))(\hat{x}-E(x))^T u+u^T\left((\hat{x}-E(x))(\hat{x}-E(x))^T\right)^T u\right]\\ =\frac{1}{2}E[2A^2]\ge 0, where A=(\hat{x}-E(x))^T u=u^T(\hat{x}-E(x)) is Hermitian for real u.
Since u is an arbitrary real vector, V must be positive semidefinite.
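A classical numerical sanity check of this positivity, using random data in numpy:

import numpy as np

# The sample covariance matrix of any data set is positive semidefinite
# (all eigenvalues >= 0 up to rounding).
rng = np.random.default_rng(42)
x = rng.standard_normal((5, 1000))        # 5 random variables, 1000 samples each

V = np.cov(x, bias=True)                  # 5 x 5 covariance matrix (1/N normalization)
eig = np.linalg.eigvalsh(V)
print(eig)
assert np.all(eig >= -1e-12)

# Equivalently, u^T V u = E[(u^T (x - E x))^2] >= 0 for any direction u.
u = rng.standard_normal(5)
dx = x - x.mean(axis=1, keepdims=True)
print(np.allclose(u @ V @ u, np.mean((u @ dx) ** 2)))   # True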