A LINEAR-TIME ALGORITHM FOR EVALUATING SERIES OF SCHUR FUNCTIONS

CY CHAN, VESSELIN DRENSKY, ALAN EDELMAN, AND PLAMEN KOEV

Abstract. We present a new algorithm for computing all Schur functions sλ(x1, x2, . . . , xn) for all partitions λ of integers not exceeding N in time O(n^2 KN), where KN ≡ #{λ : |λ| ≤ N} is the number of those partitions. In particular, this algorithm has optimal complexity for evaluating truncated series of Schur functions such as the hypergeometric function of a matrix argument.

2000 Mathematics Subject Classification. Primary 05E05; Secondary 65F50, 65T50.
Key words and phrases. Schur functions, Fast Fourier Transforms.
The research of the second author was partially supported by Grant MI-1503/2005 of the Bulgarian National Science Fund. The research of the third and fourth authors was supported by NSF Grant DMS-0608306.

1. Introduction

We present a new highly efficient algorithm for computing the finite truncation (for k ≤ N) of the hypergeometric function of a matrix argument X in the complex case:

(1)  pFq(a1, . . . , ap; b1, . . . , bq; X) ≡ Σ_{k=0}^{N} Σ_{κ⊢k} [(a1)κ · · · (ap)κ] / [(b1)κ · · · (bq)κ] · sκ(x1, x2, . . . , xn) / Hκ,

where

(2)  (a)κ ≡ Π_{(i,j)∈κ} (a − i + j)

is the generalized Pochhammer symbol, sκ is the Schur function [12], Hκ ≡ Π_{(i,j)∈κ} hκ(i, j) is the product of all hook lengths hκ(i, j) ≡ κi + κ′j − i − j + 1 of κ, and x1, x2, . . . , xn are the eigenvalues of X.

The efficient evaluation of (1) is a central research problem in multivariate statistical analysis [15], with a wealth of applications in wireless communications [1, 6, 7, 10, 13, 14, 16, 17, 21] as well as signal processing [20]. The challenge in evaluating (1) lies in the evaluation of the Schur function sκ, which has proven extremely difficult primarily because, as a multivariate symmetric polynomial, it has an exponential number of terms, O(n^|κ|) [4, 8, 11]. Currently, the best algorithm for evaluating (1) is mhg from [11], whose complexity is O(n KN^2), where KN ≡ #{κ : |κ| ≤ N} is the number of terms in (1).
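For concreteness, the two scalar ingredients of (1) are easy to compute directly. The following Python sketch (ours, not from the paper) evaluates the generalized Pochhammer symbol (2) and the hook-length product Hκ for a partition κ given as a list of parts.

    # Sketch (not from the paper): the generalized Pochhammer symbol (a)_kappa from (2)
    # and the hook-length product H_kappa for a partition kappa.

    def conjugate(kappa):
        """Conjugate partition: kappa'_j = #{i : kappa_i >= j}."""
        if not kappa:
            return []
        return [sum(1 for part in kappa if part >= j) for j in range(1, kappa[0] + 1)]

    def pochhammer(a, kappa):
        """(a)_kappa = product over cells (i, j) of kappa of (a - i + j)."""
        result = 1.0
        for i, row in enumerate(kappa, start=1):
            for j in range(1, row + 1):
                result *= a - i + j
        return result

    def hook_product(kappa):
        """H_kappa = product of hook lengths h(i, j) = kappa_i + kappa'_j - i - j + 1."""
        kc = conjugate(kappa)
        result = 1
        for i, row in enumerate(kappa, start=1):
            for j in range(1, row + 1):
                result *= row + kc[j - 1] - i - j + 1
        return result

    print(pochhammer(2.5, [2, 1]))   # 2.5 * 3.5 * 1.5 = 13.125
    print(hook_product([2, 1]))      # hook lengths 3, 1, 1 -> 3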

An estimate of KN is Ramanujan's asymptotic formula [9, p. 116], KN ∼ O(e^(π√(2N/3))), which is subexponential in N.

In this paper, we present a new algorithm for computing (1) whose complexity is only O(n^2 KN), i.e., it takes only O(n^2) per term instead of the O(n KN) cost per term of mhg. To achieve that complexity, we follow the idea in [11]: the recursive formula [12]

(3)  sλ(x1, x2, . . . , xn) = Σ_µ sµ(x1, x2, . . . , xn−1) · xn^(|λ|−|µ|)

allows each Schur function in (1) to be computed at a cost not exceeding O(n KN). The summation in (3) is over all partitions µ = (µ1, . . . , µn−1) such that λ/µ is a horizontal strip, i.e.,

(4)  λ1 ≥ µ1 ≥ λ2 ≥ µ2 ≥ · · · ≥ λn−1 ≥ µn−1 ≥ λn.

In this paper we improve on the result of [11] by observing that (3) represents a vector-matrix multiplication

(5)  s^(n) = s^(n−1) · Zn(xn),

where s^(i) is an (appropriately indexed) row vector of all Schur functions sκ(x1, x2, . . . , xi), |κ| ≤ N, and Zn(xn) ≡ (εµ,λ · xn^(|λ|−|µ|)) is a matrix with entries indexed by the pairs of partitions (µ, λ), with εµ,λ = 1 if λ/µ is a horizontal strip and 0 otherwise.

Since the matrix Zn is dense, (5) costs O(KN^2), explaining the O(n KN^2) complexity of mhg [11]. The key contribution of this paper is to recognize and exploit the structure of Zn to perform the multiplication (5) in linear O(n^2 KN) time instead of quadratic O(n KN^2) time. This work was inspired by the idea of Cooley and Tukey [3] (and later generalized [2, 5, 18, 19]) that a matrix-vector multiplication by the character table of the cyclic group of size n (i.e., by the Vandermonde matrix V ≡ (e^(2π√−1·(i−1)(j−1)/n))) can be performed in O(n log n) instead of O(n^2) time by exploiting the structure of the cyclic group to decompose V recursively into a product of simpler structured matrices.
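To make the recursion (3)-(4) concrete, here is a small, deliberately naive Python sketch (ours, not the paper's mhg) that evaluates a single Schur function by summing over horizontal strips; its cost per function is what the structured multiplication developed below is designed to beat.

    # Naive illustration (not from the paper) of recurrence (3): evaluate s_lambda(x_1,...,x_n)
    # by summing over all mu with lambda/mu a horizontal strip, i.e. the interlacing (4).
    from functools import lru_cache

    def horizontal_strips(lam):
        """All mu with lam_1 >= mu_1 >= lam_2 >= mu_2 >= ... (condition (4))."""
        bounds = [(lam[i + 1] if i + 1 < len(lam) else 0, lam[i]) for i in range(len(lam))]
        def build(i, acc):
            if i == len(bounds):
                yield tuple(v for v in acc if v > 0)
                return
            lo, hi = bounds[i]
            for v in range(lo, hi + 1):
                yield from build(i + 1, acc + [v])
        yield from build(0, [])

    def schur(lam, xs):
        lam, xs = tuple(v for v in lam if v > 0), tuple(xs)

        @lru_cache(maxsize=None)
        def rec(lam, k):
            # s_lambda in the variables x_1, ..., x_k
            if not lam:
                return 1.0
            if k == 0 or len(lam) > k:
                return 0.0
            return sum(rec(mu, k - 1) * xs[k - 1] ** (sum(lam) - sum(mu))
                       for mu in horizontal_strips(lam))

        return rec(lam, len(xs))

    print(schur((1,), [2.0, 3.0]))    # x1 + x2 = 5.0
    print(schur((2, 1), [2.0, 3.0]))  # x1*x2*(x1 + x2) = 30.0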

2. Theoretical grounds

In this section we also fix the positive integer N. For a fixed k = 1, . . . , n, we extend the set of all partitions λ = (λ1, . . . , λk) with |λ| ≤ N, and consider the set Pk of partitions λ satisfying the conditions

(6)  λ1 − λ2 ≤ N, λ2 − λ3 ≤ N, . . . , λk−1 − λk ≤ N, λk ≤ N.

Clearly, the number of the λs from the class Pk is (N + 1)^k. We order λ ∈ Pk in the reverse lexicographic order with respect to the k-tuple (λ1 − λ2, . . . , λk−1 − λk, λk) and assume that λ < ν, λ, ν ∈ Pk, if and only if λk < νk or, when λk = νk, λl−1 − λl = νl−1 − νl for l = p + 1, . . . , k and λp−1 − λp < νp−1 − νp for some p.

We build inductively the row vectors Fk(x1, . . . , xk), k = 1, . . . , n,

(7)  Fk(x1, . . . , xk) = (fλ(x1, . . . , xk) | λ ∈ Pk),

where the λs are ordered with respect to the above order. We define

F1(x1) = (1, s(1)(x1), s(2)(x1), . . . , s(N)(x1)) = (1, x1, x1^2, . . . , x1^N),

and, for k > 1,

(8)  Fk(x1, . . . , xk) = Fk−1(x1, . . . , xk−1) · Yk(xk),

where

(9)  Yk(xk) = (εµ,λ · xk^(|λ/µ|)), λ ∈ Pk, µ ∈ Pk−1,

and εµ,λ = 1 if λ/µ is a horizontal strip and εµ,λ = 0 otherwise. (We denote the matrix by Yk, celebrating Young, because (5) expresses the Young rule, which is a special case of the Littlewood–Richardson rule for calculating the product of two Schur functions.)

Lemma 2.1. If λ1 ≤ N for λ ∈ Pk, then

(10)  fλ(x1, . . . , xk) = sλ(x1, . . . , xk).

Proof. If we delete the columns and the rows of the matrix Yk(xk) from (9) which correspond, respectively, to partitions λ and µ with |λ| > N and |µ| > N, we obtain the matrix Zk(xk) from (5). Since F1(x1) = s^(1)(x1), the proof is completed by easy induction on k.

Remark 2.2. Once we have an algorithm for fast multiplication by Yk, we will use it to multiply by only those elements of Fk−1 corresponding to partitions of size not exceeding N, which by the preceding lemma are exactly the Schur functions we want. Therefore, even though the size of Yk is exponential ((N + 1)^(k−1) × (N + 1)^k), it will only cost O(n^2 KN) to multiply by the matrices Yk.

In what follows we denote by Im the m × m identity matrix and by Eij the matrix unit with 1 in position (i, j) and zeros elsewhere. The size of Eij will be clear from the context. If P, Q are two matrices, then we denote by P ⊗ Q their Kronecker product. By definition, if P = (pij) is an l × m matrix, then P ⊗ Q is an l × m block matrix with the block pij·Q in position (i, j). In particular, Im ⊗ Q is a block-diagonal matrix with m copies of Q on the diagonal. We fix the (N + 1) × (N + 1) matrix

(11)  A = E12 + E23 + · · · + EN,N+1,

i.e., the matrix with ones on the superdiagonal and zeros elsewhere,

and define the matrices

(12)  B2 = A^T, the transpose of A (ones on the subdiagonal and zeros elsewhere),

(13)  C2(x2) = IN+1 + x2·A + · · · + x2^N·A^N = (IN+1 − x2·A)^(−1), the upper triangular matrix whose (i, j) entry is x2^(j−i) for j ≥ i and 0 otherwise,

(14)  K2(x2) = IN+1 ⊗ C2(x2), the block-diagonal matrix with N + 1 copies of C2(x2) on the diagonal.

Finally, we consider the (N + 1) × (N + 1)^2 matrix

(15)  Q2(x2) = ( IN+1 | x2·B2 | . . . | x2^N·B2^N ),

with entries indexed by pairs (µ, λ), where µ = (µ1) ∈ P1 and λ = (λ1, λ2) ∈ P2.

Lemma 2.3. The following equation holds

(16)  Y2(x2) = Q2(x2) K2(x2).

Proof. The matrix Y2(x2) consists of N + 1 blocks of size (N + 1) × (N + 1). The entry (p, q) of the rth block, p, q, r = 0, 1, . . . , N, is indexed with the pair of partitions (µ, λ), where µ = (p), λ = (q + r, r), and is equal to x2^(q+2r−p) if r ≤ p ≤ q + r and to 0 otherwise. On the other hand, the corresponding entry of the matrix Q2(x2)K2(x2) is equal to the (p, q) entry of the matrix x2^r · B2^r · C2(x2). The equations (11), (12) and (13) give that

B2^r = E1+r,1 + E2+r,2 + · · · + EN+1,N+1−r,

(B2^r C2(x2))pq = 0 if p < r, and for p ≥ r, (B2^r C2(x2))pq is equal to the (p − r, q) entry (C2(x2))p−r,q of C2(x2). Hence (B2^r C2(x2))pq = x2^(q−(p−r)) if q ≥ p − r and (B2^r C2(x2))pq = 0 if q < p − r. In this way, all entries of Y2(x2) and Q2(x2)K2(x2) coincide, and this completes the proof.
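As a sanity check of Lemma 2.3 (ours, not part of the paper), one can build the matrices (11)-(15) for a small N with NumPy and compare Q2(x2)K2(x2) against the direct definition (9) of Y2(x2), using the column indexing from the proof (entry (p, q) of the rth block corresponds to µ = (p), λ = (q + r, r)):

    # Hypothetical numerical check of Lemma 2.3 for a small N (not from the paper).
    import numpy as np

    N, x2 = 3, 0.7
    I = np.eye(N + 1)

    A = np.diag(np.ones(N), k=1)                 # (11): ones on the superdiagonal
    B2 = A.T                                     # (12)
    C2 = np.linalg.inv(I - x2 * A)               # (13): (C2)_{ij} = x2^(j-i) for j >= i
    K2 = np.kron(I, C2)                          # (14)
    Q2 = np.hstack([x2**r * np.linalg.matrix_power(B2, r) for r in range(N + 1)])  # (15)

    # Direct definition (9): rows indexed by mu = (p), columns by lambda = (q + r, r);
    # the entry is x2^(q + 2r - p) when lambda/mu is a horizontal strip (r <= p <= q + r).
    Y2 = np.zeros((N + 1, (N + 1) ** 2))
    for p in range(N + 1):
        for r in range(N + 1):
            for q in range(N + 1):
                if r <= p <= q + r:
                    Y2[p, r * (N + 1) + q] = x2 ** (q + 2 * r - p)

    print(np.allclose(Y2, Q2 @ K2))              # True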

Now, for n ≥ 3 we define inductively the matrices

(17)  Un(xn) = I(N+1)^(n−1) + xn·(A ⊗ Bn−1) + · · · + xn^N·(A ⊗ Bn−1)^N = (I(N+1)^(n−1) − xn·(A ⊗ Bn−1))^(−1),

(18)  Vn(xn) = Kn−1(xn) = IN+1 ⊗ Cn−1(xn),

(19)  Cn(xn) = Un(xn) Vn(xn),

(20)  Kn(xn) = IN+1 ⊗ Cn(xn),

(21)  Bn = Bn−1 ⊗ IN+1 = B2 ⊗ I(N+1)^(n−2),

(22)  Qn(xn) = ( I(N+1)^(n−1) | xn·Bn | . . . | xn^N·Bn^N ).

The following theorem generalizes Lemma 2.3 for any n ≥ 2.

Theorem 2.4. The following equation holds for any n ≥ 2:

(23)  Yn(xn) = Qn(xn) Kn(xn).

Proof. We mimic the proof of Lemma 2.3. Consider the partitions

λ = (a1 + · · · + an, a2 + · · · + an, . . . , an),  µ = (b1 + · · · + bn−1, b2 + · · · + bn−1, . . . , bn−1).

Then the entry in Yn corresponding to (µ, λ) should equal

xn^(a1+2a2+···+nan−b1−2b2−···−(n−1)bn−1)

if λ/µ is a horizontal strip, i.e., if a1 + · · · + an ≥ b1 + · · · + bn−1 ≥ a2 + · · · + an ≥ · · · ≥ an−1 + an ≥ bn−1 ≥ an,

and 0 otherwise.

Since Yn(xn) = Qn(xn)Kn(xn) = ( Cn(xn) | xn·Bn·Cn(xn) | · · · | xn^N·Bn^N·Cn(xn) ), the (µ, λ) entry of Yn(xn) lies in the (1, an) block of Yn, i.e., in

xn^(an) · Bn^(an) · Cn(xn).

Call this matrix Tn. Since Bn = B2 ⊗ I(N+1)^(n−2), we have Bn^(an) = B2^(an) ⊗ I(N+1)^(n−2) and

Tn = xn^(an) · Bn^(an) · Un(xn) Vn(xn) = xn^(an) · Bn^(an) · Un(xn) (IN+1 ⊗ Cn−1(xn))

= xn^(an) · Bn^(an) · ( IN+1 ⊗ I(N+1)^(n−2) + xn·A ⊗ (Bn−1 Cn−1(xn)) + xn^2·A^2 ⊗ (Bn−1^2 Cn−1(xn)) + · · · + xn^N·A^N ⊗ (Bn−1^N Cn−1(xn)) )

= xn^(an) · ( B2^(an) ⊗ I(N+1)^(n−2) + xn·B2^(an) A ⊗ (Bn−1 Cn−1(xn)) + xn^2·B2^(an) A^2 ⊗ (Bn−1^2 Cn−1(xn)) + · · · + xn^N·B2^(an) A^N ⊗ (Bn−1^N Cn−1(xn)) ).

The (µ, λ) entry of Yn is thus in the (bn−1, an−1) block of Tn, which equals the (bn−1, an−1) block of

xn^(an) · xn^(an−1+an−bn−1) · B2^(an) A^(an−1+an−bn−1) ⊗ (Bn−1^(an−1+an−bn−1) Cn−1(xn)),

i.e.,

xn^(an−1+2an−bn−1) · Bn−1^(an−1+an−bn−1) · Cn−1(xn),

if an−1 + an ≥ bn−1 ≥ an, and 0 otherwise. The inductive step is thus clear, and we continue by induction to conclude that the (µ, λ) entry of Yn equals xn^(a1+2a2+···+nan−b1−2b2−···−(n−1)bn−1) if λ/µ is a horizontal strip and 0 otherwise.
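As with Lemma 2.3, Theorem 2.4 can be checked numerically on small dense matrices. The sketch below (ours, not part of the paper) builds Y2 and Y3 from (11)-(23), forms F3 = F1·Y2·Y3 as in (8), and confirms, via Lemma 2.1, that the entries at a few partitions with λ1 ≤ N are the familiar Schur polynomials; the index function mirrors the reverse lexicographic order of (6).

    # Hypothetical check of (11)-(23) for n = 3 and small N (not from the paper).
    import numpy as np

    N = 2
    x = [2.0, 3.0, 5.0]                                   # x1, x2, x3

    def A_mat():                                          # (11)
        return np.diag(np.ones(N), k=1)

    def B_mat(n):                                         # (12), (21): B_n = B_2 kron I
        return np.kron(A_mat().T, np.eye((N + 1) ** (n - 2)))

    def C_mat(n, xk):                                     # (13), (17)-(19)
        if n == 2:
            return np.linalg.inv(np.eye(N + 1) - xk * A_mat())
        size = (N + 1) ** (n - 1)
        U = np.linalg.inv(np.eye(size) - xk * np.kron(A_mat(), B_mat(n - 1)))
        V = np.kron(np.eye(N + 1), C_mat(n - 1, xk))
        return U @ V

    def Y_mat(n, xk):                                     # (15)-(16), (20), (22)-(23)
        B = B_mat(n)
        Q = np.hstack([xk**r * np.linalg.matrix_power(B, r) for r in range(N + 1)])
        K = np.kron(np.eye(N + 1), C_mat(n, xk))
        return Q @ K

    F = np.array([x[0] ** j for j in range(N + 1)])       # F_1(x1) = (1, x1, ..., x1^N)
    for k in range(2, 4):
        F = F @ Y_mat(k, x[k - 1])                        # (8): F_k = F_{k-1} Y_k(x_k)

    def index_in_Pk(lam):                                 # reverse lexicographic index, cf. (6)
        idx = lam[-1]
        for m in range(len(lam) - 2, -1, -1):
            idx = idx * (N + 1) + lam[m] - lam[m + 1]
        return idx

    e1, e2 = sum(x), x[0]*x[1] + x[0]*x[2] + x[1]*x[2]
    print(F[index_in_Pk((1, 0, 0))], e1)                        # s_(1)   = 10.0
    print(F[index_in_Pk((1, 1, 0))], e2)                        # s_(1,1) = 31.0
    print(F[index_in_Pk((2, 0, 0))], sum(v*v for v in x) + e2)  # s_(2)   = 69.0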

3. The Algorithm

3.1. Computing only the desired Schur functions. As mentioned in Remark 2.2, a straightforward implementation of Yn(xn) would yield Schur functions corresponding to the entire set of partitions Pn. The number of such functions is (N + 1)^n. However, we only wish to compute the Schur functions over the partitions of size at most N, and Pn contains many more partitions than those.

In the algorithm presented in this section, we devise a method that leverages the structure explored thus far, but only to compute the Schur functions on the desired partitions. The way we do this is by keeping track of the indices of the desired partitions within the vectors Fk(x1, . . . , xk) for k = 1, . . . , n, and only maintaining the Schur functions over those partitions. Whenever we do a multiplication by a matrix, we do only the work necessary to compute the next set of desired Schur functions.

The key reason we are able to do so efficiently is that the structured matrix Yk(xk) requires us only to reference partitions that are smaller than the ones being processed during computation. Thus, by maintaining all partitions up to a certain size, we guarantee the ability to proceed in computing the solution.

3.2. Pseudocode notation. We will use pseudocode based on MATLAB notation. Data arrays are indexed with parentheses "()" and are 1-indexed. Note that this is different from the indexing of partitions in the set Pn, which is 0-indexed, as this is more elegant mathematically. Further, the use of the colon ":" in the notation array(index1:index2) indicates taking all values in array between index1 and index2, inclusive.

3.3. Algorithm and analysis.

3.3.1. Helper functions. Let Φ(N, n) be the set of partitions of size at most N that use at most n rows. Let φ(N, n) = |Φ(N, n)|. We define a function computeIndices(N,n) that finds the (0-indexed) indices of Φ(N, n) within the list of partitions in Pn (sorted, as before, in reverse lexicographic order). Note that all indices generated by computeIndices(N,n) must be less than (N + 1)^n since that is the number of partitions in Pn.

computeIndices(N, n)
    partitions ← enumeratePartitions(N, n, 0)
    for m ← 1 to length(partitions) do
        indices(m) ← partitionToIndex(partitions(m), N, n)
    end for
    Return indices

enumeratePartitions(N, n, min)
    for m ← min to ⌊N/n⌋ do
        if n = 1 then
            Add (m) to partitions
        else
            subPartitions ← enumeratePartitions(N − m, n − 1, m)
            Add m to the nth position of all partitions in subPartitions
            Add subPartitions to partitions
        end if
    end for
    Return partitions

partitionToIndex(partition, N, n)
    index ← partition(n)
    for m ← (n − 1) down to 1 do
        index ← index · (N + 1) + partition(m) − partition(m + 1)
    end for
    Return index

The function enumeratePartitions enumerates all partitions in the set Φ(N, n) in reverse lexicographic order. It works by stepping through possible values for the last element and recursively enumerating the rest of the partition. To analyze its complexity, we simply observe that a constant amount of work is done per recursion level per partition enumerated. Thus, the running time of enumeratePartitions(N,n,0) is simply O(n · φ(N, n)). The function partitionToIndex takes a partition and returns its index within the sorted set Pn. It simply takes the difference between consecutive elements and interprets the result as an (N + 1)-ary number with the elements increasing in significance from first to last. This function clearly takes O(n) time per partition, so its running time on φ(N, n) partitions is O(n · φ(N, n)). Thus, computeIndices(N,n) also takes O(n · φ(N, n)) time. Note that the output of computeIndices is sorted in ascending order.
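A direct Python transcription of these three helpers (ours; the paper's pseudocode above is the authoritative version) may help make the indexing concrete:

    # Hypothetical transcription of the helper routines (not the paper's code).

    def enumerate_partitions(N, n, minimum=0):
        """Phi(N, n) in reverse lexicographic order: partitions of size <= N with <= n parts,
        every part at least `minimum` (used by the recursion)."""
        result = []
        for m in range(minimum, N // n + 1):
            if n == 1:
                result.append((m,))
            else:
                for sub in enumerate_partitions(N - m, n - 1, m):
                    result.append(sub + (m,))      # m becomes the n-th (smallest) entry
        return result

    def partition_to_index(partition, N):
        """Index in P_n: consecutive differences read as an (N+1)-ary number,
        with the last (smallest) part most significant."""
        n = len(partition)
        index = partition[n - 1]
        for m in range(n - 2, -1, -1):
            index = index * (N + 1) + partition[m] - partition[m + 1]
        return index

    def compute_indices(N, n):
        return [partition_to_index(p, N) for p in enumerate_partitions(N, n)]

    # For N = 2, n = 2: Phi(2, 2) = (0,0), (1,0), (2,0), (1,1) with indices 0, 1, 2, 3.
    print(enumerate_partitions(2, 2))
    print(compute_indices(2, 2))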

3.3.2. Matrix functions. In this section we will describe five matrix functions mulY, mulQ, mulK, mulC, and mulU, which simulate multiplication with the matrices Yn(xn), Qn(xn), Kn(xn), Cn(xn), and Un(xn), respectively.

We first present an algorithm mulY that simulates multiplication by Yn(xn) = Qn(xn)Kn(xn), but that only computes the Schur functions corresponding to partitions in Φ(N, n). Suppose Yn(xn) contains a non-zero element at (i, j). Recall from (9) that this implies that the ith partition in Pn−1, call it µ, is a subpartition of the jth partition in Pn, call it λ. Thus, |µ| ≤ |λ|, and if λ is in Φ(N, n), then µ must be in Φ(N, n − 1).

From the above argument, we see that the partitions in Φ(N, n) will only depend on partitions in Φ(N, n − 1), so we only need to simulate a very sparse part of the original Yn(xn) matrix. mulY takes as input the Schur functions over Φ(N, n − 1), the indices of Φ(N, n − 1) in Pn−1 (as computed by computeIndices(N,n − 1)), and xn. mulY outputs the Schur functions over Φ(N, n) (as well as their indices).

mulY(input, inputIndices, x, n, N)
    (output, outputIndices) ← mulQ(input, inputIndices, x, n, N)
    output ← mulK(output, outputIndices, x, n, N)
    Return (output, outputIndices)

mulQ(input, inputIndices, x, n, N)
    blockLength ← (N + 1)^(n−1)
    offset ← (N + 1)^(n−2)

    # compute the indices in the output
    outputIndices ← computeIndices(N, n)
    H ← constructHashTable(outputIndices)

    for m ← 1 to length(outputIndices) do
        curIndex ← outputIndices(m)
        if curIndex < blockLength then
            # this works because the input and output indices less than (N + 1)^(n−1) match
            output(m) ← input(m)
        else if curIndex (mod blockLength) < blockLength − offset then
            output(m) ← x · output(H(curIndex − blockLength + offset))
        end if
    end for
    Return (output, outputIndices)

mulY simply runs mulQ and mulK. From (22), we designed mulQ to process its input in blocks of length (N + 1)^(n−1). The first block is simply copied from the input. Note that since Φ(N, n) is a superset of Φ(N, n − 1), and the partitions are ordered in reverse lexicographic order, the first φ(N, n − 1) entries of outputIndices are exactly equal to inputIndices. For the remaining blocks, we note that each block is a shifted (by offset) version of the previous block multiplied by xn. Since we need to look up values in output based on their indices, we place all of the indices into a hash table so we can do constant time lookups within the output array. The function constructHashTable(outputIndices) just constructs a hash table that maps each index to its location within the array outputIndices.

For a partition λ = (λ1, λ2, . . . , λn) at curIndex in Φ(N, n), we know we will never miss on a hash table lookup in the for loop, because the partition located at curIndex − blockLength + offset is just λ† = (λ1, λ2, . . . , λn−1, λn − 1). This fact can be derived from the reverse lexicographic ordering. Since |λ†| < |λ|, we know that λ† is also in Φ(N, n). As argued before, computeIndices(N, n) takes O(n · φ(N, n)) time. Constructing the hash table costs linear time in the number of entries, or O(φ(N, n)). The for loop of mulQ takes O(φ(N, n)) time since hash table look-ups take constant time; therefore the total time to multiply by Qn(xn) using mulQ is O(n · φ(N, n)).
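The hash-table argument rests on a small index identity, which is easy to confirm (our check, not the paper's): for λ with λn ≥ 1, subtracting blockLength and adding offset to its index yields exactly the index of λ† = (λ1, . . . , λn−1, λn − 1).

    # Hypothetical sanity check of the index identity behind mulQ's hash lookups.
    N, n = 3, 4
    blockLength = (N + 1) ** (n - 1)
    offset = (N + 1) ** (n - 2)

    def idx(p):                                  # same digit scheme as partitionToIndex
        v = p[-1]
        for m in range(len(p) - 2, -1, -1):
            v = v * (N + 1) + p[m] - p[m + 1]
        return v

    lam = (3, 2, 2, 1)
    lam_dagger = (3, 2, 2, 0)                    # lambda with its last part decreased by one
    print(idx(lam) - blockLength + offset == idx(lam_dagger))   # True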

The following function simulates multiplying its input by Kn(xn) = IN+1 ⊗ Cn(xn), which simply multiplies each of its input's (N + 1) blocks of size (N + 1)^(n−1) by Cn(xn). The values in each block are found by scanning pointers across the indices array, which is sorted in ascending order.

mulK(input, indices, x, n, N)
    blockLength ← (N + 1)^(n−1)
    minPointer ← 1
    maxPointer ← 1
    for blockIndex ← 0 to ⌊max(indices) / blockLength⌋ do
        # figure out which indices are in the current block
        curIndicesMin ← blockIndex · blockLength
        curIndicesMax ← curIndicesMin + blockLength − 1

        # scan pointers forward
        Scan minPointer to point at the smallest index in indices that is at least curIndicesMin
        Scan maxPointer to point at the largest index in indices that is at most curIndicesMax

        # extract the data for the current block
        curData ← input(minPointer:maxPointer)
        # extract the indices for the current block and subtract the block offset
        curIndices ← indices(minPointer:maxPointer) − curIndicesMin

        # run mulC on block of data
        output(minPointer:maxPointer) ← mulC(curData, curIndices, x, n, N)
    end for
    Return output

Since mulK, mulC, and mulU are called recursively on inputs of varying length, we will analyze their complexity in terms of the number of input elements. Let l be the length of the input to a call to mulK, and let li be the number of input indices in the ith block of the for loop. Note that Σ_{i=0}^{N} li = l. Excluding calls to mulC, multiplying by Kn(xn) using mulK takes O(l) time for pointer scanning and data manipulation. If we let TC(n, l) denote the time taken to multiply a vector of length l by Cn(xn) using mulC, then the time to multiply by Kn(xn) is O(l) + Σ_{i=0}^{N} TC(n, li).

We define mulC to simulate multiplying by Cn(xn) = Un(xn)Kn−1(xn) by calling mulU and mulK. Note that K1 is the identity matrix, so we can skip the mulK step when n = 2.

mulC(input, indices, x, n, N)
    output ← mulU(input, indices, x, n, N)
    if n > 2 then
        output ← mulK(output, indices, x, n − 1, N)
    end if
    Return output

Finally, we define mulU:

mulU(input, indices, x, n, N)
    blockLength ← (N + 1)^(n−2)
    if n = 2 then
        offset ← 0
    else
        offset ← (N + 1)^(n−3)
    end if
    H ← constructHashTable(indices)
    for m ← 1 to length(input) do
        curIndex ← indices(m)
        if curIndex ≥ blockLength AND curIndex (mod blockLength) < blockLength − offset then
            output(m) ← x · output(H(curIndex − blockLength + offset)) + input(m)
        else
            output(m) ← input(m)
        end if
    end for
    Return output

The function mulU simulates multiplication by Un(xn) by computing a linear time backsolve using Un(xn)^(−1). Suppose we are given a vector v and wish to compute w = v · Un(xn), or equivalently v = w · Un(xn)^(−1). Let w = (w1, w2, . . . , w(N+1)^(n−1)) and v = (v1, v2, . . . , v(N+1)^(n−1)), where each vector is split into (N + 1) blocks of length (N + 1)^(n−2). Recalling (17), we see that the first block of v is equal to the first block of w, and then within each remaining block we have the relation vi = wi − x · wi−(N+1)^(n−2)+(N+1)^(n−3) if i is in the first (N + 1)^(n−2) − (N + 1)^(n−3) elements of the block, and vi = wi otherwise. Rearranging terms, we have

(24)  wi = x · wi−(N+1)^(n−2)+(N+1)^(n−3) + vi, if property A is satisfied;  wi = vi, otherwise,

where property A is satisfied if and only if i is not in the first block and i (mod (N + 1)^(n−2)) < (N + 1)^(n−2) − (N + 1)^(n−3). The above pseudocode for mulU precisely implements (24), yielding a linear time algorithm for multiplication by Un(xn). Again, we know that the hash table will always hit on lookup because it will always be looking for a partition smaller than the current one being processed in the for loop. If a partition is in the set being processed, all partitions smaller than it must also be in the set.

The complexity analysis for mulU is similar to that for mulQ. Assuming the length of the input is l, constructHashTable takes O(l) time, and the for loop takes O(l) time since it does constant work per input element. Thus, the total running time of mulU is O(l), which, combined with the running time for mulK, implies that the running time of mulC is

(25)  TC(n, l) = O(l), if n = 1;  O(l) + Σ_{i=0}^{N} TC(n − 1, li), otherwise.

Clearly, for each level of recursion (each value of n), a linear amount of work is done in the size of the original input. Thus, solving this recurrence yields

(26)  TC(n, l) = O(nl).

Finally, this implies that multiplying by Yn(xn) takes O(n · φ(N, n)) + O(φ(N, n)) + O(n · φ(N, n)) = O(n · φ(N, n)) time.

To compute the Schur functions for (1) from scratch, note that Φ(N, 1) is just the set of partitions {(0), (1), . . . , (N)}, and the vector of Schur functions over those partitions is just (1, x1, x1^2, . . . , x1^N), which takes O(N) time to compute. Then, computing the Schur functions over Φ(N, k) for k = 2, 3, . . . , n requires multiplication by Y2(x2), Y3(x3), . . . , Yn(xn), which takes time at most

(27)  Σ_{k=2}^{n} O(k · φ(N, k)) < O(n^2 · φ(N, n)).

As mentioned in the introduction, φ(N, n) ≤ φ(N, N) = KN = O(e^(π√(2N/3))) [9, p. 116], so the running time can also be bounded by O(n^2 KN) = O(n^2 e^(π√(2N/3))).
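Since mulU is the only routine whose correctness is not immediate from the matrix definitions, the following small check (ours, not from the paper) applies the backsolve (24) to a dense vector and compares the result with an explicit multiplication by Un(xn) built from (17) and (21):

    # Hypothetical check that the backsolve (24) reproduces multiplication by U_n(x_n).
    import numpy as np

    N, n, x = 2, 3, 0.7
    blockLength = (N + 1) ** (n - 2)
    offset = (N + 1) ** (n - 3) if n > 2 else 0
    size = (N + 1) ** (n - 1)

    def mul_u(v):
        """w = v . U_n(x_n) via (24), for a dense input vector indexed 0 .. size-1."""
        w = np.empty_like(v)
        for i in range(size):
            # property A: not in the first block, and within the first
            # blockLength - offset positions of its block
            if i >= blockLength and i % blockLength < blockLength - offset:
                w[i] = x * w[i - blockLength + offset] + v[i]
            else:
                w[i] = v[i]
        return w

    # dense U_n(x_n) from (17), with B_{n-1} = B_2 kron I as in (21)
    A = np.diag(np.ones(N), k=1)
    B_prev = np.kron(A.T, np.eye((N + 1) ** (n - 3)))
    U = np.linalg.inv(np.eye(size) - x * np.kron(A, B_prev))

    v = np.random.default_rng(0).random(size)
    print(np.allclose(mul_u(v), v @ U))          # True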

References

1. G. J. Byers and F. Takawira, Spatially and temporally correlated MIMO channels: modeling and capacity analysis, IEEE Transactions on Vehicular Technology 53 (2004), 634–643.
2. Michael Clausen and Ulrich Baum, Fast Fourier transforms for symmetric groups: theory and implementation, Math. Comp. 61 (1993), no. 204, 833–847. MR MR1192969 (94a:20028)

3. James W. Cooley and John W. Tukey, An algorithm for the machine calculation of complex Fourier series, Math. Comp. 19 (1965), 297–301. MR MR0178586 (31 #2843)
4. J. Demmel and P. Koev, Accurate and efficient evaluation of Schur and Jack functions, Math. Comp. 75 (2006), 223–239.
5. Persi Diaconis and Daniel Rockmore, Efficient computation of the Fourier transform on finite groups, J. Amer. Math. Soc. 3 (1990), no. 2, 297–332. MR MR1030655 (92g:20024)
6. H. Gao, P. J. Smith, and M. V. Clark, Theoretical reliability of MMSE linear diversity combining in Rayleigh-fading additive interference channels, IEEE Transactions on Communications 46 (1998), 666–672.
7. A. J. Grant, Performance analysis of transmit beamforming, IEEE Transactions on Communications 53 (2005), 738–744.
8. R. Gutiérrez, J. Rodriguez, and A. J. Sáez, Approximation of hypergeometric functions with matricial argument through their development in series of zonal polynomials, Electron. Trans. Numer. Anal. 11 (2000), 121–130.
9. G. H. Hardy, Ramanujan: Twelve lectures on subjects suggested by his life and work, AMS Chelsea, New York, 1999.
10. M. Kang and M.-S. Alouini, Largest eigenvalue of complex Wishart matrices and performance analysis of MIMO MRC systems, IEEE Journal on Selected Areas in Communications 21 (2003), no. 3, 418–431.
11. Plamen Koev and Alan Edelman, The efficient evaluation of the hypergeometric function of a matrix argument, Math. Comp. 75 (2006), no. 254, 833–846 (electronic). MR MR2196994 (2006k:33007)
12. I. G. Macdonald, Symmetric functions and Hall polynomials, Second ed., Oxford University Press, New York, 1995.
13. M. R. McKay and I. B. Collings, Capacity bounds for correlated Rician MIMO channels, 2005 IEEE International Conference on Communications, ICC 2005, vol. 2, 16–20 May 2005, pp. 772–776.
14. M. R. McKay and I. B. Collings, General capacity bounds for spatially correlated Rician MIMO channels, IEEE Transactions on Information Theory 51 (2005), 3121–3145.
15. R. J. Muirhead, Aspects of multivariate statistical theory, John Wiley & Sons Inc., New York, 1982.
16. A. Ozyildirim and Y. Tanik, Outage probability analysis of a CDMA system with antenna arrays in a correlated Rayleigh environment, IEEE Military Communications Conference Proceedings, MILCOM 1999, vol. 2, 31 Oct.–3 Nov. 1999, pp. 939–943.
17. A. Ozyildirim and Y. Tanik, SIR statistics in antenna arrays in the presence of correlated Rayleigh fading, IEEE VTS 50th Vehicular Technology Conference, VTC 1999 – Fall, vol. 1, 19–22 September 1999, pp. 67–71.
18. Markus Püschel, Cooley–Tukey FFT like algorithms for the DCT, Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, 2003, pp. 501–504.
19. Markus Püschel and José M. F. Moura, The algebraic approach to the discrete cosine and sine transforms and their fast algorithms, SIAM J. Comput. 32 (2003), no. 5, 1280–1316. MR MR2001274 (2004j:42023)
20. Hyundong Shin and Jae Hong Lee, Capacity of multiple-antenna fading channels: spatial fading correlation, double scattering, and keyhole, IEEE Transactions on Information Theory 49 (2003), 2636–2647.
21. V. Smidl and A. Quinn, Fast variational PCA for functional analysis of dynamic image sequences, Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, ISPA 2003, vol. 1, 18–20 September 2003, pp. 555–560.

Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, U.S.A. E-mail address: cychan@mit.edu

Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria E-mail address: drensky@math.bas.bg

Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, U.S.A. E-mail address: edelman@math.mit.edu

Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, U.S.A. E-mail address: plamen@math.mit.edu
