The Capacity Region of the Strong Interference Channel With Common Information

Ivana Maric, WINLAB, Rutgers University, Piscataway, NJ 08854, ivanam@winlab.rutgers.edu
Roy D. Yates, WINLAB, Rutgers University, Piscataway, NJ 08854, ryates@winlab.rutgers.edu
Gerhard Kramer, Bell Labs, Lucent Technologies, Murray Hill, New Jersey 07974, gkr@bell-labs.com

Abstract—Transmitter cooperation enabled by dedicated links allows for a partial message exchange between encoders. After cooperation, each encoder knows a common message partially describing the two original messages, and its own private message containing the information that the encoders were not able to exchange. We consider the interference channel with both private and common messages at the encoders. A private message at an encoder is intended for the corresponding decoder, whereas the common message is to be received at both decoders. We derive conditions under which the capacity region of this channel coincides with the capacity region of the channel in which both private messages are required at both receivers. We show that the obtained conditions are equivalent to the strong interference conditions determined by Costa and El Gamal for the interference channel with independent messages.

(This work was supported by NSF Grant ANI-0338805.)

I. INTRODUCTION

A problem in which encoders partially cooperate in a discrete memoryless channel was proposed by Willems for a multiple access channel (MAC) [1]. To model the transmitter cooperation, two communication links with finite capacities are introduced between the two encoders. The amount of information exchanged between the two transmitters is bounded by the capacities of these links. The proposed discrete channel model enables investigation of transmitter cooperation gains. For a Gaussian network with two transmitters and two receivers, improvements in the achievable rates due to node cooperation were demonstrated in [2]–[6]. In [2], the transmitters fully cooperate by exchanging their intended messages and then jointly encode them using dirty paper coding. Other cooperation schemes were analyzed in [3]–[5].

In the discrete memoryless MAC with partially cooperating encoders [1], the outcome of the cooperation is referred to as a conference. Willems determined the capacity region of this channel and thus specified the optimum conference. His result was recently extended to a compound MAC in which two decoders wish to decode the messages sent from the encoders [7]. The same form of conference as in [1] was shown to remain optimal.

When cooperating over the links with finite capacities, the encoders obtain partial information about each other's messages. This information is referred to as the common message, as it is known to both encoders after the conference. In addition, each encoder will still have independent information, referred to as the private message, as this message remains unknown to the other encoder. Both common and private messages are decoded at a single decoder in the case of the MAC [1], or at both receivers in the case of a compound MAC [7].

[Fig. 1. Interference channel with common information: encoder 1 maps $(W_1, W_0)$ to $X_1$ and encoder 2 maps $(W_2, W_0)$ to $X_2$; the channel $p(y_1, y_2 \mid x_1, x_2)$ delivers $Y_1$ to decoder 1, which outputs $(\hat{W}_1, \hat{W}_0(1))$, and $Y_2$ to decoder 2, which outputs $(\hat{W}_2, \hat{W}_0(2))$.]

In this paper, we consider the communication situation in which two encoders each have a private message and a common message they wish to send. Each decoder is interested in only the private message sent by its corresponding encoder.
Both decoders wish to decode the common message. We refer to this channel as the interference channel with common information. The communication system is shown in Figure 1. Without common information, this channel reduces to the interference channel [8], [9], for which the capacity region is known in the case of strong interference [10], i.e., when

$$I(X_1; Y_1 \mid X_2) \le I(X_1; Y_2 \mid X_2) \qquad (1)$$
$$I(X_2; Y_2 \mid X_1) \le I(X_2; Y_1 \mid X_1) \qquad (2)$$

for all product distributions on the inputs $X_1$ and $X_2$. The capacity region in this case coincides with the capacity region of the two-sender, two-receiver channel in which both messages are decoded at both receivers, as determined by Ahlswede [11]. In this paper, we determine the capacity region of interference channels with a common message if

$$I(X_1; Y_1 \mid X_2, U) \le I(X_1; Y_2 \mid X_2, U) \qquad (3)$$
$$I(X_2; Y_2 \mid X_1, U) \le I(X_2; Y_1 \mid X_1, U) \qquad (4)$$

for all joint distributions $p(u, x_1, x_2)$ that factor as $p(u)\,p(x_1 \mid u)\,p(x_2 \mid u)$. We further show that this class of interference channels is the same as the class determined by (1) and (2) with independent $X_1$ and $X_2$.
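To make conditions (1)–(2) concrete, the following sketch (not from the paper; the channel, alphabets, and sampling scheme are invented for illustration) estimates both sides of (1) for a randomly drawn binary discrete memoryless interference channel over sampled product input distributions. Such a sampled check can only falsify the strong interference property, never certify it, since (1)–(2) must hold for all product distributions; condition (2) would be tested symmetrically.

```python
import numpy as np

rng = np.random.default_rng(0)

# chan[x1, x2, y1, y2] = p(y1, y2 | x1, x2): an arbitrary binary-alphabet
# channel, used purely for illustration.
chan = rng.random((2, 2, 2, 2))
chan /= chan.sum(axis=(2, 3), keepdims=True)

py1 = chan.sum(axis=3)  # marginal p(y1 | x1, x2)
py2 = chan.sum(axis=2)  # marginal p(y2 | x1, x2)

def mi(pxy):
    """Mutual information I(X;Y) in bits from a joint matrix pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px * py)[mask])).sum())

def cond_mi_x1(p1, p2, py_x):
    """I(X1; Y | X2) for independent inputs p1(x1), p2(x2)."""
    return sum(p2[x2] * mi(p1[:, None] * py_x[:, x2, :]) for x2 in range(2))

violated = False
for _ in range(1000):
    p1, p2 = rng.dirichlet([1.0, 1.0]), rng.dirichlet([1.0, 1.0])
    # Condition (1): I(X1; Y1 | X2) <= I(X1; Y2 | X2).
    if cond_mi_x1(p1, p2, py1) > cond_mi_x1(p1, p2, py2) + 1e-12:
        violated = True
        break
print("condition (1) violated on a sampled distribution" if violated
      else "condition (1) held on all sampled product distributions")
```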

II. CHANNEL MODEL AND STATEMENT OF RESULT

The channel consists of finite sets $\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}_1, \mathcal{Y}_2$ and a conditional probability distribution $p(y_1, y_2 \mid x_1, x_2)$. Symbols $x_t \in \mathcal{X}_t$ are channel inputs and $y_t \in \mathcal{Y}_t$ are the corresponding channel outputs. Each encoder $t$, $t = 1, 2$, wishes to send a private message $W_t \in \{1, \ldots, M_t\}$ to decoder $t$ in $n$ channel uses. In addition, a common message $W_0 \in \{1, \ldots, M_0\}$ needs to be communicated from the encoders to both decoders, as shown in Figure 1. The channel is memoryless and time-invariant in the sense that

$$P(y_{1,i}, y_{2,i} \mid x_1^i, x_2^i, y_1^{i-1}, y_2^{i-1}, w_0, w_1, w_2) = p(y_{1,i}, y_{2,i} \mid x_{1,i}, x_{2,i}) \qquad (5)$$

where $x_t^i = (x_{t,1}, \ldots, x_{t,i})$ and $p(\cdot, \cdot \mid \cdot, \cdot)$ is the channel probability distribution. We follow the convention of dropping subscripts of probability distributions if the arguments of the distributions are lowercase versions of the corresponding random variables. To simplify notation, we also drop the superscript when $i = n$.

The messages $W_0$, $W_1$ and $W_2$ are independently generated at the beginning of each block of $n$ channel uses. Encoder $t$, $t = 1, 2$, maps the common message $W_0$ and its private message $W_t$ into a codeword:

$$x_1 = f_1(W_0, W_1) \qquad (6)$$
$$x_2 = f_2(W_0, W_2). \qquad (7)$$

Each decoder estimates the common message $W_0$ and its private message $W_t$ based on the received $n$-sequence:

$$(\hat{W}_0(1), \hat{W}_1) = g_1(y_1) \qquad (8)$$
$$(\hat{W}_0(2), \hat{W}_2) = g_2(y_2). \qquad (9)$$

An $(M_0, M_1, M_2, n, P_e)$ code for the channel consists of two encoding functions $f_1, f_2$, two decoding functions $g_1, g_2$, and a maximum error probability

$$P_e = \max\{P_{e,1}, P_{e,2}\} \qquad (10)$$

where

$$P_{e,t} = \max_{(w_0, w_1, w_2)} P\left[\, g_t(Y_t) \ne (w_0, w_t) \mid (w_0, w_1, w_2)\ \text{sent} \,\right], \quad t = 1, 2. \qquad (11)$$

A rate triple $(R_0, R_1, R_2)$ is achievable if, for any $\epsilon > 0$, there is an $(M_0, M_1, M_2, n, P_e)$ code such that $P_e \le \epsilon$ and $M_t \ge 2^{nR_t}$, $t = 0, 1, 2$. The capacity region of the interference channel with common information is the closure of the set of all achievable rate triples $(R_0, R_1, R_2)$.

The next theorem is the main result of this paper. It gives conditions under which the capacity region coincides with the capacity region of the channel in which both private messages are required at both receivers.

Theorem 1: For an interference channel $p(y_1, y_2 \mid x_1, x_2)$ with common information satisfying

$$I(X_1; Y_1 \mid X_2) \le I(X_1; Y_2 \mid X_2) \qquad (12)$$
$$I(X_2; Y_2 \mid X_1) \le I(X_2; Y_1 \mid X_1) \qquad (13)$$

for all product distributions on $X_1$ and $X_2$, the capacity region is the union of sets of rate triples $(R_0, R_1, R_2)$ satisfying

$$R_1 \le I(X_1; Y_1 \mid X_2, U) \qquad (14)$$
$$R_2 \le I(X_2; Y_2 \mid X_1, U) \qquad (15)$$
$$R_1 + R_2 \le \min\{\, I(X_1, X_2; Y_1 \mid U),\ I(X_1, X_2; Y_2 \mid U) \,\} \qquad (16)$$
$$R_0 + R_1 + R_2 \le \min\{\, I(X_1, X_2; Y_1),\ I(X_1, X_2; Y_2) \,\} \qquad (17)$$

where the union is over joint distributions that factor as

$$p(u, x_1, x_2, y_1, y_2) = p(u)\, p(x_1 \mid u)\, p(x_2 \mid u)\, p(y_1, y_2 \mid x_1, x_2). \qquad (18)$$
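As a numerical companion to Theorem 1 (a sketch under invented alphabets and distributions, not part of the paper), the following evaluates the four bounds (14)–(17) for one fixed distribution of the factorized form (18):

```python
import numpy as np

rng = np.random.default_rng(1)

U, X, Y = 2, 2, 2                       # |U|, |X_t|, |Y_t| (illustrative)
pu = rng.dirichlet(np.ones(U))          # p(u)
px1_u = rng.dirichlet(np.ones(X), U)    # p(x1|u)
px2_u = rng.dirichlet(np.ones(X), U)    # p(x2|u)
chan = rng.random((X, X, Y, Y))         # p(y1, y2 | x1, x2)
chan /= chan.sum(axis=(2, 3), keepdims=True)

# Joint p(u, x1, x2, y1, y2) with the factorization (18).
joint = (pu[:, None, None, None, None]
         * px1_u[:, :, None, None, None]
         * px2_u[:, None, :, None, None]
         * chan[None, :, :, :, :])

def H(p):
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cmi(j, a, b, c=()):
    """I(A; B | C); a, b, c are tuples of axes of the joint array j."""
    m = lambda keep: j.sum(axis=tuple(ax for ax in range(j.ndim) if ax not in keep))
    return H(m(a + c)) + H(m(b + c)) - H(m(a + b + c)) - H(m(c))

# Axes: 0=U, 1=X1, 2=X2, 3=Y1, 4=Y2.
b14 = cmi(joint, (1,), (3,), (2, 0))                           # (14): R1
b15 = cmi(joint, (2,), (4,), (1, 0))                           # (15): R2
b16 = min(cmi(joint, (1, 2), (3,), (0,)),
          cmi(joint, (1, 2), (4,), (0,)))                      # (16): R1+R2
b17 = min(cmi(joint, (1, 2), (3,)), cmi(joint, (1, 2), (4,)))  # (17): R0+R1+R2
print(f"(14) R1 <= {b14:.3f}   (15) R2 <= {b15:.3f}")
print(f"(16) R1+R2 <= {b16:.3f}   (17) R0+R1+R2 <= {b17:.3f}")
```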

III. THE MAC WITH COMMON INFORMATION AND ACHIEVABILITY

The interference channel with common information is closely related to a discrete channel model in which private and common messages are transmitted to a single receiver, referred to as the MAC with common information [12]. The capacity region $\mathcal{C}_{\mathrm{MAC}}$ of this channel was shown in [12] and [13] to be the union of sets of rate triples $(R_0, R_1, R_2)$ satisfying

$$R_1 \le I(X_1; Y \mid X_2, U)$$
$$R_2 \le I(X_2; Y \mid X_1, U)$$
$$R_1 + R_2 \le I(X_1, X_2; Y \mid U)$$
$$R_0 + R_1 + R_2 \le I(X_1, X_2; Y) \qquad (19)$$

where the union is over all $p(u, x_1, x_2)$ that factor as $p(u)\,p(x_1 \mid u)\,p(x_2 \mid u)$. (In [13], the convex hull operation used in [12] was shown to be unnecessary.)

In this paper we analyze a channel model with two receivers. When each receiver wishes to decode both private messages and the common message, the considered channel becomes a compound MAC with common information. This channel defines two MACs with common information, one for each receiver $t = 1, 2$:

$$p(y_1 \mid x_1, x_2) = \sum_{y_2} p(y_1, y_2 \mid x_1, x_2) \qquad (20)$$
$$p(y_2 \mid x_1, x_2) = \sum_{y_1} p(y_1, y_2 \mid x_1, x_2). \qquad (21)$$

As described in [7, Section IV], the encoding and decoding strategy proposed by Willems in [13] can be adopted for the compound MAC with common information to guarantee the achievability of the rates in

$$\mathcal{C}_{\mathrm{CMAC}} = \mathcal{C}_{\mathrm{MAC1}} \cap \mathcal{C}_{\mathrm{MAC2}} \qquad (22)$$

where $\mathcal{C}_{\mathrm{MAC}t}$, $t = 1, 2$, satisfies the bounds (19) with $Y$ replaced by $Y_t$, and the union is over all $p(u)\,p(x_1 \mid u)\,p(x_2 \mid u)$. We remark that under the conditions (12) and (13), the regions (14)–(17) and (22) are the same.

Consider next the strong interference channel with common information. The achievability of the rates of Theorem 1 in the case in which both messages are required at the receivers guarantees that these rates are also achieved when the weaker constraint of decoding a single private message is imposed at the receivers. Hence the proof of achievability in Theorem 1 is immediate. We next prove the converse.

IV. CONVERSE

Consider an $(M_0, M_1, M_2, n, P_e)$ code for the interference channel with common information. Applying Fano's inequality results in

$$H(W_0, W_1 \mid Y_1) \le P_{e,1} \log(M_0 M_1) + h(P_{e,1}) \triangleq n\,\epsilon_{1,n} \qquad (23)$$
$$H(W_0, W_2 \mid Y_2) \le P_{e,2} \log(M_0 M_2) + h(P_{e,2}) \triangleq n\,\epsilon_{2,n} \qquad (24)$$

where $h(\cdot)$ is the binary entropy function and $\epsilon_{t,n} \to 0$ as $P_{e,t} \to 0$ (and hence as $P_e \to 0$). It follows that

$$H(W_1 \mid W_0) \le I(W_1; Y_1 \mid W_0) + n\,\epsilon_{1,n} \qquad (25)$$
$$H(W_2 \mid W_0) \le I(W_2; Y_2 \mid W_0) + n\,\epsilon_{2,n}. \qquad (26)$$

Since conditioning cannot increase entropy, from (26) it follows that

$$H(W_2 \mid W_0, W_1) \le I(W_2; Y_2 \mid W_0, W_1) + n\,\epsilon_{2,n}. \qquad (27)$$

To prove the converse, we will use the data processing inequality for the following Markov chains.

Lemma 1: The following are Markov chains for the interference channel with a common message:

$$W_1 \to (X_1, W_0, W_2) \to Y_1 \qquad (28)$$
$$W_2 \to (X_2, W_0, W_1) \to Y_2 \qquad (29)$$
$$(W_0, W_t) \to (X_1, X_2) \to Y_t, \quad t = 1, 2. \qquad (30)$$

We will also need the following data processing inequality.

Lemma 2: For a Markov chain $W \to (X, V) \to Y$,

$$I(W; Y \mid V) \le I(X; Y \mid V). \qquad (31)$$

Proof: See [14].
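Lemma 2 is easy to sanity-check numerically. The sketch below (illustrative alphabets and random distributions, not from the paper; $V$ plays the role of the conditioning variable in (31)) builds a joint distribution obeying the Markov chain $W \to (X, V) \to Y$ and verifies $I(W; Y \mid V) \le I(X; Y \mid V)$:

```python
import numpy as np

rng = np.random.default_rng(3)
W, V, X, Y = 3, 2, 3, 3                    # alphabet sizes (illustrative)

pv = rng.dirichlet(np.ones(V))             # p(v)
pw_v = rng.dirichlet(np.ones(W), V)        # p(w|v)
px_wv = rng.dirichlet(np.ones(X), (V, W))  # p(x|w,v)
py_xv = rng.dirichlet(np.ones(Y), (V, X))  # p(y|x,v): no direct dependence on w,
                                           # which is exactly W -> (X, V) -> Y

# joint[v, w, x, y]
joint = (pv[:, None, None, None] * pw_v[:, :, None, None]
         * px_wv[:, :, :, None] * py_xv[:, None, :, :])

def H(p):
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cmi(j, a, b, c):
    """I(A; B | C) from the joint array j; a, b, c are tuples of axes."""
    m = lambda keep: j.sum(axis=tuple(ax for ax in range(j.ndim) if ax not in keep))
    return H(m(a + c)) + H(m(b + c)) - H(m(a + b + c)) - H(m(c))

# Axes: 0=V, 1=W, 2=X, 3=Y.
assert cmi(joint, (1,), (3,), (0,)) <= cmi(joint, (2,), (3,), (0,)) + 1e-12
print("I(W;Y|V) <= I(X;Y|V) verified on a random instance")
```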
Applying Lemma 2 to the Markov chains (28)–(30) yields

$$I(W_1; Y_1 \mid W_0, W_2) \le I(X_1; Y_1 \mid W_0, W_2) \qquad (32)$$
$$I(W_2; Y_2 \mid W_0, W_1) \le I(X_2; Y_2 \mid W_0, W_1) \qquad (33)$$
$$I(W_0, W_1; Y_1) \le I(W_0, X_1; Y_1). \qquad (34)$$

We first consider the bound (17) at decoder 1. We have

$$n(R_0 + R_1 + R_2) \overset{(a)}{\le} H(W_0, W_1) + H(W_2 \mid W_0, W_1)$$
$$\overset{(b)}{\le} I(W_0, W_1; Y_1) + I(W_2; Y_2 \mid W_0, W_1) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$\overset{(c)}{\le} I(W_0, X_1; Y_1) + I(X_2; Y_2 \mid W_0, W_1) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$\overset{(d)}{=} I(W_0, X_1; Y_1) + I(X_2; Y_2 \mid W_0, W_1, X_1) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$\overset{(e)}{\le} I(W_0, X_1; Y_1) + I(X_2; Y_2 \mid X_1, W_0) + n\epsilon_{1,n} + n\epsilon_{2,n} \qquad (35)$$

where $(a)$ follows from the independence of $W_0, W_1, W_2$; $(b)$ from Fano's inequality (23) and from (27); $(c)$ from (33) and (34); $(d)$ from (6), since $X_1$ is a function of $(W_0, W_1)$; and $(e)$ because conditioning cannot increase entropy and $Y_2$ depends on the messages only through $(X_1, X_2)$. If

$$I(X_2; Y_2 \mid X_1, W_0) \le I(X_2; Y_1 \mid X_1, W_0) \qquad (36)$$

then it follows from (35) that

$$n(R_0 + R_1 + R_2) \le I(W_0, X_1; Y_1) + I(X_2; Y_1 \mid X_1, W_0) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$= I(W_0, X_1, X_2; Y_1) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$\le \sum_{i=1}^{n} I(X_{1,i}, X_{2,i}; Y_{1,i}) + n\epsilon_{1,n} + n\epsilon_{2,n} \qquad (37)$$

where the last step uses (30) and the memorylessness of the channel. To prove that the bound (16) at decoder 1 is valid, we consider

$$n(R_1 + R_2) \le H(W_1 \mid W_0) + H(W_2 \mid W_0)$$
$$\le I(W_1; Y_1 \mid W_0) + I(W_2; Y_2 \mid W_0, W_1) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$\le I(X_1; Y_1 \mid W_0) + I(X_2; Y_2 \mid X_1, W_0) + n\epsilon_{1,n} + n\epsilon_{2,n} \qquad (38)$$

where again the first step follows from the independence of $W_0, W_1, W_2$, the second from (25) and (27), and the third combines Lemma 2 with (6) and (7), as in (35). Again, if (36) holds, then (38) becomes

$$n(R_1 + R_2) \le I(X_1; Y_1 \mid W_0) + I(X_2; Y_1 \mid X_1, W_0) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$= I(X_1, X_2; Y_1 \mid W_0) + n\epsilon_{1,n} + n\epsilon_{2,n}$$
$$\le \sum_{i=1}^{n} I(X_{1,i}, X_{2,i}; Y_{1,i} \mid U_i) + n\epsilon_{1,n} + n\epsilon_{2,n} \qquad (39)$$

where we have used the memorylessness of the channel and defined

$$U_i \triangleq W_0, \qquad i = 1, \ldots, n. \qquad (40)$$

The same approach as in (35) and (38) can be used to obtain

$$n R_1 \le \sum_{i=1}^{n} I(X_{1,i}; Y_{1,i} \mid X_{2,i}, U_i) + n\epsilon_{1,n} \qquad (41)$$
$$n R_2 \le \sum_{i=1}^{n} I(X_{2,i}; Y_{2,i} \mid X_{1,i}, U_i) + n\epsilon_{2,n} \qquad (42)$$
$$n(R_1 + R_2) \le \sum_{i=1}^{n} I(X_{1,i}, X_{2,i}; Y_{2,i} \mid U_i) + n\epsilon_{1,n} + n\epsilon_{2,n} \qquad (43)$$
$$n(R_0 + R_1 + R_2) \le \sum_{i=1}^{n} I(X_{1,i}, X_{2,i}; Y_{2,i}) + n\epsilon_{1,n} + n\epsilon_{2,n}. \qquad (44)$$

Writing $U = W_0$ for the conditioning $n$-sequences (cf. (40)), we can restate (36) and the corresponding condition needed at decoder 2 as

$$I(X_2; Y_2 \mid X_1, U) \le I(X_2; Y_1 \mid X_1, U) \qquad (45)$$
$$I(X_1; Y_1 \mid X_2, U) \le I(X_1; Y_2 \mid X_2, U) \qquad (46)$$

where $X_1, X_2, Y_1, Y_2$ here denote the transmitted and received $n$-sequences. We can now proceed as in [10], [14] to show that the conditions (45) and (46) reduce to the per-letter conditions

$$I(X_1; Y_1 \mid X_2, U) \le I(X_1; Y_2 \mid X_2, U) \qquad (47)$$
$$I(X_2; Y_2 \mid X_1, U) \le I(X_2; Y_1 \mid X_1, U) \qquad (48)$$

to be satisfied for every distribution of the form (18). It was shown in [14] that (47) and (48) are equivalent to the strong interference conditions (1) and (2), thus proving Theorem 1. In the following, we present a more direct proof.

Theorem 2: The condition

$$I(X_2; Y_2 \mid X_1, U) \le I(X_2; Y_1 \mid X_1, U) \qquad (49)$$

is satisfied for all input distributions of the form (18) if and only if the vector version of the strong interference condition (2), given by

$$I(X_2; Y_2 \mid X_1) \le I(X_2; Y_1 \mid X_1), \qquad (50)$$

is satisfied for all product distributions on $X_1$ and $X_2$.

Proof: To prove that (50) implies (49), we write the mutual information in (49) as

$$I(X_2; Y_2 \mid X_1, U) = \sum_{u} p(u)\, I(X_2; Y_2 \mid X_1, U = u). \qquad (51)$$

We next observe that the following is a Markov chain:

$$(U, Y_1^{i-1}, Y_2^{i-1}) \to (X_{1,i}, X_{2,i}) \to (Y_{1,i}, Y_{2,i}) \qquad (52)$$

and therefore

$$p(y_1, y_2 \mid x_1, x_2) = \prod_{i=1}^{n} p(y_{1,i}, y_{2,i} \mid x_{1,i}, x_{2,i}). \qquad (53)$$

Furthermore, the same reasoning as in [13] shows that (6), (7) and (40) imply $X_1 \to U \to X_2$, that is,

$$p(x_1, x_2 \mid u) = p(x_1 \mid u)\, p(x_2 \mid u). \qquad (54)$$

Since (50) holds for any product distribution on the inputs, it follows that for every $u$ we have

$$I(X_2; Y_2 \mid X_1, U = u) \le I(X_2; Y_1 \mid X_1, U = u) \qquad (55)$$

because, by (54), $X_1$ and $X_2$ are independent given $U = u$. Inserting this bound into (51) gives the desired result. To show the other direction, we observe that since (49) holds for all input distributions of the form (18), it must also hold when $U$ is independent of the pair $(X_1, X_2)$; by choosing such a distribution on $(U, X_1, X_2)$ with independent $X_1$ and $X_2$, we obtain (50). □

From Theorem 2, it follows that the conditions (45) and (46), when satisfied for all distributions as in (18), are equivalent to the corresponding vector conditions satisfied for all independent inputs. Combining Theorem 2 and the Lemma in [10], we conclude that the conditions (46) and (45) reduce to the strong interference conditions (1) and (2), respectively, thus proving Theorem 1. □

The equivalence of the per-letter conditions (3)–(4) and the strong interference conditions (1)–(2) relies on the fact that (3)–(4) must hold for all input distributions of the form (18). At this point, it is not clear whether these conditions can be tightened to require only a subset of distributions (for example, only capacity-achieving distributions) to satisfy (3)–(4).
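The key step (51) can be visualized numerically: conditioned on $U = u$, the inputs are independent by (54), so each term of the $p(u)$-weighted sum is a conditional mutual information evaluated under a product input distribution, which is exactly where the strong interference condition (2) applies. The following sketch (invented alphabets and distributions; written per-letter rather than for $n$-sequences, which suffices to illustrate the decomposition) computes $I(X_2; Y_2 \mid X_1, U)$ this way:

```python
import numpy as np

rng = np.random.default_rng(2)

X, Y, U = 2, 2, 3                        # illustrative alphabet sizes
chan = rng.random((X, X, Y, Y))          # p(y1, y2 | x1, x2)
chan /= chan.sum(axis=(2, 3), keepdims=True)
py2 = chan.sum(axis=2)                   # p(y2 | x1, x2)

pu = rng.dirichlet(np.ones(U))           # p(u)
px1_u = rng.dirichlet(np.ones(X), U)     # p(x1|u)
px2_u = rng.dirichlet(np.ones(X), U)     # p(x2|u)

def mi(pxy):
    """I(X;Y) in bits from a joint matrix pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px * py)[mask])).sum())

def i_x2_y2_given_x1(p1, p2):
    """I(X2; Y2 | X1) under the product input distribution p1(x1) p2(x2)."""
    return sum(p1[x1] * mi(p2[:, None] * py2[x1]) for x1 in range(X))

# Each term of (51) is evaluated under the product distribution
# p(x1|u) p(x2|u) -- exactly the setting of condition (2) and bound (55).
per_u = [i_x2_y2_given_x1(px1_u[u], px2_u[u]) for u in range(U)]
print("I(X2; Y2 | X1, U) =", float(np.dot(pu, per_u)))
print("per-u product-input terms:", [round(v, 4) for v in per_u])
```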
V. GAUSSIAN CHANNEL

Consider the Gaussian interference channel in the standard form [9], [15]:

$$Y_1 = X_1 + \sqrt{a}\, X_2 + Z_1 \qquad (56)$$
$$Y_2 = \sqrt{b}\, X_1 + X_2 + Z_2 \qquad (57)$$

where the $Z_t$ are independent, zero-mean, unit-variance Gaussian random variables. The code definition is the same as that given in Section II, with the addition of the power constraints

$$\frac{1}{n} \sum_{i=1}^{n} E\!\left[X_{t,i}^2\right] \le P_t, \quad t = 1, 2. \qquad (58)$$
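For the standard form (56)–(57) with Gaussian inputs at full power, the two sides of conditions (1)–(2) take simple closed forms, so $a \ge 1$ and $b \ge 1$ make them hold; for the Gaussian channel the strong interference regime is characterized by exactly these gain conditions. A minimal arithmetic sketch (the numerical values are assumed examples, not from the paper):

```python
import math

def C(x):
    """Gaussian 'capacity function' (1/2)log2(1+x), in bits per channel use."""
    return 0.5 * math.log2(1 + x)

# Assumed example values; the strong interference regime needs a, b >= 1.
P1, P2, a, b = 2.0, 1.5, 1.8, 1.3

lhs1, rhs1 = C(P1), C(b * P1)   # (1): I(X1;Y1|X2) vs I(X1;Y2|X2)
lhs2, rhs2 = C(P2), C(a * P2)   # (2): I(X2;Y2|X1) vs I(X2;Y1|X1)
print(f"(1): {lhs1:.3f} <= {rhs1:.3f} -> {lhs1 <= rhs1}")
print(f"(2): {lhs2:.3f} <= {rhs2:.3f} -> {lhs2 <= rhs2}")
```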
From the maximum-entropy theorem [16] it follows that Gaussian inputs are optimal in Theorem 1. We have the following result.

Corollary 1: When the strong interference conditions $a \ge 1$ and $b \ge 1$ are satisfied, the capacity region of the Gaussian strong interference channel with common information is given by the union of sets of rate triples $(R_0, R_1, R_2)$ satisfying

$$R_1 \le \tfrac{1}{2} \log\left(1 + \alpha P_1\right) \qquad (59)$$
$$R_2 \le \tfrac{1}{2} \log\left(1 + \beta P_2\right) \qquad (60)$$
$$R_1 + R_2 \le \min\left\{ \tfrac{1}{2} \log\left(1 + \alpha P_1 + a \beta P_2\right),\ \tfrac{1}{2} \log\left(1 + b \alpha P_1 + \beta P_2\right) \right\} \qquad (61)$$
$$R_0 + R_1 + R_2 \le \min\left\{ \tfrac{1}{2} \log\left(1 + P_1 + a P_2 + 2\sqrt{a\, \bar{\alpha} \bar{\beta} P_1 P_2}\right),\ \tfrac{1}{2} \log\left(1 + b P_1 + P_2 + 2\sqrt{b\, \bar{\alpha} \bar{\beta} P_1 P_2}\right) \right\} \qquad (62)$$

where the union is over all $(\alpha, \beta)$ with $0 \le \alpha \le 1$, $0 \le \beta \le 1$, $\bar{\alpha} = 1 - \alpha$, and $\bar{\beta} = 1 - \beta$. For any choice of values $\alpha$ and $\beta$, the powers allocated to sending the common message at transmitters 1 and 2 are $\bar{\alpha} P_1$ and $\bar{\beta} P_2$, respectively. The private messages are transmitted with the remaining powers $\alpha P_1$ and $\beta P_2$.
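The region of Corollary 1 is straightforward to enumerate. The sketch below (assumed example powers and gains, not from the paper) sweeps the power splits $(\alpha, \beta)$ over a grid and evaluates the bounds (59)–(62) at each point; the capacity region is the union over $(\alpha, \beta)$ of the triples satisfying these bounds:

```python
import math

def C(x):
    return 0.5 * math.log2(1 + x)

P1, P2, a, b = 2.0, 1.5, 1.8, 1.3       # assumed examples; needs a, b >= 1
grid = []
steps = 4
for i in range(steps + 1):
    for j in range(steps + 1):
        al, be = i / steps, j / steps    # private-power fractions alpha, beta
        alb, beb = 1 - al, 1 - be        # common-message fractions
        r1 = C(al * P1)                                          # (59)
        r2 = C(be * P2)                                          # (60)
        r12 = min(C(al * P1 + a * be * P2),
                  C(b * al * P1 + be * P2))                      # (61)
        r012 = min(C(P1 + a * P2 + 2 * math.sqrt(a * alb * beb * P1 * P2)),
                   C(b * P1 + P2 + 2 * math.sqrt(b * alb * beb * P1 * P2)))  # (62)
        grid.append((al, be, r1, r2, r12, r012))

al, be, r1, r2, r12, r012 = max(grid, key=lambda g: g[5])
print(f"largest total-rate bound at alpha={al}, beta={be}: "
      f"R0+R1+R2 <= {r012:.3f}")
```

As expected, the total-rate bound (62) is largest when all power is devoted to the common message ($\alpha = \beta = 0$), at the cost of the private-rate bounds (59)–(60) collapsing to zero.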
VI. DISCUSSION

Communication systems with encoders that need to send both private and common information arise naturally when encoders can partially cooperate, as in [1], [7]. After such cooperation, the common information consists of two indexes, each partially describing one of the two original messages. The assumption of the model that the entire common message is decoded simplifies the problem. However, a receiver interested in a message from only one encoder, as is the case in the interference channel, will be interested in only a part of the common message. Understanding such communication problems appears to be much more challenging and is the subject of our future work.
REFERENCES

[1] F. M. J. Willems, "The discrete memoryless multiple access channel with partially cooperating encoders," IEEE Trans. Inf. Theory, vol. 29, no. 3, pp. 441–445, May 1983.
[2] N. Jindal, U. Mitra, and A. Goldsmith, "Capacity of ad-hoc networks with node cooperation," in Proc. IEEE Int. Symp. Inf. Theory, 2004, p. 271.
[3] A. Høst-Madsen, "A new achievable rate for cooperative diversity based on generalized writing on dirty paper," in Proc. IEEE Int. Symp. Inf. Theory, June 2003, p. 317.
[4] A. Høst-Madsen, "On the achievable rate for receiver cooperation in ad-hoc networks," in Proc. IEEE Int. Symp. Inf. Theory, June 2004, p. 272.
[5] A. Høst-Madsen, "On the capacity of cooperative diversity," IEEE Trans. Inf. Theory, submitted.
[6] C. Ng and A. Goldsmith, "Transmitter cooperation in ad-hoc wireless networks: Does dirty-paper coding beat relaying?" in Proc. IEEE Inf. Theory Workshop, Oct. 2004.
[7] I. Maric, R. D. Yates, and G. Kramer, "The discrete memoryless compound multiple access channel with conferencing encoders," in Proc. IEEE Int. Symp. Inf. Theory, Sept. 2005.
[8] H. Sato, "Two-user communication channels," IEEE Trans. Inf. Theory, vol. 23, no. 3, pp. 295–304, May 1977.
[9] A. B. Carleial, "Interference channels," IEEE Trans. Inf. Theory, vol. 24, no. 1, pp. 60–70, Jan. 1978.
[10] M. H. M. Costa and A. A. El Gamal, "The capacity region of the discrete memoryless interference channel with strong interference," IEEE Trans. Inf. Theory, vol. 33, no. 5, pp. 710–711, Sept. 1987.
[11] R. Ahlswede, "The capacity region of a channel with two senders and two receivers," Ann. Probab., vol. 2, no. 5, pp. 805–814, 1974.
[12] D. Slepian and J. K. Wolf, "A coding theorem for multiple access channels with correlated sources," Bell Syst. Tech. J., vol. 52, pp. 1037–1076, 1973.
[13] F. M. J. Willems, "Information theoretical results for the discrete memoryless multiple access channel," Ph.D. dissertation, Katholieke Universiteit Leuven, Belgium, Oct. 1982.
[14] I. Maric, R. D. Yates, and G. Kramer, "The strong interference channel with common information," in Proc. Allerton Conf. on Communications, Control and Computing, Sept. 2005.
[15] G. Kramer, "Outer bounds on the capacity of Gaussian interference channels," IEEE Trans. Inf. Theory, vol. 50, no. 3, pp. 581–586, Mar. 2004.
[16] T. Cover and J. Thomas, Elements of Information Theory. New York: John Wiley & Sons, 1991.