New converses in the theory of identification via channels

  • Authors: Y. Steinberg
  • Affiliation: Dept. of Electrical Engineering, Ben-Gurion University of the Negev, Beer-Sheva
  • Venue: IEEE Transactions on Information Theory
  • Year: 2006


Abstract

New converses for identification via arbitrary single-user and multiple-access channels, with finite first- and second-type probabilities of error, are developed. For the arbitrary single-user channel, it is shown that the (λ1, λ2)-identification capacity is upper-bounded by the λ-capacity, and the optimistic (λ1, λ2)-identification capacity is upper-bounded by the optimistic λ-capacity, for any λ > λ1 + λ2. The bounds become tight in the limit of vanishing probabilities of error, thus generalizing previous results by Han and Verdú (1992), who showed that the identification capacity equals the transmission capacity for channels satisfying the strong converse of the channel coding theorem. A by-product of the new identification converses is a general formula for the optimistic λ-capacity. An outer bound on the (λ1, λ2)-identification capacity region of an arbitrary multiple-access channel is developed. A consequence of this bound is that the identification capacity region equals the transmission capacity region for any stationary, finite-memory multiple-access channel. The key tool in proving these bounds is the partial resolvability of a channel, a new notion in resolvability theory, which deals with approximation of the output statistics on a suitably chosen part of the output alphabet. This notion of approximation enables sharp bounds on identification for arbitrary channels, and extends these bounds to the multiple-access channel.
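In symbols, the single-user converse stated in the abstract can be sketched as follows (notation assumed here: C_ID denotes identification capacity and C_λ denotes λ-capacity; the paper's own notation may differ):

```latex
% Single-user converse (sketch; C_{ID} and C_\lambda notation assumed):
% the (\lambda_1,\lambda_2)-identification capacity is bounded by the
% \lambda-capacity whenever \lambda exceeds the sum of the error levels.
C_{\mathrm{ID}}(\lambda_1, \lambda_2) \;\le\; C_{\lambda}
\qquad \text{for every } \lambda > \lambda_1 + \lambda_2,
```

with the analogous inequality holding between the optimistic identification capacity and the optimistic λ-capacity; as λ1, λ2 → 0 the bound becomes tight, recovering the Han-Verdú equality for channels satisfying the strong converse.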