# Information Capacity & Channel Coding MCQs in ITCTCN For All Exams

## Information Capacity & Channel Coding MCQs in Information Theory, Coding Techniques & Communication Networks for all competitive, university & SPPU online exams 2020

Before solving these, I suggest reviewing the underlying theory first to make these objectives easier.

1. The code rate r, with k information bits and n total bits, is defined as

a. r = k/n
b. k = n/r
c. r = k * n
d. n = r * k

Ans - A
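The definition in Q1 can be sketched in a couple of lines; the (7, 4) block code from Q2 is used as the worked example.

```python
def code_rate(k: int, n: int) -> float:
    """Code rate r = k / n for k information bits out of n total bits."""
    return k / n

# For the (7, 4) block code: 4 information bits, 7 total bits
r = code_rate(4, 7)
print(f"r = 4/7 = {r:.3f}")  # r = 4/7 = 0.571
```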

2. For a (7, 4) block code, 7 is the total number of bits and 4 is the number of
a. Information bits
b. Redundant bits
c. Total bits- information bits
d. None of the above

Ans - A

3. In digital communication system, if both power and bandwidth are limited, then which mechanism/choice is preferred?
a. Power efficient modulation
b. Bandwidth efficient modulation
c. Error control coding
d. Trellis coded modulation

Ans - D

4. The capacity of a channel is:
a. Number of digits used in coding
b. Volume of information it can transmit
c. Maximum rate of information transmission
d. Bandwidth required for information

Ans - C

5. Channel capacity is exactly equal to
a. Bandwidth of the channel
b. Amount of information per second
c. Noise rate in the channel
d. None of the above

Ans - B

6. Parity bit coding cannot be used to determine
a. Errors in more than a single bit
b. Which bit is in error
c. Both a & b
d. None of the above

Ans - C
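A quick sketch of why both limitations in Q6 hold for a single even-parity bit: an odd number of flipped bits is detected but never located, and an even number of flipped bits passes unnoticed. The helper names here are illustrative, not from any standard library.

```python
def add_even_parity(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """True if the codeword still has even parity (no error detected)."""
    return sum(codeword) % 2 == 0

word = add_even_parity([1, 0, 1, 1])   # [1, 0, 1, 1, 1]
assert parity_ok(word)

single_err = word.copy()
single_err[2] ^= 1                     # flip one bit
assert not parity_ok(single_err)       # detected, but position unknown

double_err = word.copy()
double_err[0] ^= 1
double_err[1] ^= 1                     # flip two bits
assert parity_ok(double_err)           # NOT detected
```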

7. Which among the following is/are the essential condition/s for a good error control coding technique?
a. Faster coding & decoding methods
b. Better error correcting capability
c. Maximum transfer of information in bits/sec
d. All of the above

Ans - D

8. In a linear code, the minimum Hamming distance between any two code words is ______ the minimum weight of any nonzero code word.
a. Less than
b. Greater than
c. Equal to
d. None of the above

Ans - C

9. Which among the following represents the code in which codewords consist of message bits and parity bits separately?
a. Block Codes
b. Systematic Codes
c. Code Rate
d. Hamming Distance

Ans - B

10. For Hamming distance dmin and number of errors D, the condition for receiving an invalid codeword is
a. D ≤ dmin + 1
b. D ≤ dmin - 1
c. D ≤ 1 - dmin
d. D ≤ dmin

Ans- B

11. The minimum distance of a linear block code (dmin) is equal to the minimum number of rows or columns of Hᵀ whose _____ is equal to the zero vector.
a. sum
b. difference
c. product
d. division

Ans - A
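The property in Q11 can be verified by brute force. The parity-check matrix H below is the standard one for the (7, 4) Hamming code (an assumed example, not given in the question): the smallest set of columns of H (rows of Hᵀ) that sums to the zero vector modulo 2 has size 3, which is exactly dmin of that code.

```python
from itertools import combinations

# Parity-check matrix H of the (7, 4) Hamming code (standard textbook form)
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

cols = list(zip(*H))  # columns of H = rows of H transpose

def dmin_from_H(columns):
    """Smallest number of columns whose mod-2 sum is the zero vector."""
    for d in range(1, len(columns) + 1):
        for combo in combinations(columns, d):
            if all(sum(bits) % 2 == 0 for bits in zip(*combo)):
                return d
    return None

print(dmin_from_H(cols))  # 3
```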

12. According to linearity property, the ________ of two code words in a cyclic code is also a valid code word.
a. sum
b. difference
c. product
d. division

Ans - A

13. With respect to power-bandwidth trade-off, for reducing the transmit power requirement, the bandwidth needs to be ________.
a. Increased
b. Constant
c. Decreased
d. None of the above

Ans - A

14. Error correction and error detection are related to:
a. source coding
b. channel coding
c. cryptography
d. None of the above

Ans - B

15. Graphical representation of linear block code is known as
a. Pi graph
b. Matrix
c. Tanner graph
d. None of the above

Ans - C

16. For a linear code,
a. The sum of code words is also a code word
b. The all-zero code word is a code word
c. The minimum Hamming distance between two code words is equal to the minimum weight of a nonzero code word
d. All of the above

Ans - D
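All three properties in Q16 can be checked exhaustively on a small code. The generator matrix G below is a standard systematic form of the (7, 4) Hamming code, assumed here purely for illustration.

```python
from itertools import product

# Systematic generator matrix of the (7, 4) Hamming code (assumed example)
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    """Codeword c = m * G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = {encode(m) for m in product((0, 1), repeat=4)}

# (b) the all-zero word is a codeword
assert (0,) * 7 in codewords
# (a) closure: the mod-2 sum (XOR) of any two codewords is a codeword
assert all(tuple(x ^ y for x, y in zip(c1, c2)) in codewords
           for c1 in codewords for c2 in codewords)
# (c) dmin equals the minimum weight over nonzero codewords
min_weight = min(sum(c) for c in codewords if any(c))
print("minimum nonzero weight:", min_weight)  # 3
```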

17. For Hamming distance dmin and t errors in the received word, the condition to be able to correct the errors is
a. 2t + 1 ≤ dmin
b. 2t + 2 ≤ dmin
c. 2t + 1 ≤ 2dmin
d. Both a and b

Ans - D
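The condition 2t + 1 ≤ dmin rearranges to t = ⌊(dmin − 1)/2⌋, which ties Q17 to Q18 and Q21: a code with dmin = 7, such as the (23, 12) Golay code, corrects 3 errors.

```python
def correctable_errors(d_min: int) -> int:
    """Largest t satisfying 2t + 1 <= d_min, i.e. t = (d_min - 1) // 2."""
    return (d_min - 1) // 2

print(correctable_errors(7))  # 3  (unextended Golay code, dmin = 7)
print(correctable_errors(3))  # 1  (e.g. the (7, 4) Hamming code)
```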

18. The minimum distance of the (unextended) Golay code is
a. 8
b. 9
c. 7
d. 6

Ans - C

19. Orthogonality of two codes means
a. The integrated product of two different code words is zero
b. The integrated product of two different code words is one
c. The integrated product of two same code words is zero
d. None of the above

Ans - A

20. In Maximum Distance Separable (MDS) codes, the minimum distance is one more than the number of _________.
a. Information bits
b. Symbol bits
c. Parity check bits
d. None of the above

Ans - C

21. The Golay code (23, 12) is a code of length 23 which can correct
a. 2 errors
b. 3 errors
c. 5 errors
d. 8 errors

Ans - B

22. The capacity of a Gaussian channel is
a. C = 2B(1 + S/N) bits/s
b. C = B²(1 + S/N) bits/s
c. C = B log₂(1 + S/N) bits/s
d. C = B(1 + S/N)² bits/s

Ans - C

23. According to the Shannon-Hartley theorem,
a. The channel capacity becomes infinite with infinite bandwidth
b. The channel capacity does not become infinite with infinite bandwidth
c. Has a tradeoff between bandwidth and Signal to noise ratio
d. Both b and c are correct

Ans - D

24. In Frame Check Sequence (FCS), which code is used if character length is 6 bit and generates 12 bit parity check bits?
a. CRC-12
b. CRC-16
c. CRC-32
d. CRC-CCITT

Ans - A

25. The capacity of a band-limited additive white Gaussian noise (AWGN) channel is given by C = W log₂(1 + P/(σ²W)) bits per second (bps), where W is the channel bandwidth, P is the average power received and σ² is the one-sided power spectral density of the AWGN. For a fixed P/σ² = 1000, the channel capacity (in kbps) with infinite bandwidth (W → ∞) is approximately
a. 1.44
b. 1.08
c. 0.72
d. 0.36

Ans - A
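The infinite-bandwidth limit in Q25 follows from W log₂(1 + P/(σ²W)) → (P/σ²) log₂e as W → ∞. A quick numerical check with P/σ² = 1000:

```python
import math

p_over_sigma2 = 1000  # fixed P/sigma^2 from the question

def capacity(w_hz: float) -> float:
    """C = W * log2(1 + (P/sigma^2) / W) in bits per second."""
    return w_hz * math.log2(1 + p_over_sigma2 / w_hz)

for w in (1e3, 1e5, 1e7):
    print(f"W = {w:.0e} Hz -> C = {capacity(w):.1f} bps")

limit = p_over_sigma2 * math.log2(math.e)
print(f"Limit as W -> inf: {limit:.1f} bps")  # ~1442.7 bps, i.e. ~1.44 kbps
```

The capacity saturates near 1.44 kbps, matching option a.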

26. If the channel is bandlimited to 6 kHz & signal to noise ratio is 16, what would be the capacity of channel?
a. 15.15 kbps
b. 24.74 kbps
c. 30.12 kbps
d. 52.18 kbps

Ans - B
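Q26 can be worked out directly from C = B log₂(1 + S/N); the computed value, about 24.5 kbps, is nearest to option b.

```python
import math

b_hz = 6000  # channel bandlimited to 6 kHz
snr = 16     # linear signal-to-noise ratio

c = b_hz * math.log2(1 + snr)
print(f"C = {c / 1000:.2f} kbps")  # about 24.5 kbps
```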

27. For a Gaussian channel of 1 MHz bandwidth with a signal power to noise spectral density ratio of about 10⁴ Hz, what would be the maximum information rate?
a. 12000 bits/sec
b. 14400 bits/sec
c. 28000 bits/sec
d. 32500 bits/sec

Ans - B
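For Q27, the noise power is the spectral density times the bandwidth, so S/N = (S/N₀)/B = 10⁴/10⁶ = 0.01, and the capacity comes out near option b:

```python
import math

bandwidth = 1e6        # 1 MHz
s_over_n0 = 1e4        # signal power / noise spectral density, in Hz

snr = s_over_n0 / bandwidth        # S/N = 0.01
c = bandwidth * math.log2(1 + snr)
print(f"C = {c:.0f} bits/sec")     # roughly 14400 bits/sec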


If you found this helpful, please leave a comment and share it with your friends.

Thanks! Stay with us.