Coding Techniques Important Questions
1 Consider a source X which generates four symbols with probabilities P(x1) = 0.5,
P(x2) = 0.3, P(x3) = 0.1 and P(x4) = 0.1. Find the
(i) source entropy,
(ii) average codeword length for the prefix code {0, 10, 110, 111} corresponding
to x1, x2, x3 and x4 respectively,
(iii) efficiency of the code.
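A quick numerical check for Question 1 (a minimal Python sketch; the probabilities and codeword lengths are taken directly from the question):

    import math

    # Question 1: source entropy, average codeword length, and efficiency
    # of the prefix code {0, 10, 110, 111}.
    probs = [0.5, 0.3, 0.1, 0.1]
    lengths = [1, 2, 3, 3]  # codeword lengths of 0, 10, 110, 111

    H = -sum(p * math.log2(p) for p in probs)        # entropy, bits/symbol
    L = sum(p * l for p, l in zip(probs, lengths))   # average codeword length
    print(f"H(X) = {H:.4f} bits, L = {L:.2f} bits, efficiency = {H/L:.2%}")
    # prints H(X) = 1.6855 bits, L = 1.70 bits, efficiency = 99.15%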
2 Explain the properties of entropy.
3 Explain mutual information and its properties.
4 Consider the binary channel shown in Figure 1.1. Let the a priori probabilities of
sending the binary symbols be p0 and p1, where p0 + p1 = 1. Find the a posteriori
probabilities
P(X = 0 | Y = 0) and P(X = 1 | Y = 1).
Figure 1.1
5 A discrete memoryless source has five symbols X1, X2, X3, X4 and X5 with
probabilities 0.4, 0.19, 0.16, 0.15 and 0.1 respectively. Construct a Shannon-Fano
code for the source and calculate the code efficiency.
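For Question 5, one common Shannon-Fano construction sorts the symbols by probability and recursively splits them into two groups whose totals are as nearly equal as possible; a sketch under that convention (split points, and hence the codes, can differ between textbook variants):

    import math

    def shannon_fano(symbols):
        # symbols: list of (name, prob) sorted by descending probability
        if len(symbols) == 1:
            return {symbols[0][0]: ""}
        total, running, split, best = sum(p for _, p in symbols), 0.0, 1, float("inf")
        for i in range(1, len(symbols)):
            running += symbols[i - 1][1]
            gap = abs(2 * running - total)     # imbalance of this split point
            if gap < best:
                best, split = gap, i
        codes = {n: "0" + c for n, c in shannon_fano(symbols[:split]).items()}
        codes.update({n: "1" + c for n, c in shannon_fano(symbols[split:]).items()})
        return codes

    src = [("X1", 0.4), ("X2", 0.19), ("X3", 0.16), ("X4", 0.15), ("X5", 0.1)]
    codes = shannon_fano(src)
    H = -sum(p * math.log2(p) for _, p in src)
    L = sum(p * len(codes[s]) for s, p in src)
    print(codes)
    print(f"H = {H:.4f} bits, L = {L:.2f} bits, efficiency = {H/L:.2%}")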
6 Explain briefly the source coding theorem.
7 The relative entropy or Kullback-Leibler distance between two probability mass
functions p(x) and q(x) is defined as
D(p || q) = Σ_{x ∈ X} p(x) log [p(x)/q(x)]
(i) Show that D(p || q) is non-negative.
(ii) Show that the Kullback-Leibler distance does not satisfy the other two
properties of a distance, i.e., symmetry and the triangle inequality.
(iii) Show that I(X; Y) = D(p(x, y) || p(x)p(y)). This elegantly proves that mutual
information is non-negative.
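The inequality in part (i) follows from Jensen's inequality; a small numerical sanity check of all three claims, with made-up distributions chosen only for illustration:

    import math

    def kl(p, q):
        # D(p || q) in bits; terms with p_i = 0 contribute 0
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    p, q = [0.7, 0.3], [0.5, 0.5]
    print(kl(p, q), kl(q, p))   # both non-negative, but unequal: no symmetry

    # I(X;Y) as a KL distance: joint p(x, y) on a 2x2 alphabet vs p(x)p(y)
    joint = [[0.3, 0.2],
             [0.1, 0.4]]
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    flat = [joint[i][j] for i in range(2) for j in range(2)]
    prod = [px[i] * py[j] for i in range(2) for j in range(2)]
    print("I(X;Y) =", kl(flat, prod))   # non-negative, zero iff X, Y independent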
8 Consider a DMS with source probabilities {0.35, 0.25, 0.20, 0.15, 0.05}.
(i) Determine the Huffman code for this source.
(ii) Determine the average length R of the codewords.
(iii) What is the efficiency of the code?
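A heap-based Huffman construction for Question 8 (the symbol names a–e are arbitrary labels for the five given probabilities):

    import heapq, math

    probs = {"a": 0.35, "b": 0.25, "c": 0.20, "d": 0.15, "e": 0.05}
    heap = [(p, [s]) for s, p in probs.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in probs}
    while len(heap) > 1:
        p0, grp0 = heapq.heappop(heap)      # two least probable subtrees
        p1, grp1 = heapq.heappop(heap)
        for s in grp0:
            codes[s] = "0" + codes[s]       # prepend a bit as we merge upward
        for s in grp1:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p0 + p1, grp0 + grp1))

    H = -sum(p * math.log2(p) for p in probs.values())
    R = sum(probs[s] * len(c) for s, c in codes.items())
    print(codes)
    print(f"R = {R:.2f} bits, H = {H:.4f} bits, efficiency = {H/R:.2%}")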
9 Suppose a source produces independent symbols from the alphabet {a1, a2, a3} with
probabilities p1 = 0.4999999, p2 = 0.4999999, p3 = 0.0000002.
(i) Compute the entropy, H(X), of this source.
(ii) Find the optimal code for this source, and compute its expected codeword
length.
(iii) Find an optimal code for the second extension of this source (i.e., for
blocks of two symbols), and compute its expected codeword length, and
the expected codeword length divided by two.
10 How is the syndrome calculated in Hamming codes and in cyclic codes?
11 What are the generator polynomial and the parity check polynomial?
12 For a (5, 3) code over GF(4), the generator matrix is given by
G = | 1 0 0 1 1 |
    | 0 1 0 1 2 |
    | 0 0 1 1 3 |
(i) Find the parity check matrix for this code.
(ii) How many errors can this code detect?
(iii) How many errors can this code correct?
(iv) How many erasures can this code correct?
(v) Is this a perfect code?
13 Consider the (23, 12, 7) binary code. Show that if it is used over a binary symmetric
channel (BSC) with probability of bit error p = 0.01, the word error probability will be
approximately 0.00008.
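The (23, 12, 7) code corrects t = 3 errors, so a word error occurs when four or more of the 23 bits are flipped; a one-line check of the figure quoted in Question 13:

    from math import comb

    n, t, p = 23, 3, 0.01
    P_we = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))
    print(f"P(word error) = {P_we:.6f}")   # ~0.000076, i.e. approximately 0.00008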
14 Explain cyclic codes in detail.
15 Explain Hamming codes with an example.
16 Construct the addition and multiplication tables for F[x]/(x^2 + 1) defined over
GF(2).
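A sketch that generates both tables for Question 16 by direct polynomial arithmetic: elements a + bx are stored as tuples (a, b), coefficients are reduced mod 2, and x^2 is replaced by 1 (since x^2 ≡ -1 ≡ 1 mod 2). Note that x^2 + 1 = (x + 1)^2 over GF(2), so the result is a ring with zero divisors rather than a field:

    elements = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 0, 1, x, 1+x

    def add(u, v):
        return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

    def mul(u, v):
        # (a + bx)(c + dx) = ac + (ad + bc)x + bd*x^2, with x^2 -> 1
        (a, b), (c, d) = u, v
        return ((a * c + b * d) % 2, (a * d + b * c) % 2)

    name = {(0, 0): "0", (1, 0): "1", (0, 1): "x", (1, 1): "1+x"}
    for op, label in ((add, "+"), (mul, "*")):
        print(f"\n{label:>4}", *(f"{name[e]:>4}" for e in elements))
        for u in elements:
            print(f"{name[u]:>4}", *(f"{name[op(u, v)]:>4}" for v in elements))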
17 List out all the irreducible polynomials over
(i) GF(2) of degrees 1 to 5
(ii) GF(3) of degrees 1 to 3
18 Consider the polynomials f(x) = 2x + x^2 + 2x^4 and g(x) = 1 + 2x^2 + 2x^4 + x^5 over
GF(3). Calculate f(x) + g(x) and f(x) · g(x).
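A short sketch for Question 18, with polynomials stored as coefficient lists (lowest degree first) and all arithmetic reduced mod 3:

    def poly_add(f, g, q=3):
        n = max(len(f), len(g))
        f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
        return [(a + b) % q for a, b in zip(f, g)]

    def poly_mul(f, g, q=3):
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % q
        return out

    f = [0, 2, 1, 0, 2]      # f(x) = 2x + x^2 + 2x^4
    g = [1, 0, 2, 0, 2, 1]   # g(x) = 1 + 2x^2 + 2x^4 + x^5
    print(poly_add(f, g))    # coefficients of f(x) + g(x)
    print(poly_mul(f, g))    # coefficients of f(x) . g(x)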
19 Let the polynomial g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1 be the generator polynomial
of a cyclic code over GF(2) with block length 15.
(i) Find the generator matrix G.
(ii) Find the parity check matrix H.
(iii) How many errors can this code detect?
(iv) How many errors can this code correct?
(v) Write the generator matrix in the systematic form.
21 Consider the following generator matrix over GF(2)
G = | 1 0 1 0 0 |
    | 1 0 0 1 1 |
    | 0 1 0 1 0 |
(i) Generate all possible codewords using this matrix.
(ii) Find the parity check matrix, H.
(iii) Find the generator matrix of an equivalent systematic code.
(iv) Construct a standard array for this code.
(v) What is the minimum distance of this code?
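For parts (i) and (v) of Question 20, the codebook is small enough to enumerate directly; since the code is linear, the minimum distance equals the minimum weight of a nonzero codeword:

    from itertools import product

    G = [[1, 0, 1, 0, 0],
         [1, 0, 0, 1, 1],
         [0, 1, 0, 1, 0]]

    codewords = set()
    for m in product([0, 1], repeat=len(G)):     # all 2^3 messages
        c = tuple(sum(mi * gi for mi, gi in zip(m, col)) % 2
                  for col in zip(*G))            # c = m G over GF(2)
        codewords.add(c)

    for c in sorted(codewords):
        print(c)
    print("d_min =", min(sum(c) for c in codewords if any(c)))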
21 Construct the addition and multiplication tables for F[x]/(x^2 + 1) defined over
GF(3).
22 List the properties a set of elements must satisfy to become a field. From these, show
how a Galois field and a ring can be differentiated.
23 What is concatenation?
24 What are the reasons for using an interleaver in a turbo code?
25 Define Turbo Code.
26 What are convolutional codes?
27 What are the differences between block codes and convolutional codes?
28 Explain a convolutional encoder with an example.
29 Construct the state diagram for the convolutional encoder given in figure 1.
Figure 1
30 Construct the trellis diagram for the convolutional encoder given in figure 2.
Figure 2
32 Discuss how a convolutional code is classified as catastrophic or non-catastrophic.
33 List the types of decoding techniques available for convolutional codes. Discuss the
merits and demerits of each technique.
34 Explain turbo encoding with a neat block diagram.
35 Explain turbo decoding with a neat block diagram.
36 Consider a convolutional encoder described by its generator polynomial matrix,
defined over GF(2):
G(D) = | 1+D  D  1+D |
       | D    1  1   |
(i) Draw the circuit realization of this encoder using shift registers.
(ii) Encode the bit stream: 11, 01, 00, 11, 10, 10, …
37 Consider a convolutional encoder described by its generator polynomial matrix,
defined over GF(2):
G(D) = | D    0  1+D^2  D    D^2 |
       | D^2  0  0      1+D  0   |
       | 1    0  D^2    0    D^2 |
(i) Draw the circuit realization of this encoder using shift registers. What is
the value of v?
(ii) Is this a catastrophic code? Why?
(iii) Is this code optimal in the sense of the Heller bound on dfree?
38 Consider the binary encoder shown in figure 3.1.
Figure 3.1
(i) Construct the trellis diagram for this encoder.
(ii) Write down the values of k0, n0, v, m and R for this encoder.
(iii) What are the values of d* and dfree for this code?
(iv) Give the generator polynomial matrix for this encoder.
39 Let the generator polynomials of a rate 1/3 binary convolutional encoder be given by
g1(D) = D^3 + D^2 + 1,
g2(D) = D^3 + D, and
g3(D) = D^3 + 1.
Encode the bit stream: 01100011110101.
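A minimal encoder sketch for Question 39; each generator polynomial becomes a tap vector over [D^0, D^1, D^2, D^3], and the encoder is assumed to start from the all-zero state:

    GENS = [[1, 0, 1, 1],    # g1(D) = D^3 + D^2 + 1
            [0, 1, 0, 1],    # g2(D) = D^3 + D
            [1, 0, 0, 1]]    # g3(D) = D^3 + 1

    def conv_encode(bits, gens=GENS):
        mem = [0] * (len(gens[0]) - 1)      # shift register, newest bit first
        out = []
        for b in bits:
            window = [b] + mem              # current input plus memory
            for g in gens:
                out.append(sum(x * t for x, t in zip(window, g)) % 2)
            mem = window[:-1]               # shift the register
        return out

    msg = [int(c) for c in "01100011110101"]
    print("".join(map(str, conv_encode(msg))))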
40 Let the generator polynomials of a rate 1/3 binary convolutional encoder be given by
g1(D) = D^3 + D^2 + 1,
g2(D) = D^3 + D, and
g3(D) = D^3 + 1.
Decode the received bit stream: 001001101111000110011.
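Question 40 calls for a decoder; a compact hard-decision Viterbi sketch for the same rate-1/3, memory-3 encoder (8 states), assuming the encoder started in the all-zero state and breaking metric ties arbitrarily:

    GENS = [[1, 0, 1, 1], [0, 1, 0, 1], [1, 0, 0, 1]]
    M = len(GENS[0]) - 1                    # memory = 3

    def branch_output(bit, state):
        # state = the M most recent input bits, newest first
        window = [bit] + list(state)
        return tuple(sum(x * t for x, t in zip(window, g)) % 2 for g in GENS)

    def viterbi(received):
        n = len(GENS)
        blocks = [received[i:i + n] for i in range(0, len(received), n)]
        survivors = {(0,) * M: (0, [])}     # state -> (path metric, decoded bits)
        for r in blocks:
            nxt = {}
            for state, (metric, path) in survivors.items():
                for bit in (0, 1):
                    out = branch_output(bit, state)
                    m = metric + sum(a != b for a, b in zip(out, r))
                    s2 = tuple([bit] + list(state)[:-1])
                    if s2 not in nxt or m < nxt[s2][0]:
                        nxt[s2] = (m, path + [bit])
            survivors = nxt
        return min(survivors.values())[1]   # best-metric survivor

    rx = [int(c) for c in "001001101111000110011"]
    print("".join(map(str, viterbi(rx))))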
41 What is modified Huffman code?
42 What is Run-length coding?
43 What is arithmetic coding?
44 Explain, in brief, the principles of compression.
45 The LZ algorithm is to be used to compress a text file prior to its transmission. If the
average number of characters per word is 6, and the dictionary used contains 4096
words, derive the average compression ratio that is achieved relative to using 7-bit
ASCII codewords.
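The arithmetic behind Question 45, ignoring inter-word spaces: a 4096-entry dictionary needs log2(4096) = 12-bit indices, while a 6-character word costs 6 × 7 = 42 bits in 7-bit ASCII:

    import math

    bits_per_index = math.log2(4096)   # 12 bits per transmitted word index
    bits_per_word = 6 * 7              # 42 bits per word in 7-bit ASCII
    print("compression ratio =", bits_per_word / bits_per_index)   # 3.5:1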
46 Explain the meaning of the following terms relating to text compression
algorithms:
(i) Static coding,
(ii) Dynamic/adaptive coding.
47 With the aid of an example, describe the rules that are followed to construct the
Huffman code tree for a transmitted character set.
48 Explain the meaning of the following terms relating to source encoding:
(i) Differential encoding,
(ii) Transform encoding.
49 Explain the meaning of the following terms relating to statistical encoding:
(i) Run-length encoding,
(ii) Statistical encoding.
50 Explain Adaptive Huffman coding in detail, with the help of an example.
51 Assume the contents of a file consisting of 256 different words – each composed
of alphanumeric characters from the basic ASCII character set – are to be sent over a
network using the LZW algorithm. If the file contents start with the string:
This is easy as it is easy…
Show the entries in the dictionary of the encoder up to this point and the string of
codewords that are sent. Also show how the receiver builds up its own dictionary
and determines the original file contents from this.
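A minimal LZW encoder sketch for tracing Question 51. One simplification: the dictionary here is seeded only with the characters that actually occur, not the full 256-entry set the question assumes, so the index values differ from the exam answer even though the phrase entries and the encoding logic are the same:

    def lzw_encode(text):
        dictionary = {c: i for i, c in enumerate(sorted(set(text)))}
        w, out = "", []
        for c in text:
            if w + c in dictionary:
                w += c                  # extend the current phrase
            else:
                out.append(dictionary[w])
                dictionary[w + c] = len(dictionary)   # add the new phrase
                w = c
        if w:
            out.append(dictionary[w])
        return out, dictionary

    codes, dic = lzw_encode("This is easy as it is easy")
    print(codes)
    for phrase, idx in dic.items():
        if len(phrase) > 1:
            print(idx, repr(phrase))    # the multi-character entries built so far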
52 A series of messages is to be transferred between two computers over a PSTN. The
messages comprise just the characters A through H. Analysis has shown that the
probability (relative frequency of occurrence) of each character is as follows:
A and B = 0.25, C and D = 0.14, E, F, G, and H = 0.055
(a) Use Shannon’s formula to derive the minimum average number of bits per
character.
(b) Use Huffman coding to derive a codeword set and prove this is the minimum
set by constructing the corresponding Huffman code tree.
(c) Derive the average number of bits per character for your codeword set and
compare this with:
(i) The entropy of the messages (Shannon’s value),
(ii) Fixed-length binary codewords,
(iii) 7-bit ASCII codewords.
53 Describe the principles of TIFF and its application domains.
54 Explain the meaning of the following terms relating to video compression:
(i) Moving pictures,
(ii) MJPEG
(iii) Motion estimation and compensation.
Include in your explanation why the latter is used.
55 With the aid of a frame sequence diagram, explain the meaning of the term “PB-
frame” and why such frames are sometimes used.
56 Explain the significance of D-frames in video coding.
57 State and explain the encoding procedure used with
(i) the motion vector and
(ii) the prediction error.
58 A digitized video is to be compressed using the MPEG-1 standard. Assuming a frame
sequence of:
IBBPBBPBBPBBI…
and average compression ratios of 10:1 (I), 20:1 (P) and 50:1 (B), derive the average
bit rate that is generated by the encoder for both the NTSC and PAL digitization
formats.
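A sketch of the calculation for Question 58. The GOP IBBPBBPBBPBB contains 1 I, 3 P and 8 B frames; the raw source rates below are assumptions (MPEG-1 SIF at 4:2:0, i.e. 12 bits/pixel: 352 × 240 at 30 frames/s for NTSC, 352 × 288 at 25 frames/s for PAL), so substitute whatever digitization format your course text specifies:

    frames = {"I": (1, 10), "P": (3, 20), "B": (8, 50)}   # (count, ratio) per GOP
    n = sum(c for c, _ in frames.values())                # 12 frames per GOP
    avg_ratio = n / sum(c / r for c, r in frames.values())
    print(f"average compression ratio ~ {avg_ratio:.1f}:1")

    for name, (w, h, fps) in {"NTSC": (352, 240, 30), "PAL": (352, 288, 25)}.items():
        raw = w * h * 12 * fps                            # bits/s before compression
        print(f"{name}: raw {raw / 1e6:.1f} Mbps -> ~{raw / avg_ratio / 1e6:.2f} Mbps")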
59 Explain the MPEG algorithm for video encoding, with a block diagram.
60 With the aid of a diagram, identify the five main stages associated with the baseline
mode of operation of JPEG and give a brief description of the role of each stage.
61 Explain the basic mode of operation of GIF. Include in your explanation the size of
the color table used, how each pixel value is sent, and how the receiver knows the
image parameters used by this source.
62 Explain the “Motion estimation” and “Motion Compensation” phases of P and B
frame encoding process with diagrams wherever necessary.
63 With the aid of example frame sequences, explain the meaning of the following types
of compressed frame and the reasons for their use:
(i) I-frames,
(ii) P-frames,
(iii) B-frames.
64 In relation to GIF, explain how the LZW coding algorithm can be applied to the
(compressed) image data. Include in your explanation how compression is achieved
and how the receiver interprets the compressed information.