Channel: Why does QAM use a grid-like distribution versus a more efficient spacing? - Electrical Engineering Stack Exchange

Answer by asndre for Why does QAM use a grid-like distribution versus a more efficient spacing?

I see your question is explicitly about QAM, but implicitly it is about what is called set partitioning and related topics. Below I'll try to explain in detail.

0. QAM itself as a digital-data transmission technique

QAM operates on (modulates) a harmonic signal (the carrier). Given a timescale (or simply a time, t), we can represent a harmonic signal in the given timescale as a sum of two orthogonal signals -- sine and cosine of the same frequency (let's label it w = 2 * pi * f) -- with two amplitudes, let's call them A and B, respectively. I.e.,

harmonic(t) = A * sine(w * t) + B * cosine(w * t)
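This decomposition can be checked numerically. In the sketch below (a Python illustration of my own; the amplitude, phase, and frequency values are arbitrary), a single harmonic R * sin(w * t + phi) is shown to equal A * sin(w * t) + B * cos(w * t) with A = R * cos(phi) and B = R * sin(phi):

```python
import math

# Sketch: verify that one harmonic R*sin(w*t + phi) equals the sum
# A*sin(w*t) + B*cos(w*t) with A = R*cos(phi) and B = R*sin(phi).
# R, phi, and f are arbitrary illustration values, not from any standard.
R, phi = 2.0, 0.7
A, B = R * math.cos(phi), R * math.sin(phi)
w = 2 * math.pi * 1e3  # w = 2 * pi * f, with f = 1 kHz

for t in (0.0, 1.3e-4, 2.7e-4, 5.0e-4):
    lhs = R * math.sin(w * t + phi)
    rhs = A * math.sin(w * t) + B * math.cos(w * t)
    assert abs(lhs - rhs) < 1e-12  # identical up to floating-point error
```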

From this, we see that each orthogonal part can be modulated independently. I.e.,

modulated_harmonic(t) = A(t) * sine(w * t) + B(t) * cosine(w * t)

If we set A(t) = Q[t] and B(t) = I[t], where X[t] means a variable whose value changes discretely both in level and over time, this results in QAM(-modulated harmonic signaling):

QAM(t) = Q[t] * sin(w * t) + I[t] * cos(w * t)

Assuming that X[t] is generated by DAC (digital-to-analog converter) circuitry, we have:

QAM(t) = DAC_Q[t] * sin(w * t) + DAC_I[t] * cos(w * t)
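This signal chain can be sketched in a few lines of Python. Everything below (function name, parameter values, level choices) is my own illustration, not taken from any standard; it just shows two discrete-level (DAC-like) streams driving the two orthogonal parts of the carrier:

```python
import math

def qam_waveform(i_levels, q_levels, f_carrier, f_symbol, samples_per_symbol):
    """Minimal QAM-modulator sketch: two discrete-level streams (the DAC
    outputs) scale the cosine and sine parts of the carrier, symbol by
    symbol, synchronously in time but independently in content."""
    w = 2 * math.pi * f_carrier
    dt = 1.0 / (f_symbol * samples_per_symbol)
    out = []
    for k, (i_lvl, q_lvl) in enumerate(zip(i_levels, q_levels)):
        for n in range(samples_per_symbol):
            t = (k * samples_per_symbol + n) * dt
            # QAM(t) = Q[t] * sin(w*t) + I[t] * cos(w*t)
            out.append(q_lvl * math.sin(w * t) + i_lvl * math.cos(w * t))
    return out

# Two 2-bit DACs -> 4 levels per rail -> 16-QAM, 4 bits per symbol time
sig = qam_waveform(i_levels=[-3, -1, 1, 3], q_levels=[1, 3, -3, -1],
                   f_carrier=10e3, f_symbol=1e3, samples_per_symbol=50)
```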

From here is the answer for the first part of your title question:

Why does QAM use a grid-like distribution ... ?

As we can see, it is because a grid-like distribution is natural for QAM. Moreover, a regular 2-D square grid is the most natural for QAM, because it is an obvious way to use two similar DACs in the design.

In your initial figure, two DACs of the same 2-bit construction are assumed. Each drives its own orthogonal "sub-carrier" (Q or I) independently in content but synchronously in time, which gives a composite modulation of 4 bits at a time over the "total" carrier.

If we analyze only the discrete values (levels) of the Q and I DAC outputs, we deal with what is called the (initial) constellation (an image of the DAC-driven 2-D grid) of our QAM case.

Please note here that QAM itself gives us only a constellation, i.e., a set of abstract points in some abstract 2-D space. No codes (binary or other) are assigned (predefined) by QAM itself.
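To make this concrete, here is a sketch (my own illustration, using the common -3/-1/+1/+3 level choice) of a 16-QAM constellation as a bare set of points, with no codes attached:

```python
from itertools import product

# A constellation is just a set of (I, Q) level pairs -- no bit codes yet.
# Two 2-bit DACs with illustrative levels -3, -1, +1, +3 give the 16 points
# of a square-grid 16-QAM constellation.
levels = (-3, -1, 1, 3)
constellation = set(product(levels, repeat=2))
assert len(constellation) == 16  # 4 I-levels x 4 Q-levels
```

Assigning binary codes to these points is a separate, later step of the design flow.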

Also, the QAM-specific part ends here; the rest of my answer (and your implicit question) is applicable to any modulation that can be operated on (abstractly modeled) as a constellation.

0/1. Restricting the initial constellation

Since we have an initial constellation, we can restrict it, depending on our goals. This is an intermediate step in the coding design flow. I can give the following definition of this step: we eliminate the points that are never used in operation.

In the literature, there is no consensus on whether this step should be made explicit, and often the initial constellation of a modulation scheme is already given in restricted form, i.e., with some of its points excluded, before it is partitioned. But in our case, it is better to demarcate it explicitly.

Please note here, too, that still no codes are assigned after this step.

1. Partitioned constellation

Invented by Gottfried Ungerboeck as the basis of his TCM, set partitioning is a technique for partitioning a given initial constellation (already restricted or not) into a number of non-intersecting sets (also called cosets) of points. All such points are used in operation.

At this level of abstraction, i.e., when we have a constellation, the most common metric used to qualify the points is the (squared) distance. It is a geometric metric, which is natural since a constellation is a geometric representation (reflection, model) of the signal.

During set partitioning, two (SQ)Ds are used: the free minimal (SQ)D, which is (the square of) the Euclidean distance between the two closest points in the initial constellation, and the (simply) minimal (SQ)D, which is the same distance between the two closest points within the partitioned cosets. Squares are preferred in comparisons because they give integer numbers.

The goal of partitioning is to provide, at least, M(SQ)D > free M(SQ)D. The purpose of set partitioning is to enable soft-decoding-based FEC/FED (forward error correction/detection) on the receiving side, e.g., in the form of the Viterbi algorithm.
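A brute-force sketch of one Ungerboeck-style split on the square grid can show the distance gain (my own illustration; the integer -3/-1/+1/+3 levels keep the SQDs integer):

```python
from itertools import product

def min_sq_distance(points):
    """Minimal squared Euclidean distance over all point pairs (brute force)."""
    pts = list(points)
    return min((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for i, a in enumerate(pts) for b in pts[i + 1:])

levels = (-3, -1, 1, 3)
initial = list(product(levels, repeat=2))  # square 16-QAM grid

# One Ungerboeck-style split: separate the points by the parity of (I+Q)/2
# (a checkerboard-like split of the grid into two cosets of 8 points each).
coset_a = [p for p in initial if ((p[0] + p[1]) // 2) % 2 == 0]
coset_b = [p for p in initial if ((p[0] + p[1]) // 2) % 2 == 1]

free_msqd = min_sq_distance(initial)  # 2**2 = 4 on this grid
coset_msqd = min(min_sq_distance(coset_a), min_sq_distance(coset_b))
assert coset_msqd == 2 * free_msqd    # each such split doubles the MSQD
```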

In your initial figure, the free MD is 1/sqrt(10) and the free MSQD is 1/10. And as I can see, you need more :-)

And what do you propose?

2. Your proposition

Assuming QAM, you propose to restrict its initial constellation and then partition it (if I understand your coloring correctly, let's take it so), both to increase the free M(SQ)D and then the in-coset M(SQ)D, respectively. But your way seems unnatural because it gives two different DACs, one of which has 6 more levels (about one bit, not even a power of two) of resolution than the other:

Non-similar DACs

First, this leads to two irregular circuits of the same purpose occurring in the design. That's neither optimal nor good.

Second, let's be fair: you reinvent the wheel here, and the closest practiced case is the DSQ constellation design, e.g., DSQ-128 used in 10GBASE-T:

2D-PAM12 vs. DSQ-128

So, here is the answer to the second part of your title question:

... versus a more efficient spacing?

There are many ways to increase the free M(SQ)D of an initial constellation, be it a 2-D square grid (like in QAM) or something else (e.g., the 4-D hyper-cubic grid modeled in 1000BASE-T). The most used one is set partitioning, which gives a higher M(SQ)D between points in the resulting cosets.

An initial constellation is rarely used alone. The general way is:

Step 0/1. restrict the initial constellation

Step 1. partition the (restricted) constellation

Step 2. assign each operable point in each coset a code (binary or another)

Gray codes and/or other coding techniques are applied only in step 2 and contribute to the whole coding gain of the design. Today, an initial constellation (= plain modulation) alone is not considered a stand-alone subject of an effective solution, only the ground for one.

For example, PCI Express 6.0 uses the following coding scheme: RS-FEC over Gray over PAM-4.

1000BASE-T implements TCM over 8x2 cosets over 4D-PAM5 (at PCS, over 4D-PAM17, at PMA).

10GBASE-T implements LDPC-backed FEC over 2x (Gray + Pseudo-Gray over DSQ-128 over 2D-[PAM16 + THP]), where THP = Tomlinson-Harashima Precoding.
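As a small illustration of step 2, here is a sketch of a Gray mapping over 4 amplitude levels (PAM-4 style, as in the PCI Express 6.0 example above; the level values and the mapping direction are my illustrative choices, not taken from any of the cited specifications):

```python
# Step 2 sketch: assign each point (here, each PAM-4 level) a code such
# that adjacent levels differ in exactly one bit (a Gray mapping).
def binary_to_gray(b):
    """Standard binary-reflected Gray code: g = b XOR (b >> 1)."""
    return b ^ (b >> 1)

levels = [-3, -1, 1, 3]                         # 4 PAM levels, low to high
gray_map = {lvl: binary_to_gray(i) for i, lvl in enumerate(levels)}
codes = [gray_map[lvl] for lvl in levels]       # 0b00, 0b01, 0b11, 0b10

# A one-level slicer error on the receive side now flips only one bit
for a, b in zip(codes, codes[1:]):
    assert bin(a ^ b).count("1") == 1
```

This single-bit-per-neighbor property is what limits the bit-error cost of the most likely (adjacent-level) symbol errors.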

(Of course, above I mention schemes that work over PAM, not QAM, but as I stated earlier (see sections 0 and 0/1), since we have a constellation (an abstract, geometric representation of the underlying signaling and/or modulation), we can use the universal, modulation-independent principles of modeling.)

As Marcus Müller stated in his answer, constellation shaping (as an integral part of the whole coding design flow) is a science of its own. The above examples show this clearly.

On your other sub-questions:

Is it too complex to decode?

Maybe it doesn't offer any measurable benefit?

Typically, a decoder is much more complex than the respective encoder. You can see the complexity of encoding for 1000BASE-T and 10GBASE-T in the above-mentioned documents.

The measurable benefit is the coding gain of a coding scheme. As shown above, such a gain is the total of many tightly-matched contributions. In the simplest modern case, it results from well-matched binary FEC coding + bit coding + line coding (modulation).

At this point, I think the rest of your sub-questions cannot have meaningful answers.

Additional notes

If you are interested in this theme, I recommend you begin with the following papers:

[1]. G. Ungerboeck, Trellis-coded modulation with redundant signal sets, Part I: Introduction

[2]. G. Ungerboeck, Trellis-coded modulation with redundant signal sets, Part II: State of the art

[3]. again by G. Ungerboeck, 10GBASE-T Coding and Modulation: 128-DSQ + LDPC

[4]. not Ungerboeck this time, but Jaime E. Kardontchik, 4D ENCODING IN LEVEL-ONE'S PROPOSAL FOR 1000BASE-T

Moreover, you can see how real designers designed real constellation-based coding schemes by reading the materials documenting the respective IEEE 802.3 standardization processes:

[5]. Public area of the 1000BASE-T task force

[6]. Public area of the 10GBASE-T task force and its preceding study group

Also, because Marcus Müller mentioned some signal-power-related issues in his answer, and because I see the tag "Theory" under your question, I recommend you read something like this:

[7]. "A collection of the data coding means and event coding means multiplexed over and inside the 1000BASE-T PMA sublayer" (abstract at arXiv)

where power balancing of the physical signal is one of the tasks of the design flow, which results in schemes like a stellarial constellation over the 4D-PAM17 grid:

stellarial constellation

