Test fixtures for vector network analyser measurements
18 December 2017
Today’s rapidly evolving technology is characterised by higher operating frequencies and wider bandwidths, along with dramatic reductions in the size and weight of components and sub-assemblies.
Together, these trends present new challenges for the test and measurement equipment required for the precise measurement and characterisation of devices incorporating these miniature parts.
The situation is compounded by the fact that in many cases there is a need to test “non-connectorised” devices, which means that there is no possibility of easily using commercial measurement instruments incorporating standard coaxial connectors as the key interfaces to the device under test (DUT).
In such cases, the device under test (on a substrate or wafer base) needs to be connected to the measurement instrument via some sort of interface providing both mechanical and electrical connection. This interface generally takes the form of a probe-based structure known as a test fixture, ranging from a fixed-base connection to a “universal” approach providing flexibility, efficiency and quality of measurements.
The extensive characterisation and high-quality measurements required for most of these DUTs generally call for the use of a vector network analyser (VNA), which provides measurement results in the form of S-parameters: vector quantities with both magnitude and phase.
The fixture shown in Fig.1 allows accurate repeatable transitions from coaxial to microstrip or coaxial to coplanar waveguide (CPW), providing substrate measurement capability for components and device designs.
A further benefit of this type of test fixture is that it offers a degree of in-situ maintenance, providing extended flexibility for the engineer. If the fixture is accidentally damaged, for example, its main parts can be replaced in place, saving time and money by not interrupting the measurement suite midway.
An ideal test fixture, providing the interface between a DUT and a measurement instrument (VNA), would be a fully “transparent” connection: zero insertion loss and infinite return loss (no mismatch), a flat and known frequency response with linear phase, and no leakage between ports (infinite isolation). This is, of course, not possible. Many factors contribute to the quality of measurements: since a measurement system is involved, all of its parts (the VNA and its components, cables, adapters and the fixtures) add errors, increasing the measurement uncertainty. In addition, the DUT’s own performance contributes to the overall measurement accuracy. However, it is possible to remove most of the measurement errors using the VNA’s vector error-correction techniques as part of the calibration process, which relies on well-defined, high-quality calibration standards as reference items.
Random and systematic errors
In most RF and millimetre-wave VNA measurements, error terms are classified as either random or systematic.
Random errors are not predictable and are therefore not correctable by the calibration process. The main contributors to random errors are connector repeatability, cable stability, environmental changes (not including drift effects), frequency repeatability, and noise. Good measurement practice is critical to reducing random errors.
Systematic errors, on the other hand, include such factors as directivity, source and load match, isolation, and reflection and transmission tracking. Although they represent the most significant sources of measurement uncertainty, they can be drastically reduced by the calibration process (and subsequent error correction). Some residual effects remain uncorrected because calibration standards are not perfectly defined and the error models used to describe the measurement setup are not perfect. These residual errors can be used to compute the total measurement uncertainties and are expressed in a VNA’s technical specifications for a variety of calibration algorithms. This helps to determine the level of measurement accuracy available from an instrument under a given set of conditions.
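To illustrate how such error correction works in the simplest case, the sketch below inverts the classic three-term one-port error model (directivity, source match and reflection tracking) to recover a DUT’s actual reflection coefficient from the raw measured one. The error-term values are made up purely for illustration and are not taken from any real instrument.

```python
def correct_one_port(gamma_m, e_d, e_s, e_r):
    """Recover the DUT's actual reflection coefficient from the raw
    measured value using the standard one-port error model:
        gamma_a = (gamma_m - E_D) / (E_R + E_S * (gamma_m - E_D))
    where E_D is directivity, E_S source match, E_R reflection tracking.
    """
    delta = gamma_m - e_d
    return delta / (e_r + e_s * delta)

# Illustrative (made-up) error terms: small directivity and source-match
# errors, near-unity reflection tracking.
e_d, e_s, e_r = 0.02 + 0.01j, 0.05 - 0.02j, 0.98 + 0.0j

# Simulate a raw measurement of a DUT with a known "true" reflection,
# using the forward form of the same error model.
gamma_actual = 0.3 + 0.1j
gamma_measured = e_d + e_r * gamma_actual / (1 - e_s * gamma_actual)

print(correct_one_port(gamma_measured, e_d, e_s, e_r))
# approximately (0.3+0.1j), i.e. the true reflection is recovered
```

A full 2-port calibration applies the same idea with twelve such terms instead of three, which is why it needs more calibration standards.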
The main issue for a measurement system that utilises a test fixture is that those uncertainty terms mentioned above are defined for the VNA’s standard coaxial calibration kits, meaning that error effects are removed only up to the coaxial connectors of the test cables used.
Several main practices can be used to remove the contribution of a test fixture from the DUT’s measurement results. Each is applicable to particular use cases, depending mostly on the required measurement accuracy. The performance of the test fixture is also directly related to the performance of the DUT, and dictates what level of calibration is required to meet the measurement uncertainty criteria.
De-embedding is a VNA software tool used to mathematically remove the effect of test fixtures from the measurement results. The test fixture (or “network” in this case) must be accurately characterised beforehand, either as a measured S-parameter data file or as a simulated model of the fixture. Calibration in this case is made using coaxial standards.
Overall measurement accuracy will in general be determined mainly by the quality of the fixture’s S-parameter data (measured or modelled), which in complex scenarios is quite difficult to obtain. If the actual performance of the test fixture does not match the modelled or measured data, errors occur. De-embedding is also of limited use for low-loss DUTs and does not account for mismatch error terms. In such cases, calibration inside the fixture can provide a more accurate representation of the DUT’s S-parameters.
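A common way to implement de-embedding, assuming S-parameter data for each fixture half is available, is to convert the 2x2 S-matrices to cascade (T) matrices, invert the fixture halves and multiply them out. The sketch below uses NumPy, with made-up fixture and DUT matrices purely for illustration; it is one conventional formulation, not a specific vendor’s implementation.

```python
import numpy as np

def s_to_t(s):
    """Convert a 2x2 S-matrix to a cascade (T) matrix."""
    s11, s12, s21, s22 = s[0, 0], s[0, 1], s[1, 0], s[1, 1]
    return np.array([[s12 * s21 - s11 * s22, s11],
                     [-s22, 1.0]], dtype=complex) / s21

def t_to_s(t):
    """Convert a cascade (T) matrix back to an S-matrix."""
    t11, t12, t21, t22 = t[0, 0], t[0, 1], t[1, 0], t[1, 1]
    return np.array([[t12 / t22, (t11 * t22 - t12 * t21) / t22],
                     [1.0 / t22, -t21 / t22]], dtype=complex)

def de_embed(s_meas, s_fix_a, s_fix_b):
    """Remove fixture halves A (port 1 side) and B (port 2 side):
       T_dut = inv(T_A) @ T_meas @ inv(T_B)."""
    t_dut = (np.linalg.inv(s_to_t(s_fix_a))
             @ s_to_t(s_meas)
             @ np.linalg.inv(s_to_t(s_fix_b)))
    return t_to_s(t_dut)

# Illustrative (made-up) fixture halves and DUT:
fix_a = np.array([[0.05, 0.9 * np.exp(-0.7j)],
                  [0.9 * np.exp(-0.7j), 0.05]])
fix_b = np.array([[0.03, 0.92 * np.exp(-0.5j)],
                  [0.92 * np.exp(-0.5j), 0.03]])
dut = np.array([[0.2, 0.8], [0.8, 0.2]], dtype=complex)

# Simulate what the VNA sees: fixture A, DUT and fixture B in cascade.
s_meas = t_to_s(s_to_t(fix_a) @ s_to_t(dut) @ s_to_t(fix_b))

print(np.allclose(de_embed(s_meas, fix_a, fix_b), dut))  # True
```

The caveat in the text applies directly here: the result is only as good as `fix_a` and `fix_b`, so any difference between the characterised and actual fixture behaviour propagates straight into the de-embedded S-parameters.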
There are two basic methods of two-port in-fixture calibration: OSL(T), for open-short-load (through), and LRL/LRM, for line-reflect-line/line-reflect-match. LRM calibration is a variation of LRL in which a match (termination) is used in place of the second line. There are also modifications such as ALRM (advanced LRM), an Anritsu calibration technique implemented in the VectorStar calibration menus, which uses different load models for each port and two reflect standards. Table 1 lists typical calibration algorithms and indicates those appropriate for “in-fixture” applications.
Measurement accuracy will largely depend on how accurately the calibration standards are defined, and the number of error terms that can be corrected will depend on the algorithm and calibration type selected. For a two-port device, removing all 12 error terms (the systematic errors mentioned earlier: directivity, source match, reflection tracking, load match, transmission tracking and isolation, each in both the forward and reverse signal directions, giving 12 terms in total) requires Full 2-Port calibration and error correction. This gives maximum measurement accuracy but requires more calibration standards.
LRL calibration gives the best accuracy but has limited bandwidth. The less accurate OSL and LRM calibrations cover wide frequency ranges and can calibrate down to lower frequencies. LRM calibration results in a better source match than OSL calibration. The best wideband calibration can be obtained by combining LRL and LRM calibrations. LRL/LRM calibration is also preferable in microstrip environments, as the corresponding standards are more easily fabricated than those for OSL. OSL standards also require accurate characterisation for quality measurements. Fig. 4 shows some examples of “in-fixture” OSL standards.
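The bandwidth limit of LRL comes from the line pair itself: a widely used rule of thumb is to keep the phase difference between the two lines roughly between 20° and 160°, away from 0° and 180° where the calibration becomes singular, which gives about an 8:1 usable frequency range. The short sketch below illustrates this; the 3mm length difference and effective permittivity value are assumptions for the example, not figures from the article.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def lrl_band(delta_len_mm, eps_eff):
    """Return the (f_low, f_high) usable band in GHz for an LRL line
    pair, given the line-length difference in mm and the effective
    dielectric constant, using the 20-160 degree rule of thumb."""
    v = C0 / eps_eff ** 0.5                    # phase velocity on the line
    f_90 = v / (4.0 * delta_len_mm * 1e-3)     # freq where delta-phase = 90 deg
    return f_90 * (20.0 / 90.0) / 1e9, f_90 * (160.0 / 90.0) / 1e9

# Example: 3 mm length difference, effective permittivity of 6.7
# (roughly representative of an alumina microstrip; illustrative only).
print(lrl_band(3.0, 6.7))  # roughly (2.1, 17.2) GHz, an 8:1 span
```

Covering a wider range in practice means combining line pairs (or LRL with LRM), which is exactly the wideband strategy described above.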
It is very helpful if the “in-fixture” calibration set-up also includes verification components to help the engineer qualify the measurement system. Moreover, it is important to have specifications for the residual error terms obtainable with the particular system setup (consisting of the VNA, the test fixture and the “in-fixture” calibration kit). This gives the engineer confidence in the measurement accuracy and the results obtained.
An engineer can also use a dedicated “uncertainty calculator” such as Anritsu’s “Exact Uncertainty” to obtain further accuracy level confidence.
The proliferation of new RF/mmW designs will demand accurate, high-quality device characterisation. Having the right tools and an understanding of the importance of proper calibration, along with the algorithms discussed in this article, will ensure best practice and improved quality of component and device design.