From: Ray Anderson (firstname.lastname@example.org)
Date: Mon Feb 28 2000 - 13:20:02 PST
> Hi Folks:
> I'm in the interconnect group where connectors are our thing. I'm the high
> speed modeling guy. I take those connector spice models and make sure the
> connector will work with our drivers/signal paths/receivers. I have a lot of
> experience with connector models made using TDR measurements. I have
> recently been sent some models that appear somewhat dubious. The vendor
> group that made the measurements says they only use a network analyzer and
> then do the inverse fft and some other tricks to get the time domain. They
> have some way to fit parameters to a model to get a match to their
> measurements. My question is, is the frequency domain method as good as
> using the TDR at various edge rates?
> Thanks in advance for your comments.
> Kai Keskinen
I usually use vector network analyzer (VNA) measurements when making connector
or package models from lab measurements.
We set the sweep bandwidth to be commensurate with the risetime of the signal
that is going to pass through the device (the 0.35/RT approximation). So for,
say, a 500 psec RT signal that would equate to 700 MHz. Then we go a little
higher, say 1 or 1.2 GHz in this case. For a 200 psec signal (1.75 GHz
frequency content by calculation) I'd scan out to 2 or 2.5 GHz.
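The bandwidth arithmetic above can be captured in a small helper. This is just an illustrative sketch; the function name and the 1.4x headroom factor are my choices (the post uses "a little higher" informally), not anything standardized:

```python
def vna_bandwidth_ghz(risetime_ps, margin=1.4):
    """Knee frequency from the 0.35/RT approximation, plus sweep headroom.

    risetime_ps -- 10-90% edge rate in picoseconds
    margin      -- illustrative headroom factor above the knee frequency
    """
    f_knee_ghz = 0.35 / (risetime_ps * 1e-3)  # ps -> ns; 0.35/ns gives GHz
    return f_knee_ghz, f_knee_ghz * margin

# 500 ps edge -> knee 0.70 GHz, sweep to ~0.98 GHz
# 200 ps edge -> knee 1.75 GHz, sweep to ~2.45 GHz
```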
After capturing the S-parameters for the subject device I import them into
an optimizing program such as Agilent ADS or APLAC. Then I define a model
topology based on the physical configuration of the device I'm trying to
model. You need to use engineering judgment in this task to define a
reasonable topology. Then we turn the optimizer loose to determine the
element values that are necessary to make the response of the model
approximate the response shown by the measured S-parameters. It is good
practice to set limits on the element values so that the optimizer doesn't
go off into left field and create models with unrealistic element values.
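As a toy illustration of that fitting loop (this is my sketch, not the ADS/APLAC workflow the post describes): a single series inductor between matched 50-ohm ports stands in for a real connector topology, synthetic data stands in for the VNA measurement, and SciPy's bounded least-squares plays the role of the optimizer:

```python
import numpy as np
from scipy.optimize import least_squares

Z0 = 50.0                             # reference impedance
f = np.linspace(0.05e9, 2.5e9, 101)   # sweep out to 2.5 GHz
w = 2 * np.pi * f

def s21_series_L(L_nH):
    # S21 of one series inductor between matched 50-ohm ports --
    # a deliberately simple stand-in for a real connector topology
    ZL = 1j * w * L_nH * 1e-9
    return 2 * Z0 / (2 * Z0 + ZL)

# synthetic "measured" data standing in for the VNA S-parameters (L = 6 nH)
s_meas = s21_series_L(6.0)

def residual(x):
    # optimizer goal: make the model response match the measured response
    s = s21_series_L(x[0])
    return np.concatenate([(s - s_meas).real, (s - s_meas).imag])

# bounds keep the optimizer from wandering off to unrealistic element values
fit = least_squares(residual, x0=[1.0], bounds=([0.1], [50.0]))
L_fit_nH = fit.x[0]
```

A real connector model would have many coupled elements and would fit S11 and S21 simultaneously, but the structure of the problem is the same: bounded element values, residual against measured S-parameters.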
I've had good luck using this technique to create models that
work well up to about 200 psec RT. Faster rise times require a more
complex model (i.e. more elements) than slower signals do.
The adequacy of the model is determined by how well the measured and
simulated responses match up across the frequency band of interest.
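One way to quantify that match is an error metric across the band. The helper below is hypothetical (the name and the dB-magnitude metric are my choices; the post doesn't specify one):

```python
import numpy as np

def fit_quality_db(s_meas, s_model):
    """RMS and worst-case magnitude error, in dB, across the measured band."""
    err = 20 * np.log10(np.abs(s_model)) - 20 * np.log10(np.abs(s_meas))
    return float(np.sqrt(np.mean(err ** 2))), float(np.max(np.abs(err)))
```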
Make sure you make your measurements to a frequency higher than you think you
really need, as extrapolating on the high-frequency end can be dangerous.
Low-end extrapolation is usually pretty safe as things tend to be better
behaved there (but not always).
I've found that the optimization algorithms in APLAC quite often succeed in
cases where the ADS (MDS) algorithms have a hard time converging to
an acceptable solution.
We've got the Tektronix IPA-610 system in the lab which is supposed to
be pretty good for creating models with TDR measurements, but I don't
believe I've ever used it for that purpose.
By the way, the measured S-parameters can also be used to create mathematical
'black-box' models that are realized as Padé (or related) approximations
(ratios of polynomials).
In these cases you need to be concerned with model passivity and stability, as
well as making sure you use enough poles and zeros to adequately approximate
the DUT's response. I won't go into the details here, but the mathematical
black-box model can be a powerful technique for creating measurement-based
models as an alternative to element-based models; you need to be very
careful with your assumptions or you can get into trouble.
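As a rough illustration of the rational-approximation idea (entirely my sketch, not the tooling the post refers to; production measurement-based fits use more robust iterative schemes such as vector fitting), here is a linearized least-squares fit of a ratio of polynomials, with a passivity sanity check on the result:

```python
import numpy as np

def rational_fit(s, h, n_num, n_den):
    """Crude linearized LS fit of h(s) ~ N(s)/D(s) with D monic.

    Linearization: N(s) - h(s)*(D(s) - s^n_den) = h(s)*s^n_den,
    which is linear in the unknown numerator and denominator coefficients.
    """
    A_num = np.vander(s, n_num + 1, increasing=True)   # 1, s, s^2, ...
    A_den = np.vander(s, n_den, increasing=True)       # den coeffs below s^n_den
    A = np.hstack([A_num, -h[:, None] * A_den])
    b = h * s ** n_den
    # stack real/imag parts so the least-squares solve stays real-valued
    x, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([b.real, b.imag]), rcond=None)
    num = x[:n_num + 1]
    den = np.concatenate([x[n_num + 1:], [1.0]])       # monic denominator
    return num, den

# demo on a known one-pole response; frequency is normalized to keep the
# Vandermonde matrices well conditioned
w = np.linspace(0.1, 10.0, 200)
s = 1j * w
h = 1.0 / (1.0 + s)
num, den = rational_fit(s, h, 0, 1)
h_fit = (np.polynomial.polynomial.polyval(s, num)
         / np.polynomial.polynomial.polyval(s, den))
# passivity sanity check on the fitted model: |H(jw)| must not exceed 1
assert np.max(np.abs(h_fit)) <= 1.0 + 1e-9
```

This only checks passivity on the fit grid; real black-box model flows enforce passivity and stability over all frequency, which is exactly the "be careful with your assumptions" caveat above.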
**** To unsubscribe from si-list or si-list-digest: send e-mail to email@example.com. In the BODY of message put: UNSUBSCRIBE si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
si-list archives are accessible at http://www.qsl.net/wb6tpu
This archive was generated by hypermail 2b29 : Thu Apr 20 2000 - 11:35:13 PDT