RE: [SI-LIST] : Waveform comparison metrics

tomda ([email protected])
Fri, 30 Jul 1999 14:01:24 -0700

Greg,

If a vendor gave you a part that was described as "typical", and the
measurements of the typical part and the simulation with the typical model
didn't have 100% correlation (you define the method), would you say it was
a bad model? Would you say it was a bad model if the typical was bounded by
the min/max? Would you not buy that vendor's parts?

I may be showing my age and the poor process control a prior employer had,
but I remember characterizations that included which boat location a wafer
came from and where on the wafer a die had been cut. Processes are much
better now, but "typical" may still have a lot of spread.

I'm curious: what % correlation are you expecting? I wonder what the
industry expects?

Tom Dagostino

-----Original Message-----
From: [email protected] [SMTP:[email protected]]
Sent: Friday, July 30, 1999 11:20 AM
To: [email protected]
Subject: RE: [SI-LIST] : Waveform comparison metrics

Tom,

Your point about processing conditions is well taken. Junction temperature
is another variable as well. The usefulness of a non-envelope correlation
metric lies in one's ability to procure samples that are known to come from
a typical lot. Some vendors are even able to deliver known fast and slow
samples early in their process development. A fab line will keep track of
parametric data and should have a pretty good idea of whether or not a
given lot is typical silicon. The trick is getting them to pick samples on
the day that things look typical! I've had vendors who were willing to
trace the processing conditions of a given sample, but I've had less
success the other way around.

Greg Edlund
Advisory Engineer, Critical Net Analysis
IBM
3650 Hwy. 52 N, Dept. HDC
Rochester, MN 55901
[email protected]

tomda <[email protected]> on 07/29/99 06:12:04 PM

Please respond to [email protected]

To: "'[email protected]'" <[email protected]>
cc:
Subject: RE: [SI-LIST] : Waveform comparison metrics

I've been struggling with this problem for a long time. I don't think
trying to correlate a set of measurements to the output of a SPICE file is
going to get you anywhere except frustrated. SPICE typically gives you the
"minimum", "typical", and "maximum" characteristics of a buffer. When you
use the SPICE-to-IBIS shareware, you get an IBIS model that contains these
three sets of data. This is how most models are created.

Simulating that part in a circuit will get you three sets of outputs,
corresponding to the typ/min/max data that was in the model.

If you then take a part off the shelf and measure it in the circuit used
above, you will get an answer that should fall within the window defined by
the min/max models used in simulation. Trying to compute a correlation
coefficient between an unknown part and any of the three typ/min/max
simulations is guaranteed to give an answer that will not correlate well
but may in fact be correct.
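
The envelope check described above is easy to automate. Here is a minimal
sketch, assuming plain Python lists of voltage samples on a shared time
base; the sigmoid corner edges and the `within_envelope` name are
illustrative assumptions, not taken from any IBIS tool:

```python
import math

def within_envelope(v_meas, v_min, v_max, tol=0.0):
    """True if every measured sample lies inside the min/max
    simulation envelope, widened by an optional tolerance."""
    return all(min(lo, hi) - tol <= m <= max(lo, hi) + tol
               for m, lo, hi in zip(v_meas, v_min, v_max))

def edge(t, t50, tau):
    """Illustrative 3.3 V sigmoid rising edge (midpoint t50, scale tau)."""
    return 3.3 / (1.0 + math.exp(-(t - t50) / tau))

# Fast/slow corner simulations bounding a typical measured part
ts = [i * 0.05e-9 for i in range(101)]            # 0..5 ns
v_fast = [edge(t, 1.0e-9, 0.2e-9) for t in ts]
v_slow = [edge(t, 2.0e-9, 0.4e-9) for t in ts]
v_typ  = [edge(t, 1.5e-9, 0.3e-9) for t in ts]

print(within_envelope(v_typ, v_slow, v_fast, tol=0.01))  # True
```

A pass/fail envelope test like this avoids the false alarms that a
point-by-point correlation of an unknown sample against any single corner
would raise.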

Tom Dagostino

-----Original Message-----
From: [email protected] [SMTP:[email protected]]
Sent: Thursday, July 29, 1999 2:27 PM
To: [email protected]
Subject: Re: [SI-LIST] : Waveform comparison metrics

Alex,

Funny you should post this today - we were just working on this very thing!
Our application is a method to compare lab data with simulation predictions
using IBIS or SPICE models, whichever you happen to have. I am part of a
committee that wrote a document called the "IBIS Accuracy Specification,"
which you can find on the IBIS web site under accuracy, if you're
interested. We settled on a twist on your first method, subtraction. First
you have to interpolate to put the waveforms on a common x-axis. (Scopes
don't usually allow you to pick dx, while most simulators do.) Then you
have to slide the waveforms so that they both cross some threshold at the
same time. Finally, you take the absolute value of the difference between
corresponding data points and average these numbers over the whole set of
data points. The method does have its downside. A dc offset will make the
correlation look worse than it really is. Likewise, really good correlation
on one part of the waveform (say, the steady state) can mask really lousy
correlation on another part of the waveform.
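
The procedure above (interpolate onto a common axis, align at a threshold
crossing, average the absolute difference) can be sketched in a few lines.
This is only one reading of the method, not the committee's reference code;
the sigmoid test edges, function names, and the 1.65 V threshold are
illustrative assumptions:

```python
import math

def edge(t, t50, tau):
    """Illustrative 3.3 V sigmoid rising edge (not from the spec)."""
    return 3.3 / (1.0 + math.exp(-(t - t50) / tau))

def first_crossing(ts, vs, vth):
    """Linearly interpolated time of the first crossing of vth."""
    for i in range(1, len(vs)):
        if (vs[i-1] - vth) * (vs[i] - vth) <= 0 and vs[i] != vs[i-1]:
            frac = (vth - vs[i-1]) / (vs[i] - vs[i-1])
            return ts[i-1] + frac * (ts[i] - ts[i-1])
    raise ValueError("threshold never crossed")

def interp(ts, vs, t):
    """Linear interpolation of samples (ts, vs) at time t, clamped."""
    if t <= ts[0]:
        return vs[0]
    for i in range(1, len(ts)):
        if t <= ts[i]:
            frac = (t - ts[i-1]) / (ts[i] - ts[i-1])
            return vs[i-1] + frac * (vs[i] - vs[i-1])
    return vs[-1]

def mean_abs_diff(t1, v1, t2, v2, vth, dt, n):
    """Slide waveform 2 so both cross vth at the same time, resample
    both onto a common n-point axis with step dt, average |difference|."""
    shift = first_crossing(t1, v1, vth) - first_crossing(t2, v2, vth)
    grid = [t1[0] + k * dt for k in range(n)]
    return sum(abs(interp(t1, v1, t) - interp(t2, v2, t - shift))
               for t in grid) / n

# Same edge shape, one delayed 0.3 ns: near-zero metric after alignment
t1 = [i * 0.05e-9 for i in range(101)]            # 0..5 ns
v1 = [edge(t, 1.5e-9, 0.3e-9) for t in t1]
v2 = [edge(t, 1.8e-9, 0.3e-9) for t in t1]
m = mean_abs_diff(t1, v1, t1, v2, 1.65, 0.05e-9, 90)
print(f"mean |diff| = {m:.2e} V")
```

As noted above, a dc offset inflates the figure, and a tight steady-state
match can hide a poor edge match, so the single number is best read
alongside an overlay plot.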

I'd be interested in hearing more about your correlation coefficient idea.
The best solution may be to run 2 or 3 metrics, along with a listing of
what you call "basic metrics." Let each metric tell you something different
about the correlation. It would be real nice if we had a piece of shareware
to do these computations. We're leaning toward having our IC vendors
provide us with this kind of data so we can assess model quality. It would
be good if everyone were singing from the same hymnal!

Anybody else have other ideas?

Greg Edlund
Advisory Engineer, Critical Net Analysis
IBM
3650 Hwy. 52 N, Dept. HDC
Rochester, MN 55901
[email protected]

"Levin, Alexander" <[email protected]> on 07/29/99 02:06:22 PM

Please respond to [email protected]

To: "'[email protected]'" <[email protected]>
cc:
Subject: [SI-LIST] : Waveform comparison metrics

In the course of a design, there are many occasions requiring the
comparison of waveforms to determine "sameness". Several tasks requiring
waveform comparison come to mind: building and checking IBIS models,
comparing/benchmarking simulator tools, and comparing simulated waveforms
to lab measurements. Aside from the basic metrics of rising/falling delay,
ringback amplitude, and overshoot/undershoot, is there a repeatable
(automatable?) method which can quantify the degree of matchup between two
waveforms?

Several approaches come to mind, but each carries its own downfalls.
Waveform subtraction: Provides an estimate of the differences between
waveforms, but is extremely sensitive to any time offset.
Correlation coefficient: Analogous to a dot product of the voltage-time
sample points in each waveform. Doesn't capture absolute DC voltage level
shifts; r^2 offers no information about the type of mismatch.
FFT: Can capture similarity in edge rates, ringing period, etc., but key
waveform differences may be masked or lost in the high-frequency noise seen
in FFTs of digital waveforms.
Overlaying and eyeballing: The human eye is an excellent image-processing
device, but statements like "looks good" or "that's a lot of overshoot" are
often too subjective.
Applying basic metrics: Rise/fall delay, ringback, over/undershoot, ringing
period, etc. are typically used. Again, however, the interpretation of the
matchup is subjective.
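
A minimal sketch of the correlation-coefficient approach (Pearson's r over
the voltage samples); the ramp data is purely illustrative. It also
demonstrates the blind spot noted above: a pure DC offset still yields
r = 1.

```python
import math

def corr_coeff(v1, v2):
    """Pearson correlation of two equal-length sample vectors."""
    n = len(v1)
    m1, m2 = sum(v1) / n, sum(v2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(v1, v2))
    den = math.sqrt(sum((a - m1) ** 2 for a in v1) *
                    sum((b - m2) ** 2 for b in v2))
    return num / den

# Identical shapes separated by a 1.0 V DC offset correlate perfectly
ramp = [0.1 * i for i in range(50)]
shifted = [v + 1.0 for v in ramp]
print(round(corr_coeff(ramp, shifted), 6))  # 1.0
```

This is why r (or r^2) alone cannot stand in for the basic metrics; it is
one signal among several.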

I'm not necessarily looking for a catchall solution, but I would be
interested in hearing about any novel approaches people are using.
Waveform overlay will still have its place, but it would be nice to combine
this with less subjective methods.

Thanks much,
Alex Levin

**** To unsubscribe from si-list: send e-mail to
[email protected]. In the BODY of message put: UNSUBSCRIBE
si-list, for more help, put HELP. si-list archives are accessible at
http://www.qsl.net/wb6tpu/si-list ****
