Re: [SI-LIST] : Waveform comparison metrics

Dima Smolyansky ([email protected])
Thu, 29 Jul 1999 17:25:51 -0700

Greg:

One of the reasons that the human eye is such a great processing device is
that it selects which portions of the waveform are important and which are
not. To emulate this process, subtraction using a weighting function may be
a better approach than direct subtraction. Then again, as Alex pointed out,
one has to deal with any time offset between the measured and the simulated
data.
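
A minimal sketch of this weighted-subtraction idea in Python (the
edge-emphasis weighting function and the NumPy details are illustrative
assumptions; both waveforms are assumed to be on a common time base
already):

import numpy as np

def weighted_difference(t, v_meas, v_sim, weight):
    """Weighted mean absolute difference between two waveforms that are
    already sampled on the same time base t."""
    w = weight(t, v_meas)
    return np.sum(w * np.abs(v_meas - v_sim)) / np.sum(w)

def edge_weight(t, v):
    """Illustrative weight: emphasize samples near switching edges (large
    dV/dt) and de-emphasize steady-state portions, roughly mimicking where
    the eye focuses."""
    dv_dt = np.abs(np.gradient(v, t))
    return 1.0 + dv_dt / dv_dt.max()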

-Dima
=================
Dima Smolyansky
Time Domain Analysis Systems, Inc.
7465 SW Elmwood St.
Portland, OR 97223
(503) 977-3629
(503) 804-7171 (mobile)
(503) 245-5684 (fax)
www.tdasystems.com
The Interconnect Modeling Company (TM)

-----Original Message-----
From: [email protected] <[email protected]>
To: [email protected] <[email protected]>
Date: Thursday, July 29, 1999 3:14 PM
Subject: Re: [SI-LIST] : Waveform comparison metrics

>Alex,
>
>Funny you should post this today - we were just working on this very
>thing! Our application is a method to compare lab data with simulation
>predictions using IBIS or SPICE models, whichever you happen to have. I am
>part of a committee that wrote a document called the "IBIS Accuracy
>Specification," which you can find on the IBIS web site under accuracy, if
>you're interested. We settled on a twist on your first method, subtraction.
>First you have to interpolate to put the waveforms on a common x-axis.
>(Scopes don't usually allow you to pick dx, while most simulators do.)
>Then you have to slide the waveforms so that they both cross some
>threshold at the same time. Finally, you take the absolute value of the
>difference between each pair of data points and average these numbers over
>the whole set of data points. The method does have its downside: a DC
>offset will make the correlation look worse than it really is. Likewise,
>really good correlation on one part of the waveform (say, the steady
>state) can mask really lousy correlation on another part of the waveform.
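
A minimal sketch of that procedure in Python (the function name, the 1.5 V
alignment threshold, and the 10 ps resampling step are illustrative
assumptions, not values taken from the IBIS Accuracy Specification):

import numpy as np

def mean_abs_difference(t_meas, v_meas, t_sim, v_sim,
                        threshold=1.5, dt=10e-12):
    """Interpolate both waveforms onto a common time base, align them at
    the first rising crossing of 'threshold', and return the mean absolute
    difference in volts."""
    # 1. Resample both waveforms onto a uniform time step dt over the
    #    interval where they overlap.
    t0 = max(t_meas[0], t_sim[0])
    t1 = min(t_meas[-1], t_sim[-1])
    t = np.arange(t0, t1, dt)
    m = np.interp(t, t_meas, v_meas)
    s = np.interp(t, t_sim, v_sim)

    # 2. Slide the simulated waveform so both cross the threshold at the
    #    same time (first rising crossing).
    def first_crossing(v):
        idx = np.nonzero((v[:-1] < threshold) & (v[1:] >= threshold))[0]
        return idx[0] if idx.size else 0

    shift = first_crossing(m) - first_crossing(s)
    s = np.roll(s, shift)

    # 3. Average the absolute difference over the samples that still
    #    overlap after the shift.
    if shift > 0:
        m, s = m[shift:], s[shift:]
    elif shift < 0:
        m, s = m[:shift], s[:shift]
    return np.mean(np.abs(m - s))
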
>
>I'd be interested in hearing more about your correlation coefficient idea.
>The best solution may be to run 2 or 3 metrics, along with a listing of
>what you call "basic metrics." Let each metric tell you something
>different about the correlation. It would be real nice if we had a piece
>of shareware to do these computations. We're leaning toward having our IC
>vendors provide us with this kind of data so we can assess model quality.
>It would be good if everyone were singing from the same hymnal!
>
>Anybody else have other ideas?
>
>Greg Edlund
>Advisory Engineer, Critical Net Analysis
>IBM
>3650 Hwy. 52 N, Dept. HDC
>Rochester, MN 55901
>[email protected]
>
>
>"Levin, Alexander" <[email protected]> on 07/29/99 02:06:22 PM
>
>Please respond to [email protected]
>
>To: "'[email protected]'" <[email protected]>
>cc:
>Subject: [SI-LIST] : Waveform comparison metrics
>
>In the course of a design, there are many occasions requiring the
>comparison of waveforms to determine "sameness". Several tasks requiring
>waveform comparison come to mind: building and checking IBIS models,
>comparing/benchmarking simulator tools, and comparing simulated waveforms
>to lab measurements. Aside from the basic metrics of rising/falling delay,
>ringback amplitude, overshoot/undershoot, is there a repeatable
>(automatable?) method which can quantify the degree of matchup between two
>waveforms?
>
>Several approaches come to mind, but each carries its own downfalls.
>Waveform subtraction: Will provide an estimate of the differences between
>waveforms, but is extremely sensitive to any time offset.
>Correlation coefficient: Analogous to a dot product of the voltage-time
>sample points in each waveform. Doesn't capture absolute DC voltage level
>shifts; r^2 offers no information about the type of mismatch.
>FFT: Can capture similarity in edge rates, ringing period, etc., but key
>waveform differences may be masked or lost in the high-frequency noise
>seen in FFTs of digital waveforms.
>Overlaying and eyeballing: The human eye is an excellent image processing
>device, but the statements "looks good" or "that's a lot of overshoot" are
>often too subjective.
>Apply basic metrics: Measurements of rise/fall delay, ringback,
>over/undershoot, ringing period, etc. are typically used. Again, however,
>the interpretation of the matchup is subjective.
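
A minimal sketch of the correlation coefficient approach in Python,
assuming the two waveforms have already been interpolated onto a common
time base (the function name and the use of np.corrcoef are illustrative
assumptions):

import numpy as np

def correlation_coefficient(v_meas, v_sim):
    """Pearson correlation coefficient between two waveforms sampled on
    the same time base. It is insensitive to DC offsets and overall
    scaling, which is exactly the limitation noted above."""
    return np.corrcoef(v_meas, v_sim)[0, 1]

# r^2 can then be reported alongside the other metrics, e.g.:
# r2 = correlation_coefficient(v_meas, v_sim) ** 2
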
>
>I'm not necessarily looking for a catchall solution, but would be
>interested in hearing about any novel approaches people are using.
>Waveform overlay will still have its place, but it would be nice to combine
>this with less subjective methods.
>
>Thanks much,
>Alex Levin

**** To unsubscribe from si-list: send e-mail to [email protected]. In the BODY of message put: UNSUBSCRIBE si-list, for more help, put HELP. si-list archives are accessible at http://www.qsl.net/wb6tpu/si-list ****