RE: [SI-LIST] : receiver jitter


From: Roy Leventhal ([email protected])
Date: Tue Jan 18 2000 - 15:02:54 PST


As always, Arpad's remarks are very helpful.

And yes, what you put in the model is very important. But there is also a need to
consider the analysis approach on equal terms. Beyond that, you had better be a
(manufacturing) control freak and very brave (foolhardy) if your designs require
low-error (a percent or two) simulations to give you assurance they will work.

Anyway, I include my comments prompted by a similar discussion:

Choosing and Using Models and Simulators

Summary

Below are my responses to a discussion on the SI-List reflector regarding what
models to use and which simulators are more accurate.

In short, it depends. Most modern Signal Integrity simulators are capable of
errors of less than +/- 6% to 8%. This is what I call "accurate within an
engineering approximation." This should be sufficient for good design practices.
The caveats are:

* Glaring errors arise at every turn. Therefore, reality checks in the lab
should be standard practice.

* Assumptions based on previous experience constantly need to be re-validated in
a period of rapid technological change such as we have now. Therefore,
whole-board scans to turn up surprises are probably in order given where
switching speeds are going.

* People don't always follow good top-down design with sufficient safety margins
to manage "accurate within an engineering approximation" risks. Therefore,
exploring a solution space, i.e., the allowable variation, should be part of the
standard design process.

* Good design and design intent get short-circuited by Murphy's law gotchas at
every turn. Plus, every design is a compromise between the ideal and the
practical. Therefore, applying, managing and adjusting a system of design
constraints through the design automation tools is critical.

* Establishing trust in simulation is a prerequisite to having engineers switch to
using it from their tried-and-true methods (or no methods) of validating their
designs. Therefore, validation and learning the limitations of a tool set should
be part of the standard design process.

* Different models and different simulators each have things they do well and
things they do poorly. It's best to know what those things are. Therefore, they
are discussed below:

Sent by: "CHANG, MARK (HP-SanJose, ex1)" <[email protected]>
To: Roy Leventhal/MW/US/3Com
cc:
Subject: RE: ibis model at Gbit speed

Hi Roy!

This was from the SI posterboard. I was curious as to which simulators have better
capabilities.

Thanks!

-Mark

Re: [SI-LIST]: IBIS models at Gbit speeds
Roy Leventhal ([email protected])
Thu, 9 Dec 1999 08:28:29 -0600

All,

Mike Mayer's comment doesn't surprise me. And, everyone who has responded sees
it through his or her own filter. Some have made comments that relate Mike's
story to simulator capabilities, some relate to how far you can push the IBIS
model.

My perception of Mike's comment simply picks up on the low amount of effort and
expense that the model supplier is apparently willing to put into their product.
Apparently, they should be capable of better. This slipshod work is a lamentable
human trait that is very common. Before you think me too pure, I have slipped
into it on occasion myself.

My problem is how very common such slipshod work is in the models I have seen,
and not just IBIS. It's rare (but not unheard of) that I have seen evidence
that my model suppliers take pride in delighting the customer.

Regards,

Roy

Mark,

I'll expand a bit on what I had in mind and try to answer your question(s)
regarding high frequency simulations:

a. Regarding Simulators

I have had direct experience with Cadence's SigXP/SpecctraQuest for nearly 4 years
now. I believe their simulation accuracy has improved (with a notable exception;
see 2D Field Solution of Interconnect below) and is capable of +/- 1%-2% error on
interconnect, and +/- 6%-8% (perhaps better if they really understood the
controls needed on verification studies) on interconnect + IBIS model + SPICE
elements combined, as established by some of their own verification studies. I
believe this is fully competitive with XTK, HyperLynx or anything else out there
within the limits of application of the analysis approach.

I believe (by reputation) that XTK, HyperLynx, HP-EEsof and other major
simulators are also capable of the same.

b. Regarding Analysis Methods

Currently there are 3 popular analysis approaches in use, based on implementing 3
ways of modeling active devices - IBIS, SPICE and Scattering Parameters. There
are also 3 ways of modeling the interconnect - closed form algorithms, and 2D and
3D extraction of RLGC matrices and parasitics.

IBIS
IBIS is behavioral, thus inherently fast, and thus inherently VERY accurate
close to the conditions under which it was MEASURED. Since most IBIS models are
simulated (not measured) from SPICE, they are totally dependent on the quality of
the SPICE model. They can only be as good as the SPICE models, and there is
usually some information loss in the conversion process. Also, IBIS is
inherently large signal, wideband and suited to switching simulations. IBIS
models are limited in their range of accuracy - they do not model the effects of
temperature, voltage and wide-ranging bias conditions. The model (actually a
data-exchange protocol) provides for modeling the corners on process, operating
voltage and temperature through min-max data. But such information is often not
provided, or is suspect.
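
To make the min/typ/max corner idea concrete, here is a minimal Python sketch
(the keyword name follows the IBIS convention, but the table values are invented,
not taken from any real model) of how a corner-dependent V-I table might be
represented and interpolated:

import bisect

# Each entry: voltage (V) -> (typ, min, max) current in amperes,
# mimicking one row of an IBIS [Pullup] table. Values are invented.
pullup_table = [
    (0.0, (-0.060, -0.045, -0.075)),
    (1.5, (-0.035, -0.026, -0.044)),
    (3.0, (-0.010, -0.007, -0.013)),
    (3.3, ( 0.000,  0.000,  0.000)),
]

def corner_current(v, corner="typ"):
    """Linearly interpolate the table at voltage v for the chosen corner."""
    idx = {"typ": 0, "min": 1, "max": 2}[corner]
    volts = [row[0] for row in pullup_table]
    i = bisect.bisect_left(volts, v)
    if i == 0:
        return pullup_table[0][1][idx]
    if i >= len(volts):
        return pullup_table[-1][1][idx]
    v0, c0 = pullup_table[i - 1][0], pullup_table[i - 1][1][idx]
    v1, c1 = pullup_table[i][0], pullup_table[i][1][idx]
    return c0 + (c1 - c0) * (v - v0) / (v1 - v0)

if __name__ == "__main__":
    for corner in ("min", "typ", "max"):
        print(corner, corner_current(2.0, corner))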

SPICE
SPICE models are actually a computer implementation of Gummel-Poon, Ebers-Moll,
Shichman-Hodges, etc., device-physics based modeling of active devices.
These models are physical representations of the inner workings of a device in
terms of dependent voltage/current generators and parasitic elements such as
diffusion capacitance. The range of applicability can be very large if the
underlying device physics equations are taken into account for computing new
values of the parameters as bias conditions shift. The equations must be kept
up-to-date as technology changes. For instance, in a discrete BJT, junction
sidewall capacitance is treated as negligible. But in a deep sub-micron CMOS IC
it dominates. The effects of temperature, voltage, large signal swings and
wide-ranging bias conditions can be modeled successfully. SPICE has had a much
longer history than IBIS. People trust it more, and more of the kinks have been
worked out.
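
As an illustration of that kind of bias-dependent parameter update, here is a
minimal Python sketch using the standard SPICE junction-capacitance expression
Cj = CJO / (1 + Vr/VJ)^MJ for reverse bias; the parameter values are invented,
not from any particular device:

# How a device-physics based model re-computes an element value as bias shifts:
# the standard SPICE junction capacitance for a reverse-biased junction.
# Parameter values below are invented example numbers.

CJO = 2.0e-12   # zero-bias junction capacitance, farads
VJ  = 0.7       # built-in junction potential, volts
MJ  = 0.5       # grading coefficient (0.5 ~ abrupt junction)

def junction_capacitance(v_reverse):
    """Junction capacitance (F) at a given reverse bias (volts, >= 0)."""
    return CJO / (1.0 + v_reverse / VJ) ** MJ

if __name__ == "__main__":
    for v in (0.0, 1.0, 3.3, 5.0):
        print(f"Vr = {v:4.1f} V  ->  Cj = {junction_capacitance(v)*1e12:.2f} pF")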

That said, SPICE is usually much slower than IBIS and is not a good place to
start a Signal Integrity simulation scan of a board. A full-up BJT or BSIM3
model of a MOSFET can easily contain 40 or more parameters / circuit elements
for a single IO cell. If you have, for instance, a 3000-net board and have to
quickly scan it to find out where your critical nets are, well, come back a year
from now! Besides, SPICE is primarily an input-to-output model, and in Signal
Integrity the simulation task is simplified to computing reflections at the
input (or output) terminals of the IO cell.

In top-down design, critical nets are often the ones you missed because your
design rules are out of date and your assumptions about what CAD would do with
your board are no longer valid. A common assumption is that the edge rate of an
IO cell bears some rational relation to the clock speed being used. Not a good
assumption when your silicon supplier applies a die shrink, doesn't tell you, and
gives you an IC running at 4 MHz with a 400 ps edge rate! This is what I call
creating unnecessary problems.

One objective of good top-down design is to avoid the need for highly detailed,
highly accurate models by designing in as much performance-factor margin as
possible from the get-go. You can always switch from an IBIS model to a SPICE
model if, after the application of these principles, you are left with a (small)
number of critical nets pushing the state-of-the-art.

Also, I've seen lots of bad IBIS models where the obvious cause was a bad SPICE
model that was converted. For instance, where you get clamp currents of 1E+33 at
0.75 volts over the rail because the SPICE diode clamp resistance was
approximated to zero.

Scattering Parameters
Scattering Parameters are sets of reflection/transmission coefficients of a
device, circuit or interconnect. Thus, they are 4-element 2-port (or, more
generally, N^2-element N-port) behavioral models suitable for being accurately
measured and characterized at very high frequencies. The output port is attached
to a matched load rather than attempting to create an ideal broadband short or
open at microwave (radiating) frequencies. Scattering parameter black boxes are
then embedded in a circuit, and the effects of mismatched loading are easily
modeled. They have the same advantages and disadvantages as any behavioral model.
To characterize large signal behavior, wide bandwidth frequency content,
temperature variations, etc., you need many sets of these parameters for a given
device. But to model driver-receiver combinations over a net you need only the
reflection coefficients at the driver and receiver, not their transmission
coefficients or their coefficients at the other ports. For the interconnect you
need the full set if you are going to model it with Scattering Parameters.
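
A minimal Python sketch of the reflection-coefficient idea: with the other port
terminated in the reference impedance, S11 of a simple load Zl reduces to
gamma = (Zl - Z0)/(Zl + Z0). The load values below are invented for illustration:

import math

Z0 = 50.0  # reference (system) impedance, ohms

def reflection_coefficient(zl):
    """Complex reflection coefficient of load zl in a Z0 system."""
    return (zl - Z0) / (zl + Z0)

def return_loss_db(zl):
    """Return loss in dB (larger number = less reflection)."""
    return -20.0 * math.log10(abs(reflection_coefficient(zl)))

if __name__ == "__main__":
    # A 65-ohm resistive load, and 50 ohms shunted by 2 pF at 2.5 GHz.
    f = 2.5e9
    zc = 1.0 / (1j * 2.0 * math.pi * f * 2.0e-12)   # impedance of 2 pF
    z_shunt = (50.0 * zc) / (50.0 + zc)             # 50 ohms in parallel with 2 pF
    for label, zl in (("65 ohm", 65.0), ("50 ohm || 2 pF @ 2.5 GHz", z_shunt)):
        g = reflection_coefficient(zl)
        print(f"{label:28s} |gamma| = {abs(g):.3f}  RL = {return_loss_db(zl):.1f} dB")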

There is considerable belief that Scattering Parameters will become a necessary
evil as edge rate frequency content pushes into the 5 GHz to 10 GHz range and
beyond. These frequencies are beginning to enter the microwave range. They become
significant at edge rates of 200 ps and faster, which we are beginning to see. We
will definitely see these frequencies for advanced technologies such as RAMBUS.
The only tool set I would trust to analyze this technology at this point is the
one from HP-EEsof. Any others I would have to validate.

Closed Form Algorithms of Interconnect
In other words, equations - another type of behavioral model. These can be
accurate to within a percent or two if you are close to the conditions under
which they were developed. But be prepared for errors of +/- 10% just on Zo in
many instances, depending on which equation is being used and what the original
conditions were. These equations were developed empirically, starting in the
early 1950s, by curve fitting before computing power was what it now is. They
are still useful for providing insight into the general behavior of
interconnect that is tougher to get from sets of computer simulations.
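
As an example of such a closed form equation, here is a minimal Python sketch of
the classic surface-microstrip approximation
Z0 = 87/sqrt(er + 1.41) * ln(5.98h/(0.8w + t)); the geometry values are invented,
and the formula itself carries exactly the kind of error band described above:

import math

def microstrip_z0(w, h, t, er):
    """Characteristic impedance (ohms) of surface microstrip.
    w = trace width, h = dielectric height, t = trace thickness (same units),
    er = relative dielectric constant. Closed-form estimate only."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

if __name__ == "__main__":
    # 5-mil-wide, 1.4-mil-thick (1 oz) trace over 4 mils of FR-4 (er ~ 4.3).
    z0 = microstrip_z0(w=5.0, h=4.0, t=1.4, er=4.3)
    print(f"Z0 ~ {z0:.1f} ohms (expect several percent error or more)")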

2D Field Solution of Interconnect
Maxwell's equations are solved in a 2D plane perpendicular to the direction of
propagation down the etch. RLGC matrices are extracted from the solution. Then
the velocity of propagation along the etch is used. Everything is fine so long
as corners, vias, transitions, and other 3D discontinuity effects are not of
major concern. 2D is much faster than solving Maxwell's equations in 3D. Up to
microwave frequencies (where structure size inevitably looks comparable to
wavelength instead of wavelength being much larger), accuracies are typically 1%
- 2%. G is usually approximated to zero, and R often is unless you are
accounting for skin effect.
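
A minimal Python sketch of what is done with the extracted per-unit-length values
in the lossless case (G = 0, R neglected); the L and C numbers are invented but
typical of a roughly 50-ohm PCB trace:

import math

L_per_m = 3.5e-7   # extracted inductance, henries per meter
C_per_m = 1.4e-10  # extracted capacitance, farads per meter

z0  = math.sqrt(L_per_m / C_per_m)   # characteristic impedance, ohms
tpd = math.sqrt(L_per_m * C_per_m)   # propagation delay, seconds per meter

print(f"Z0  ~ {z0:.1f} ohms")
print(f"tpd ~ {tpd*1e9:.2f} ns/m  ({tpd*1e12*0.0254:.1f} ps/inch)")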

3D Field Solution of Interconnect
Use 3D where the effects of bends, etc., are important, as at microwave
frequencies where structure size inevitably looks comparable to the wavelength
instead of the wavelength being much larger. Also, as frequency content pushes
into the 100 GHz range (i.e., where dielectric waveguides, fiber optics and such
phenomena live), you must start accounting for dielectric losses and dispersion.
In the microwave region accuracies are typically 1% - 2% if skin effect is
included.
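
For the skin effect itself, here is a minimal Python sketch of the standard
estimate delta = sqrt(rho / (pi * f * mu)) together with a rough AC resistance
per meter for a wide, flat copper trace; the frequency and trace-width values
are invented:

import math

RHO_CU = 1.68e-8           # copper resistivity, ohm-meters
MU0    = 4.0e-7 * math.pi  # permeability of free space, H/m

def skin_depth(f_hz):
    """Skin depth in copper (meters) at frequency f_hz."""
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0))

def r_ac_per_meter(f_hz, width_m):
    """Rough AC resistance of a wide flat trace, current confined to one skin depth."""
    return RHO_CU / (width_m * skin_depth(f_hz))

if __name__ == "__main__":
    for f in (1e9, 5e9, 10e9):
        d = skin_depth(f)
        r = r_ac_per_meter(f, width_m=125e-6)  # roughly a 5-mil-wide trace
        print(f"f = {f/1e9:4.0f} GHz  skin depth = {d*1e6:.2f} um  Rac ~ {r:.1f} ohm/m")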

Glaring Errors
Certain assumptions are made prior to applying analysis methods. One recent
example I have seen is the assumption that a piece of etch does not couple onto
itself ("self-crosstalk"). In the case of a tightly coupled serpentine delay
line that is a false assumption. The resulting simulation errors, as compared
with lab measurements, are easily 50% or more. All simulators I have tried
are incapable of modeling such a structure correctly. I have yet to see what
HP-EEsof can do, and I'm hoping they can get it right.

Incidentally, this case came to light in the lab when we assumed that CAD would
follow the rules about minimum etch spacing on the meander line loops that we
sent them. Post-route simulation doesn't catch the mistake.

Erring on the side of excessive caution results in over-constrained design. A
recent example of how this might arise occurred when I initially used an IBIS
model that contained only slew rate (dV/dt) information. The device was actually
a soft turn-on/turn-off part with active clamp technology. Simulating with only
slew rate rather than V-T curve information predicts excessive ringing and noise
compared to reality. The glaring error was insufficient effort on model
construction by the model provider.

c. Regarding Models

All models assume that the process being modeled was correctly
measured/simulated when the work was done and that the process has not changed
significantly since. Often this is a bad assumption, as process engineers
"optimize," do die shrinks and other strange things. Usually, the people who have
to provide modeling information to the customer are the last to find out this has
happened - if they ever do.

We also usually assume that the board fab will build the board as the PCB
Designer directs.

>>> Surprise! <<<

d. Regarding Gross Errors vs. Accuracy and Precision

I advocate reality checks in the lab to keep the above Murphy's law effects
manageable. Most of the above examples result in glaring errors obvious to the
most casual observer.

Subtle errors that result in lots of expense are either the result of poor
design or poor risk management. If you have to operate at state-of-the-art where
models are poorly defined, processes are new, performance is being pushed, etc.,
then act accordingly.

Poor design is designing a circuit that is highly sensitive to variation and has
poor or non-existent design margins when the application of good, established
design rules (bypassing, max parallelism, termination, etc.) avoids the problem.

Poor risk management results when you: a.) know you have to push things (cost,
size, speed, etc.); b.) fail to explore the design solution space (statistical
design, desensitization, etc.); and c.) fail to take the steps necessary to
alleviate the risk. I.e., if you must have a slower rise-time part, take
action! Or pray the company will survive your misplaced faith.

e. Regarding the Need for Accuracy and Precision vs. Good Design

Belief in the high accuracy and precision of your simulation tool and model
library needs to be established as part of a trust-building exercise. But then
you should design in a fashion that avoids the need for high accuracy - i.e.,
within +/- 2% - to guarantee performance. You will not often achieve that level
of accuracy in practice.

f. Regarding Verification of Results

You (or your software company and device suppliers) will need to run
verification studies from time to time, because you need to establish,
re-establish and extend the range of trust in your simulator and models. For
such purposes I advocate a "statistical" verification approach. Here, an
envelope of predicted outcomes is compared with an envelope of measured outcomes
for verification over a set of units representative of the process population.
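
A minimal Python sketch of that envelope comparison, using synthetic waveform
data rather than anything measured:

def envelope(waveforms):
    """Per-sample (min, max) envelope over a list of equal-length waveforms."""
    lo = [min(samples) for samples in zip(*waveforms)]
    hi = [max(samples) for samples in zip(*waveforms)]
    return lo, hi

def within_envelope(measured, lo, hi, tol=0.0):
    """True if every measured sample lies within [lo - tol, hi + tol]."""
    return all(l - tol <= m <= h + tol for m, l, h in zip(measured, lo, hi))

if __name__ == "__main__":
    # Three simulated "corner" waveforms and one measured waveform (volts).
    simulated = [
        [0.0, 1.2, 3.1, 3.6, 3.3],
        [0.0, 0.9, 2.8, 3.4, 3.3],
        [0.0, 1.5, 3.3, 3.8, 3.4],
    ]
    measured = [0.0, 1.1, 3.0, 3.7, 3.35]
    lo, hi = envelope(simulated)
    print("within envelope:", within_envelope(measured, lo, hi, tol=0.05))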

Attempting one-to-one correlations to a low error over a meaningful spread of a
process at high frequencies is extremely precise work, best undertaken only by
the truly anal-retentive and technically sophisticated.

Hope this answers some of your questions.

Best Regards,

Roy Leventhal



