From: Steve Corey ([email protected])
Date: Fri Jun 01 2001 - 20:55:55 PDT
Hi Andy -- what you say is true, and I wasn't very clear in my earlier post.
In the interest of conciseness, I seem to have compromised clarity...
The point I was trying to make was this: for a particular simulation run
(defined in part by the timesteps at which it is evaluated), or indeed for any
simulation whose timesteps all exceed some minimum value (which covers every
possible simulation run), there exists a lumped model that behaves identically
to the distributed model at every timestep. If you view the output waveforms,
you can't tell whether the model was "distributed" or "lumped", since their
behavior at each timestep is identical. I wasn't trying to imply that a
simulator necessarily discretizes the distributed model explicitly before
performing the simulation -- thanks for making that clear.
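To make the equivalence concrete, here is a minimal sketch (Python/NumPy; the
timestep, delay, and source waveform are assumed values for illustration). An
ideal lossless line of delay tau, sampled at a fixed timestep dt with tau an
integer multiple of dt, is indistinguishable at the sample points from a
finite shift register -- a lumped, finite-state system:

```python
import numpy as np

dt = 0.1           # simulator timestep (assumed value)
tau = 0.5          # line delay, an integer multiple of dt (assumed value)
n_delay = round(tau / dt)

t = np.arange(0, 2, dt)
u = np.sin(2 * np.pi * t)          # arbitrary source waveform, zero for t < 0

# "Distributed" model: evaluate the continuous delayed waveform at the samples.
y_distributed = np.sin(2 * np.pi * (t - tau)) * (t >= tau)

# "Lumped" equivalent: a shift register holding the last n_delay samples.
y_lumped = np.concatenate([np.zeros(n_delay), u[:-n_delay]])

# The two outputs agree at every timestep.
print(np.allclose(y_distributed, y_lumped))
```

If tau is not an integer multiple of dt, a different (interpolating) lumped
model is needed -- that is where the minimum-timestep qualification above
comes in.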
The reason I brought up that point was to say that, if we consider accuracy
alone, there is no reason why a distributed model is necessary -- a lumped one
is entirely sufficient. The proofs hold for models based on convolution/Laplace
methods as well as for the purely lossless case; in fact, they hold for any
linear system -- lumped, TEM, full wave, and more.
However, looking at the accuracy of a model without addressing its
computational efficiency is really just an academic exercise, as I am well
aware.
As you have pointed out, the reason for using a distributed model is that for
moderately to highly distributed, transmission-line-like structures, it
generally simulates faster than an identically behaving lumped model would.
This is why several companies are making good money selling simulators that
implement distributed T-line models.
Steven D. Corey, Ph.D.
Time Domain Analysis Systems, Inc.
"The Interconnect Modeling Company."
email: [email protected]
phone: (503) 246-2272
fax: (503) 246-2282
"Ingraham, Andrew" wrote:
> Steve Corey wrote:
> > Finally, just for the sake of interest, I would mention that from a
> > time-domain simulator's perspective, the concept of a distributed model
> > vs.
> > a lumped model is a very blurry one, since as the simulator chooses its
> > time steps, every model gets effectively discretized into lumps of some
> > sort.
> Not necessarily. The simplest (lossless) transmission line model is just a
> pure time delay. No need to divide it up based on the timestep size.
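That pure-delay view can be sketched directly (a hedged illustration, not any
particular simulator's code): a lossless line driven into a matched
termination reduces to a FIFO holding tau/dt past samples, with no lumping
tied to the timestep at all.

```python
import collections

def simulate_matched_line(v_in, n_delay):
    """Delay the input waveform by n_delay timesteps (matched lossless line)."""
    fifo = collections.deque([0.0] * n_delay, maxlen=n_delay)
    v_out = []
    for v in v_in:
        v_out.append(fifo[0])   # wave arriving at the far end now
        fifo.append(v)          # wave launched at the near end now
    return v_out

step = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(simulate_matched_line(step, 2))  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
```

With unmatched terminations the same structure still works; one tracks the
two traveling waves and applies reflection coefficients at each end.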
> Some simulators do the opposite, however: they choose their timestep size
> based on the shortest transmission line found in the netlist.
> Conceptually, even a lossy line model could be implemented as a convolution
> with the impulse response of the line, found by Laplace methods, without
> having to section the line into lumped elements; but I think the main
> problem is that this is computationally inefficient.
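A rough sketch of that convolution approach (the impulse response below is a
made-up, dispersed pulse standing in for one actually derived by
Laplace/inverse-transform methods; all numbers are illustrative only):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 1, dt)

# Placeholder impulse response: a delayed, dispersed pulse (illustrative only).
tau, spread = 0.2, 0.02
h = np.exp(-((t - tau) ** 2) / (2 * spread ** 2))
h /= h.sum()                       # normalize to unit DC gain

u = (t >= 0.1).astype(float)       # step input launched at t = 0.1
y = np.convolve(u, h)[: len(t)]    # output: a delayed, smoothed step

# The inefficiency noted above: a direct convolution at timestep n touches
# all n past samples, so an N-step run costs O(N^2), versus O(N) for a
# recursive (lumped or delay-based) model.
print(round(y[-1], 3))             # output settles near the input level, 1.0
```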
> **** To unsubscribe from si-list or si-list-digest: send e-mail to
> [email protected] In the BODY of message put: UNSUBSCRIBE
> si-list or UNSUBSCRIBE si-list-digest, for more help, put HELP.
> si-list archives are accessible at http://www.qsl.net/wb6tpu
This archive was generated by hypermail 2b29 : Thu Jun 21 2001 - 10:12:13 PDT