Messages from 31050

Article: 31050
Subject: Re: Altera Consultant
From: Gary Cook <gc@sonyoxford.co.uk>
Date: Thu, 10 May 2001 09:37:42 +0100
Pat,

I'm a contractor (currently working at Sony) who does FPGA consultancy and
design work as a sideline.  I'm happy to take a look at your requirements if
you like.

Regards,

Gary Cook.


Pat wrote:

> Hi,
>
> I have an old ISA board with an Altera FPGA configured as a frequency
> counter. I would like to extend the functionality of the board but have no
> FPGA experience. I do however have the original code (MAXPLUS II). Can
> anyone recommend a consultant?
>
> Philip.


Article: 31051
Subject: Re: Synplicity/Quicklogic choosing high drive input
From: Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de>
Date: 10 May 2001 10:20:51 GMT
Ken McElvain <ken@synplicity.com> wrote:
: input foo /* synthesis qln_padtype="normal" */;
: See the help file for examples.

:> 
:> Can anybody give a hint how to stop synplicity to choose a dedicated
:> input pin for an input with high load?

Hello Ken,

thanks for your help.  I probably failed to mention that I am running the
Synplicity tool delivered with Quickworks (V8.2 plus the 8.22 patch), which
identifies itself as "Synplify-Lite 5.1.5" and substantially lags the
current Synplify release.  Local QuickLogic support tried to be helpful,
but only suggested rewriting my Verilog code to include explicit buffers.
However, I am not satisfied with that work-around.

I tried your solution in two places, for a single input and for a vector of
inputs, to be sure that the vector doesn't cause problems (line numbers are
given for reference and are not in the actual files):

       1 `define VERSION "K-DELAY 0.00-1"
       2 
       3 module camacdelay8(/*AUTOARG*/
       4    // Outputs
       5    Q_N, X_N, WRITE_EN_N, R_N, 
       6    // Inouts
       7    DOUT0, DOUT1, DOUT2, DOUT3, DOUT4, DOUT5, DOUT6, DOUT7, 
       8    // Inputs
       9    RESET_N, S1_N, S2_N, N_N, I_N, F_N, A_N, W_N
      10    );
      11    input                RESET_N,S1_N,S2_N,N_N;
      12    input                I_N/* synthesis ql_padtype="normal" */;
      13    input [4:0]          F_N;
      14    input [3:0]          A_N/* synthesis ql_padtype="normal" */;


However, Synplify ignores these hints and gives the following report in the
.srr file:

$ Start of Compile
#Thu May 10 11:11:35 2001

Synplify Verilog Compiler, version 5.1.5, built Jul 20 1999
Copyright (C) 1994-1999, Synplicity Inc.  All Rights Reserved

@I::"c:\cae\pasic\spde\data\macros.v"
@I::"...\camacdelay8.v"
@W:"...\camacdelay8.v":12:27:12:36|ignoring property ql_padtype
@W:"...\camacdelay8.v":14:33:14:42|ignoring property ql_padtype
Verilog syntax check successful!

Pathnames truncated for clarity.

The .qds file still instantiates an input-only pad:

gate A_Nz_p[2] master Q_HINPAD cell IO108 end

Any hints why the properties are ignored?

Thanks again
-- 
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 31052
Subject: 32 bit limit on integers
From: Alan Glynne Jones <alan.glynne-jones@philips.com>
Date: Thu, 10 May 2001 11:48:23 +0100
For the purposes of creating a random number generator in a testbench I
want to use integers with ranges which exceed 32 bits.  Is there a way
to do this?

Alan

Article: 31053
Subject: Re: 32 bit limit on integers
From: Allan Herriman <allan_herriman.hates.spam@agilent.com>
Date: Thu, 10 May 2001 21:07:25 +1000
Alan Glynne Jones wrote:
> 
> For the purposes of creating a random number generator in a testbench I
> want to use integers with ranges which exceed 32 bits.  Is there a way
> to do this?

Hi Alan,

1.  You can use the unsigned or signed types from ieee.numeric_std (or
std_logic_arith if you must), as they work for vector lengths > 32 bits
(a sketch appears below, after option 3).

2.  You might find a simulator running on a 64 bit processor that
supports 64 bit integers.  VHDL defines a minimum range (which is just
under 32 bits), but it doesn't define a maximum range.  You might get
lucky, but it won't be portable.

3.  You can define a record containing two integers, then create a
package that defines (& overloads) all the appropriate maths operators
for this new "bignum" type, then use it in place of regular integers.
If you needed completely arbitrary precision, you could use a string to
represent the number.  (See the Perl arbitrary precision arithmetic
packages "bigint" and "bigfloat" at CPAN for example.)

Regards,
Allan.

Article: 31054
Subject: Waveforms painting
From: Marek Ponca <marek.ponca@et.stud.tu-ilmenau.de>
Date: Thu, 10 May 2001 13:14:44 +0200
Do you know any free tool for painting digital waveforms?
Not for testbench generation, only for documentation.

Marek

Article: 31055
Subject: Re: Shannon Capacity - An Apology
From: nemo@dtgnet.com (Nemo)
Date: Thu, 10 May 2001 12:41:04 GMT
My in-line comments below.  

Davis Moore <dmoore@nospamieee.org> wrote:

>I disagree with your statements. I believe the answer lies clearly in the
>equation itself.
>
>C = W*Log2(1-S/N) bps
>
>Note the units of the information rate is bits per second. 

Indeed it is bps or "binary digits".

>I have
>no idea how you can say that parity bits are factored out of
>this equation. Are the parity bits not transmitted through the same bandwidth
>limited channel as the bits that you are calling *information bits*?

Yes, indeed they are.  Definitely yes!  But consider this: if I can put 200
bps (info plus parity) through the noiseless channel, but added noise causes
half of these bits to be corrupt, then I am only guaranteed 100 bps of
un-errored bits out the other end.  These error free bits *might* be the parity
bits, or might not - they are indeed just information bits.  The trick is to
manipulate the bits (code/decode) so that the 100 error-free bits recovered at
the other end are always the useful information bits.  THAT is what the
Shannon-Hartley equation tells us!  It tells us that given a certain amount of
noise, you should be able to get 100 error-free *information* bits per second
out of the channel even though you put 200 bits per second into the channel.
You see! The value C is our design goal!

>Do they not take the same amount of time to be transmitted
>through the channel? Are the parity bits not required to have
>the same average signal power? Are the parity bits immune
>to perturbations in the channel (noise power)?
>
>It seems as though everyone is taking an abstract view of a theorem
>that is based very strongly in physics.
>
>It seems that everyone is creating a convention that encoding bits are not part
>of the information rate - fine, if you want to do that, but the transmission
>channel is still going to be filled by a maximum number of bits per second
>governed by the Shannon-Hartley Theorem.
>
>Bits per second is still bits per second. Anything you do with the information
>after transmission is your interpretation of the information and is not
>covered by the theorem.
>
>A meaningful information data rate can be derived as
>Cmeaningful = Ctotal - Cencoding.
>
>
>Nemo wrote:
>
>> I apologize to all those posting/reading the recent Shannon capacity thread.  I
>> was 100% wrong in my statements about parity bits.  Here is some background...
>>
>> There has been an on-going thread in this news group about Shannon capacity.
>> The original question was about the following equation:
>>
>> C = W*Log2(1-S/N) bps
>>
>> Does the calculated capacity C include parity bits of a coded channel?
>>
>> I now believe the answer is *no, it does not*.   Throughout Shannon's paper, C
>> refers to the *information rate only*.   The above equation shows the
>> relationship between the amount of error-free *information* throughput,
>> bandwidth and S/N.   The information throughput does *not* include the added
>> bits due to whatever coding scheme is chosen.  In the following typical system,
>> the above equation shows the limit to the total info in and out for a channel of
>> a given bandwidth and S/N. The challenge of the engineer is to design the
>> encode/decode and mod/demod functions so as to achieve this limit.
>>
>> info in -> encoding -> modulation -> channel -> demod -> decode -> info out
>>
>> Again, in my previous posts I mis-stated how "parity" bits are considered in the
>> above equation/system.  I apologize for any confusion.
>>
>> Good Day to all.
>>
>> Nemo


Article: 31056
Subject: Re: Shannon Capacity
From: nemo@dtgnet.com (Nemo)
Date: Thu, 10 May 2001 12:42:44 GMT
Hi - please read my comments in the newer thread "Shannon Capacity - An
Apology".  I think we agree.

Nemo

Muzaffer Kal <muzaffer@dspia.com> wrote:

>On Mon, 07 May 2001 18:46:20 -0700, Vikram Pasham
><Vikram.Pasham@xilinx.com> wrote:
>
>>Berni & all,
>>
>>One of Shannon's paper "A Mathematical Theory of Communication" can be found on
>>the web at
>>http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
>>
>>As per my understanding, "C" in Shannon's equation includes information +
>>parity.
>
>finally I read this paper and I'd like to submit my 2¢. Please look
>at figure 8. It talks about a transmitter, channel, receiver and
>correction data. Theorem 10 talks about the capacity of a correction
>channel. The idea is if the correction channel has capacity of Hy(x)
>then all the capacity of the normal channel (C) can be utilized
>because the correction channel allows all errors to be corrected. So on
>page 22, the channel capacity is defined as C = max(H(x) - Hy(x)). In
>other words, if the correction channel is used the full capacity of
>the original channel is utilized. If correction data is transmitted in
>the channel itself, the capacity of the channel drops so the capacity
>of a real channel doesn't include the correction data.
>Comments please.
>
>Muzaffer
>
>FPGA DSP Consulting
>http://www.dspia.com


Article: 31057
Subject: Re: Shannon Capacity - An Apology
From: nemo@dtgnet.com (Nemo)
Date: Thu, 10 May 2001 12:54:53 GMT
Austin, please read my response to Davis' post above carefully.  Stick to the
question of what C represents in the equation being discussed, and follow
the example of 200 bps put into the channel and only getting out 100 bps of
*error-free* bits due to added noise.  I agree, all bits (info, parity, whatever)
are carried on the channel, but C tells us how many *useful* bits we should be
able to get out the other end of the channel for a given bandwidth and noise
level.  You see - no matter how many parity bits we add to the info on the input
side, the equation tells us we should be able to get C bps of error-free useful
bits out the other end.  I am not sure we are disagreeing.  Yes, the equation
does not distinguish one kind of bit from another, and in that sense, the
analysis does include parity, but in another sense, the equation tells us with
good channel coding and modulation techniques, C bits of *useful* information
should be obtainable *out* of this channel.  In that sense, the value C does
*not* include parity.   

Austin Lesea <austin.lesea@xilinx.com> wrote:

>Davis,
>
>They are not listening.  I should not have fallen back in again.  You clearly
>understand the definition of a channel.  They would prefer you to 'vote', or to say
>who is 'right' or 'wrong' than discuss the issues.
>
>Shannon drew Venn diagrams for his proofs ("Communication in the Presence of Noise,"
>1940), and was challenged because the proof was not "mathematical" enough.  I liked
>the geometrical proofs as they were obvious.  Of course, that troubled the
>mathematicians who wanted to have a lock on the knowledge, and did not like this idea
>that anything "that complex" could be stated and then proved so elegantly.
>
>Austin
>
>Davis Moore wrote:
>
>> I disagree with your statements. I believe the answer lies clearly in the
>> equation itself.
>>
>> C = W*Log2(1-S/N) bps
>>
>> Note the units of the information rate is bits per second. I have
>> no idea how you can say that parity bits are factored out of
>> this equation. Are the parity bits not transmitted through the same bandwidth
>> limited channel as the bits that you are calling *information bits*?
>
>Hear hear!
>
>>
>> Do they not take the same amount of time to be transmitted
>> through the channel? Are the parity bits not required to have
>> the same average signal power?
>
>Agreed!
>
>> Are the parity bits immune
>> to perturbations in the channel (noise power)?
>
>Nope!  They are not.
>
>>
>> It seems as though everyone is taking an abstract view of a theorem
>> that is based very strongly in physics.
>>
>> It seems that everyone is creating a convention that encoding bits are not part
>> of the information rate - fine, if you want to do that, but the transmission
>> channel is still going to be filled by a maximum number of bits per second
>> governed by the Shannon-Hartley Theorem.
>>
>> Bits per second is still bits per second. Anything you do with the information
>> after transmission is your interpretation of the information and is not
>> covered by the theorem.
>>
>> A meaningful information data rate can be derived as
>> Cmeaningful = Ctotal - Cencoding.
>
>More appropriately, C_onlywhatIreallywanted = Ctotal - Cencoding
>
>I would argue that Cmeaningful = Ctotal
>
>Otherwise I can't get Cmeaningful without also having Cencoding! QED, Shannon.
>
>>
>>
>> Nemo wrote:
>>
>> > I apologize to all those posting/reading the recent Shannon capacity thread.  I
>> > was 100% wrong in my statements about parity bits.  Here is some background...
>> >
>> > There has been an on-going thread in this news group about Shannon capacity.
>> > The original question was about the following equation:
>> >
>> > C = W*Log2(1-S/N) bps
>> >
> > Does the calculated capacity C include parity bits of a coded channel?
>> >
>> > I now believe the answer is *no, it does not*.   Throughout Shannon's paper, C
>> > refers to the *information rate only*.   The above equation shows the
>> > relationship between the amount of error-free *information* throughput,
>> > bandwidth and S/N.   The information throughput does *not* include the added
>> > bits due to whatever coding scheme is chosen.  In the following typical system,
>> > the above equation shows the limit to the total info in and out for a channel of
>> > a given bandwidth and S/N. The challenge of the engineer is to design the
>> > encode/decode and mod/demod functions so as to achieve this limit.
>> >
>> > info in -> encoding -> modulation -> channel -> demod -> decode -> info out
>> >
>> > Again, in my previous posts I mis-stated how "parity" bits are considered in the
>> > above equation/system.  I apologize for any confusion.
>> >
>> > Good Day to all.
>> >
>> > Nemo


Article: 31058
Subject: Re: Shannon Capacity - An Apology
From: Brian Drummond <brian@shapes.demon.co.uk>
Date: Thu, 10 May 2001 14:25:41 +0100
On Wed, 09 May 2001 15:39:55 -0600, Davis Moore <dmoore@nospamieee.org>
wrote:

>I disagree with your statements. I believe the answer lies clearly in the
>equation itself.
>
>C = W*Log2(1-S/N) bps

This is an interesting equation since it has the channel capacity
DECREASING as S/N increases (for small S/N), which is intuitively absurd.
Anyone care to explain?

I can't find it in Shannon's paper in this form, however p.47 has

C = W*Log(1+S/N) 

>Note the units of the information rate is bits per second. I have
>no idea how you can say that parity bits are factored out of
>this equation. Are the parity bits not transmitted through the same bandwidth
>limited channel as the bits that you are calling *information bits*?

They are not information by definition; they are redundant. 
(Review Section 7 of Part 1 of the paper to see this)

Therefore they are not included in the channel capacity, though they
must indeed be transmitted through the channel

>It seems as though everyone is taking an abstract view of a theorem
>that is based very strongly in physics.

Agreed.

Remember that noise does not affect the bandwidth of the channel;
therefore it does not change the number of symbols you can transmit.
However it DOES corrupt a fraction of those symbols; therefore you must
reserve a fraction of those symbols for redundancy to make the
transmission reliable.

Reserve too few and comms are unreliable, because you have exceeded the
channel capacity. Reserve too many and comms are reliable; however there
aren't many symbols left to carry _information_, therefore you fall
short of the channel capacity. 

This is what the following extract from the paper is getting at...

-------------- key extract from the paper -----------------
The capacity C of a noisy channel should be the maximum possible rate of
transmission, i.e., the rate when the source is properly matched to the
channel. We therefore define the channel capacity by
C = Max(H(x) - Hy(x))
------------- and ------------------
Hy(x) is the amount of additional information that must be supplied per
second at the receiving point to correct the received message.
------------------------------------

Now although he refers to the error correction as "additional
information" that information rate is explicitly subtracted from the
channel capacity C. Because it doesn't add information to the message.

- Brian
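
As a numeric sanity check of the corrected formula (my numbers, not from
the thread): a telephone-grade channel with W = 3100 Hz and S/N = 1000
(30 dB) gives

    C = 3100 * Log2(1 + 1000) ~= 3100 * 9.97 ~= 30,900 bits/second

which is the familiar capacity ballpark for voiceband modems.  With the
minus sign, the argument of the log is zero or negative for any S/N >= 1,
so the formula yields no real capacity at all.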

Article: 31059
Subject: Re: Waveforms painting
From: Utku Ozcan <ozcan@netas.com.tr>
Date: Thu, 10 May 2001 17:33:34 +0300
Marek Ponca wrote:

> Do you know any free tool for painting digital waveforms ?
> Not for testbench generation, only for documentation.
>
> Marek

It depends on how you want to store the waveforms. PostScript? PDF?

Utku



Article: 31060
Subject: Re: Waveforms painting
From: Marek Ponca <marek.ponca@et.stud.tu-ilmenau.de>
Date: Thu, 10 May 2001 17:17:01 +0200
It doesn't matter how it will be stored; all I want
is to draw certain waveforms in some more sophisticated way.

One possibility would be to create a behavioural model of such a system,
with those waveforms...

M.


Utku Ozcan wrote:
> 
> Marek Ponca wrote:
> 
> > Do you know any free tool for painting digital waveforms ?
> > Not for testbench generation, only for documentation.
> >
> > Marek
> 
> ...depends how you store the waveforms. Postscript? PDF?
> 
> Utku

Article: 31061
Subject: Re: Waveforms painting
From: Joerg Ritter <ritter@informatik.uni-halle.de>
Date: Thu, 10 May 2001 17:46:44 +0200
Marek Ponca wrote:

> Do you know any free tool for painting digital waveforms ?
> Not for testbench generation, only for documentation.
>
> Marek

Hello Marek,

if you are familiar with LaTeX, you can use a style file named
timing.sty (see
ftp://ftp.dante.de/tex-archive/macros/latex/contrib/other/timing/).
With the help of these style files you can create waveforms with your
favorite editor and the LaTeX environment instead of painting them.

ciao
Joerg

Article: 31062
Subject: Leonardo/Modelsim/Xilinx post synthesis simulation (VHDL)
From: Darrell Gibson <nospam@newsranger.com>
Date: Thu, 10 May 2001 16:03:18 GMT
I'm experiencing some problems trying to perform functional post-synthesis
simulation of a VHDL net-list generated by Leonardo (v1999.1d build 6.60)
after targeting a Xilinx xc4000XL device.  The VHDL net-list generated by
Leonardo contains illegal VHDL characters: all references to one net are
wrapped in backslashes (the net is nx2087, and in the VHDL net-list it
appears as \_nx2087\).  As a result the net-list cannot be compiled for
simulation without removing these extra characters.  I'd like to assume
that Leonardo has identified a problem with this particular net and has
inserted the illegal characters to make the designer aware of it, but I
have scoured the transcript log and can find no reason for any problem
occurring on this net.  I've searched the Leonardo documentation but can
find no reference to this behaviour.  I have tried changing the target
device to other Xilinx FPGAs.  Using the 3000 series the problem
disappears; using Virtex I have the same problem, but it occurs on many
more nets.

Has anyone experienced this same problem?  I'd be very grateful for any help.

Thanks,

Darrell Gibson.

Darrell Gibson
Bournemouth University,
P422 Poole House,
Talbot Campus,
POOLE. (UK)
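
One possible explanation, offered as a guess rather than from the thread:
backslash-delimited names are legal VHDL-93 "extended identifiers", which
synthesis tools commonly emit for machine-generated nets.  A compiler
running in VHDL-87 mode will reject them; ModelSim, for instance, accepts
a declaration like the one below only when the file is compiled with
"vcom -93".

    -- the backslashes are part of the name (a VHDL-93 extended identifier)
    signal \_nx2087\ : std_logic;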

Article: 31063
Subject: Shannon Capacity, a quote from the paper
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Thu, 10 May 2001 09:17:19 -0700

I really am going to quit replying:

But I thought I would leave it in Shannon's own words....with a few of my own.

For that, I really do apologize:  Shannon really did a great job explaining it,
and I feel that maybe the best thing to do is read his paper, and then go build
something to prove it to yourself.  My learning experience was a 16QAM modem with
BCH coding.

From "Communication in the Presence of Noise,"  1940, published 1948.

C = W log2 (P+N)/N

.....

"This shows that the rate W log (P+N)/N measures in a sharply defined way the
capacity of the channel for transmitting information.  It is a rather surprising
result, since one would expect that reducing the frequency of errors would require
reducing the rate of transmission, and that the rate must approach zero as the
error frequency does.  Actually we can send at the rate C but reduce errors by
using a more involved encoding and longer delays at the transmitter and receiver."

(emphasis added by me)

I begin to see what is so confusing.  We have a bunch of people with only two
fingers.

The bits (C) are encoded into some kind of symbols that are sent over the channel
(tones, phases, voltages, currents, photons, wavelengths) in some kind of
information coding scheme (BCH code, Viterbi, Turbo, Hamming....).  The symbols
are recovered at the receiver, and decoded (corrected) and presented as the
received information.

The theorem does not concern itself with the actual rate of the encoded symbols in
the channel or even their format, but rather their power vs. bandwidth and the
power and bandwidth of the noise.

If one confines oneself to using a binary symbol in the channel (on off, 1-0,
etc), then for every bit in C, there must be some extra bits added for correction
in the channel to perform error correction.  But if we increase the rate of the
bits in the channel, we get more errors for a given N, keeping P constant.  Thus,
C ends up including the error correction bits because the channel C must be equal
to or greater than the information C.

Thus, we must reduce the number of bits in the channel per unit time given a
specific error correcting code to achieve some desired performance.

If we instead use arbitrarily more complex symbols (QPSK, 16QAM, OFDM ....), then
for every bit in C, there are bits added to the channel, but the rate of the
symbols may now actually be much less than that of the rate of the binary channel
(QPSK = 2 bits per symbol, 16QAM = 4 bits/symbol, etc.).  This added degree of
freedom in effect allows for many more bits in the channel to be utilized for
error correction without increasing the bandwidth W of the channel, or the power P
of the information in the channel.  C (rate in b/s) in the channel is greater than
the C (rate in b/s) of the information, allowing for the error checking and
correcting information.

Thus, if one allows for complex symbols (base > 2), one begins to take advantage
of the encoding and decoding, without affecting the rate of the symbols in the
channel.

Austin

Brian Drummond wrote:

> On Wed, 09 May 2001 15:39:55 -0600, Davis Moore <dmoore@nospamieee.org>
> wrote:
>
> >I disagree with your statements. I believe the answer lies clearly in the
> >equation itself.
> >
> >C = W*Log2(1-S/N) bps
>
> This is an interesting equation since it has the channel capacity
> DECREASING as S/N increases(for small S/N), which is intuitively absurd.
> Anyone care to explain?
>
> I can't find it in Shannon's paper in this form, however p.47 has
>
> C = W*Log(1+S/N)
>
> >Note the units of the information rate is bits per second. I have
> >no idea how you can say that parity bits are factored out of
> >this equation. Are the parity bits not transmitted through the same bandwidth
> >limited channel as the bits that you are calling *information bits*?
>
> They are not information by definition; they are redundant.
> (Review Section 7 of Part 1 of the paper to see this)
>
> Therefore they are not included in the channel capacity, though they
> must indeed be transmitted through the channel
>
> >It seems as though everyone is taking an abstract view of a theorem
> >that is based very strongly in physics.
>
> Agreed.
>
> Remember that noise does not affect the bandwidth of the channel;
> therefore it does not change the number of symbols you can transmit.
> However it DOES corrupt a fraction of those symbols; therefore you must
> reserve a fraction of those symbols for redundancy to make the
> transmission reliable.
>
> Reserve too few and comms are unreliable, because you have exceeded the
> channel capacity. Reserve too many and comms are reliable; however there
> aren't many symbols left to carry _information_, therefore you fall
> short of the channel capacity.
>
> This is what the following extract from the paper is getting at...
>
> -------------- key extract from the paper -----------------
> The capacity C of a noisy channel should be the maximum possible rate of
> transmission, i.e., the rate when the source is properly matched to the
> channel. We therefore define the channel capacity by
> C = Max(H(x) - Hy(x))
> ------------- and ------------------
> Hy(x) is the amount of additional information that must be supplied per
> second at the receiving point to correct the received message.
> ------------------------------------
>
> Now although he refers to the error correction as "additional
> information" that information rate is explicitly subtracted from the
> channel capacity C. Because it doesn't add information to the message.
>
> - Brian



Article: 31064
Subject: Re: 32 bit limit on integers
From: Hagen Ploog <hp@e-technik.uni-rostock.de>
Date: Thu, 10 May 2001 18:42:12 +0200


Alan Glynne Jones wrote:

> For the purposes of creating a random number generator in a testbench I
> want to use integers with ranges which exceed 32 bits.  Is there a way
> to do this?
>
> Alan

We built a multi-precision library to handle numbers up to 1024 bits and
more (about 310 decimal digits).


Hagen



Article: 31065
Subject: Re: Spartan Annoyances
From: muzaffer@dspia.com
Date: Thu, 10 May 2001 19:22:40 +0100
John  Larkin <jjlarkin@highlandSNIPTHIStechnology.com> wrote:

>Hi,
>
>we've just shipped a new product that uses 5 Spartan-XL chips, and had
>a fair amount of grief getting things up. We're using the basic
>Foundation schematic-entry stuff, with service packs duly installed.
>
>The worst thing we found is that, if we connect a signal to a pin and
>don't reference it on the schematic, the pin will sometimes become an
>output and do weird things. In one case, we ran the VME signal AS*
>(address strobe) into the chip and decided later we didn't need it, so
>ignored it. It then began hanging the VME bus by pulling AS* low, but
>only when certain VME addresses were used!

The problem is that the pin's output gets connected to an internally
generated net, so you are observing the state of some internal logic.
The solution is to muck with the EDIF (or similar) netlist and force the
pin to an input manually.  I have seen this problem too; it is annoying
but solvable.  By using the AND gate you are getting the tool to set the
pin up as an input.



Article: 31066
Subject: Finally, an FPGA tool chain for Linux (Altera Quartus II)
From: Eric Smith <eric-no-spam-for-me@brouhaha.com>
Date: 10 May 2001 11:27:34 -0700
Altera has announced a port of Quartus II (including MAX+PLUS II)
to Linux!

  http://www.altera.com/corporate/press_box/releases/pr-linux_quartus.html
  http://www.businesswire.com/cgi-bin/f_headline.cgi?bw.050701/211270104

At last, one of the programmable logic vendors gets it.  They say "Linux
has enjoyed dramatic success over the last several years as a platform
for a variety of EDA point tools, such as simulation, because of the low
cost per compute cycle."  An interesting contrast to Xilinx's claims
that there is no customer demand for Linux.

The availability is a bit confusing; the press release says "Altera
[...] today announced plans to port the Quartus (R) II design software
to Linux-based environments".  Later it says "The Quartus II version 1.0
software is available today", by which they presumably mean a non-Linux
version?

On 29-Jun-2000, a Cypress employee told me that they were working on
Linux support for a future release of their Warp software, which they
expected to have developed within a year.  It will be interesting to see
whether they follow through.

Do any other programmable logic vendors support Linux?


Article: 31067
Subject: Re: Finally, an FPGA tool chain for Linux (Altera Quartus II)
From: Kolja Sulimma <kolja@prowokulta.org>
Date: Thu, 10 May 2001 21:17:07 +0200
I am not familiar with the Altera tools, but I am wondering why Xilinx has
no ambitions to port their software.  It should be quite easy.

Coregen, for example, should work under Linux right away.
The back-end tools probably use nothing more than stdio and could be
ported within a couple of hours.
The GUI front end has been completely rewritten recently, and with the
right GUI toolkit (Java?) multi-platform support should not have been
much of a problem.
FPGA Express would maybe be more complicated, but that's what XST is for,
isn't it?

On the other hand, large parts of the tools have not even been fully
ported to Windows 95 yet: no long file names, and the back-end tools run
in a 16-bit environment.

Also: did anybody try to run Foundation in an NT multi-user environment?
It wants write permission for everybody on a couple of files and
directories, and tool preferences are kept globally, not in a user
directory, as is common with almost every UNIX tool.

Kolja Sulimma

Eric Smith wrote:

> Altera has announced a port of Quartus II (including MAX+PLUS II)
> to Linux!
>
>   http://www.altera.com/corporate/press_box/releases/pr-linux_quartus.html
>   http://www.businesswire.com/cgi-bin/f_headline.cgi?bw.050701/211270104
>
> At last, one of the programmable logic vendors gets it.  They say "Linux
> has enjoyed dramatic success over the last several years as a platform
> for a variety of EDA point tools, such as simulation, because of the low
> cost per compute cycle."  An interesting contrast from Xilinx' claims
> that there is no customer demand for Linux.
>
> The availability is a bit confusing; the press release says "Altera
> [...] today announced plans to port the Quartus (R) II design software
> to Linux-based environments".  Later it says "The Quartus II version 1.0
> software is available today", by which they presumably mean a non-Linux
> version?
>
> On 29-Jun-2000, a Cypress employee told me that they were working on
> Linux support for a future release of their Warp software, which they
> expected to have developed within a year.  It will be interesting to see
> whether they follow through.
>
> Do any other programmable logic vendors support Linux?


Article: 31068
Subject: Re: Finally, an FPGA tool chain for Linux (Altera Quartus II)
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Thu, 10 May 2001 21:01:32 +0100


Eric Smith wrote:
> 
> Altera has announced a port of Quartus II (including MAX+PLUS II)
> to Linux!
> 
>   http://www.altera.com/corporate/press_box/releases/pr-linux_quartus.html
>   http://www.businesswire.com/cgi-bin/f_headline.cgi?bw.050701/211270104
> 
> At last, one of the programmable logic vendors gets it.  They say "Linux
> has enjoyed dramatic success over the last several years as a platform
> for a variety of EDA point tools, such as simulation, because of the low
> cost per compute cycle."  An interesting contrast from Xilinx' claims
> that there is no customer demand for Linux.
> 
> 

I wonder if Xilinx's attitude stems from these 2 (my opinion only)
equivalences:

(1) Linux user <=> serious user

(2) GUI user <=> !serious user


From which we can derive

(3) Linux user <=> !GUI user

Given that VMWare provides a way of using the command line tools under
Linux then 
(3) => Xilinx doesn't have to do a Linux port.


Question: do the Quartus & Warp tools run from the command line as well as
the Xilinx ones do?  Are the command-line interfaces as well documented?

<snip>

Article: 31069
Subject: Re: Shannon Capacity - An Apology
From: Bertram Geiger <bgeiger@aon.at>
Date: Thu, 10 May 2001 22:21:04 +0200
Brian Drummond wrote:
> 
> On Wed, 09 May 2001 15:39:55 -0600, Davis Moore <dmoore@nospamieee.org>
> wrote:
> 
> >I disagree with your statements. I believe the answer lies clearly in the
> >equation itself.
> >
> >C = W*Log2(1-S/N) bps
> 
> This is an interesting equation since it has the channel capacity
> DECREASING as S/N increases(for small S/N), which is intuitively absurd.
> Anyone care to explain?


Must be:  C = W*Log2(1+S/N) bps
That's what I learned and also found in Shannon's paper;
otherwise there would be the log of a negative number, or did I miss
something?

bertram
-- 
Bertram Geiger,  bgeiger@aon.at
HTL Bulme Graz-Goesting - AUSTRIA

Article: 31070
Subject: Re: Shannon Capacity - An Apology
From: nemo@dtgnet.com (Nemo)
Date: Thu, 10 May 2001 20:43:03 GMT
Bertram Geiger <bgeiger@aon.at> wrote:

>Brian Drummond schrieb:
>> 
>> On Wed, 09 May 2001 15:39:55 -0600, Davis Moore <dmoore@nospamieee.org>
>> wrote:
>> 
>> >I disagree with your statements. I believe the answer lies clearly in the
>> >equation itself.
>> >
>> >C = W*Log2(1-S/N) bps
>> 
>> This is an interesting equation since it has the channel capacity
>> DECREASING as S/N increases(for small S/N), which is intuitively absurd.
>> Anyone care to explain?
>
>
>Must be:  C = W*Log2(1+S/N) bps
>Thats what i learned and also found in Shannons paper
>otherwise there would be Log of a negative number, or did i miss
>something ?
>
>bertram

You and Brian are correct, it should be a plus sign.

Nemo

Article: 31071
Subject: Reading Data on Parallel Port
From: vikram m n rao <vmrao@students.uiuc.edu>
Date: Thu, 10 May 2001 17:16:03 -0500

I am using an XESS XS40 prototyping board, and I have a Verilog design
that, when synthesized, stores a 9-bit value in a register which is
updated at certain points in time.

Here's the problem: I need to get the data from this register into my PC
through the parallel port. I don't have much experience with this, so I'm
looking for code examples of how to read the data in on the PC, and also
how I would go about sending the data out since, according to the docs, I
can only use the 5 status pins of the parallel port to send data from the
FPGA to the PC, but I have 9-bit addresses I need to send. I'm assuming
I'll have to shift the data out bit by bit and also generate a clock
signal on one of those 5 status pins, but I'm vague on the actual
implementation.

I'm sure many of you have either used these boards or have had to code
something similar before, so any code examples, links to code examples, or
general advice would be greatly appreciated. I got one response
last time I posted this, but I'm looking for something more along
the lines of code samples. Thanks!

Vik
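
One possible scheme (a sketch, not from the thread, and written in VHDL
although Vik's design is in Verilog; the translation is mechanical): let
the PC generate a slow bit clock and a load flag on two of the eight data
pins (PC -> FPGA), and read the value back serially on one status pin
(FPGA -> PC).  Pin assignments are placeholders, not taken from the XESS
documentation.

    library ieee;
    use ieee.std_logic_1164.all;

    entity ser9 is
      port (
        pc_bitclk : in  std_logic;                     -- from a data pin
        pc_load   : in  std_logic;                     -- from a data pin
        value     : in  std_logic_vector(8 downto 0);  -- the 9-bit register
        ser_out   : out std_logic                      -- to a status pin
      );
    end ser9;

    architecture rtl of ser9 is
      signal shift : std_logic_vector(8 downto 0);
    begin
      ser_out <= shift(8);  -- MSB first

      process (pc_bitclk)
      begin
        if rising_edge(pc_bitclk) then
          if pc_load = '1' then    -- PC holds load high for one edge,
            shift <= value;        -- capturing the 9-bit value
          else                     -- then clocks out one bit per edge
            shift <= shift(7 downto 0) & '0';
          end if;
        end if;
      end process;
    end rtl;

On the PC side, toggle the bit-clock bit in the parallel-port data
register (base address, typically 0x378 for LPT1) and read ser_out back
through one of the status-register bits (base+1, bits 3-7; bit 7, BUSY,
reads back inverted).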


Article: 31072
Subject: fpga tutorials
From: u687591552@spawnkill.ip-mobilphone.net
Date: Thu, 10 May 2001 22:29:37 GMT
I am a software engineer and know little about FPGAs.  There are quite a
few tutorials on the web, and I am looking for a good one suitable for my
background.  Does anyone have some recommendations?
I'm sorry if I posted this to the wrong place.

GT 
 




Article: 31073
Subject: Synplicity online support problem
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Fri, 11 May 2001 00:18:52 +0100

I'm having trouble logging on to the Synplicity online support service
(SOS).  Has anyone else had any problems with this?

Article: 31074
Subject: Re: Finally, an FPGA tool chain for Linux (Altera Quartus II)
From: Eric Smith <eric-no-spam-for-me@brouhaha.com>
Date: 10 May 2001 17:00:03 -0700
Rick Filipkiewicz <rick@algor.co.uk> writes:
> Given that VMWare provides a way of using the command line tools under
> Linux then 
> (3) => Xilinx doesn't have to do a Linux port.

I use VMware.  But running Windows software under VMware is really
annoying compared to running native Linux software.

VMware supports more than just the command line tools.  Since it runs
real Windows, it runs the GUI stuff too.

However, I'd much prefer to completely remove Windows from my system.
Right now the Xilinx software is the ONLY reason I have Windows installed.
I haven't looked at Altera's FPGAs in a few years, but if their parts will
do what I want, having native tools for Linux may well make me switch.

The Xilinx Windows GUI wrapper is nice in some ways, but I'd trade it
for command-line-only tools under Linux without any hesitation whatsoever.

But if Xilinx ever does decide to support Linux, there are Windows GUI
compatibility libraries they could use to port the GUI.  My personal
preference would be to get the command line tools (only) in a first Linux
release, rather than holding up a Linux release until the GUI is ported.

However, given how hostile to Linux Xilinx seems to be, I'm not holding
my breath.


