
Messages from 2025

Article: 2025
Subject: Re: Xilinx security
From: "Steven K. Knapp, Xilinx, Inc." <stevek>
Date: 3 Oct 1995 18:31:30 GMT
Links: << >>  << T >>  << A >>
davem@hbmltd.demon.co.uk wrote:
>

>> EPLDs:  Use a device with _multiple_ security bits.  Most of the devices on the
>
>But if you can get the bit-stream directly from the serial EEPROM or monitor 
>CLK and DIN, the security bits don't make any difference.
>
Sorry if there is any confusion from my previous message.  The XC1700 Serial
Configuration PROMs do not contain a security bit, so your statement is correct
for them.  I was referring to the XC7300 EPLDs, which are programmable
logic devices with internal EPROM.  EPLDs boot from internal EPROM and can
consequently be protected by a security bit.  Multiple security bits mean
extra security.

-- Steve Knapp
   Corporate Applications Manager
   Xilinx, Inc.



Article: 2026
Subject: Xact to VST
From: slee@hpl.hp.com (Steven Lee)
Date: Tue, 3 Oct 1995 18:55:47 GMT
Links: << >>  << T >>  << A >>
HI David,
	Your e-mail address was truncated so I 
	couldn't reply to you.  Send me e-mail
	directly and I can answer your questions
	about VST & Xact.

	steve



Article: 2027
Subject: Generic use of Serial Configuration EPROMs
From: gerry@marl.research.panasonic.com (Gerald T. Caracciolo)
Date: Tue, 3 Oct 1995 20:03:55 GMT
Links: << >>  << T >>  << A >>
Hi,

I have a need to initialize some fast RAMS with a lookup table
for a design based on Altera Flex 8000's.  I wanted to do this via an
EPROM that would get downloaded into the RAMS upon a system reset, with
logic implemented on the FPGA.  I am quickly running out of I/O pins and
thought that I could use one of the serial EPROMs used for FPGA configuration,
since one would use up only four pins of the FPGA instead of the 24
(address + data) I would need for a standard parallel EPROM.

Has anyone done anything like this before?  Any guidelines, gotchas, etc.?
I'm relatively new to designing with these things (FPGAs and serial EPROMs).
Also, other than the serial EPROM size, is there any real difference in
operation between, say, one from Xilinx and one from Altera?

Thanks in advance.

Gerry Caracciolo
-- 
Gerald T. Caracciolo                Phone: (609) 386-5995  Ext: 107
Matsushita Applied Research Labs.   Fax: (609) 386-4999
95D Conneticut Drive 	            e-mail: gerry@marl.research.panasonic.com 
Burlington, NJ 08016


Article: 2028
Subject: Re: cheap (free) fpga design software (VHDL
From: kugel@mp-sun6.informatik.uni-mannheim.de (Andreas Kugel)
Date: 4 Oct 1995 06:58:11 GMT
Links: << >>  << T >>  << A >>
In article e5b@newsgate.sps.mot.com, jeffh@oakhill-csic.sps.mot.com (Jeff Hunsinger) writes:
>VHDL doesn't perform the synthesis, the synthesis tool does. The language used to
>describe your circuit (VHDL, Verilog, etc) is a separate issue. I agree, however,
>that using a free tool can give more headaches than it's worth. If you can't
>afford the mainstream tools, you'll have a tough time doing any "real" designs.
>However, there are some tools that are a bit less expensive. $1000 seems to be the
>minimum.
>
>Jeff
> 
This might imply that mainstream tools keep the headaches away. They DON'T!
In our experience the current tools are not capable of synthesizing a design
from a purely functional description into a fast-running FPGA design.
(Fact: a 9-bit counter runs no faster than 20 MHz in a 4005-5.)
It seems that most of the work is still in building libraries (manually!)
and then using a structural description built from commercial and private components.

This is not what I would pay real money for.

So give the free/cheap tools a chance - they may be no worse than the expensive ones.


---


--------------------------------------------------------
Andreas Kugel                
Chair of Computer Science V       Phone:(49)621-292-5755
University of Mannheim            Fax:(49)621-292-5756
A5
D-68131 Mannheim
Germany
e-mail:kugel@mp-sun1.informatik.uni-mannheim.de
--------------------------------------------------------



Article: 2029
Subject: Memory Protection Fault
From: an222663@anon.penet.fi (cha)
Date: Wed, 4 Oct 1995 10:59:23 UTC
Links: << >>  << T >>  << A >>



Hello,

	a few days ago XACT 5.1 reported that a Memory Protection Fault had
	occurred.  I solved the problem by changing the design (one element).
	However, the same problem has occurred again today, this time with
	Xsimmake (vff) while check.exe was running (the first time it was
	Xmake that failed).

	I use a PC with QEMM, and everybody says I must change my memory
	configuration (you know Phar Lap is not compatible with every memory
	manager).  However, I don't know what to change in it.  Has anyone
	dealt with this before?  Can you help me?

			     Please, HELP ME!!!!!!!!!!

				       Thanks in advance,

					     Fernando.

        Please, do not send me stupid answers. I could reply ........ 



--****ATTENTION****--****ATTENTION****--****ATTENTION****--***ATTENTION***
Your e-mail reply to this message WILL be *automatically* ANONYMIZED.
Please, report inappropriate use to                abuse@anon.penet.fi
For information (incl. non-anon reply) write to    help@anon.penet.fi
If you have any problems, address them to          admin@anon.penet.fi


Article: 2030
Subject: Re: REPOST: Design Contest Write-up ( was "Jury Verdict + Test Benches" )
From: ekatjr@eua.ericsson.se (Robert Tjarnstrom)
Date: 4 Oct 1995 13:49:13 GMT
Links: << >>  << T >>  << A >>
In article <44coqq$100@news.Belgium.EU.net>, Jan Decaluwe <jand> writes:
>
>Let's be specific (as this is hopefully not just a coffee shop discussion). 
>The original topic of this thread was "abstract" typing, i.e. using integers, 
>booleans and enum types instead of std_logic types. This is one of the methods 
>to raise the abstraction level, with *tremendous* advantages in terms of code 
>clarity, maintainability and reusability. On the other hand, using these types 
>has no disadvantage in terms of RTL synthesis results (Synopsys DC). 
>If something has only advantages and no disadvantages, one should obviously do it.
>

Consider the plot below


(clock freq scaled according to microprocessor clock freq)

    ^
1   | --------------------------------
    |       *               
    |             *           
    |                  *       
0.2 |                       *
    |_________________________________ >  (year)
     80                     95

where - represents microprocessors and * represents ASIC designs.

Currently microprocessors perform about 5 times better than ASIC design, even
though they use comparable manufacturing technology.  What is this poor process
utilization due to?

Will a move to "higher abstraction levels" increase or decrease the gap?

When will more and more system designers think that microprocessors outperform
HW design and stop ASIC development?


Robert Tjarnstrom
(Curious about your answers)





Article: 2031
Subject: Re: REPOST: Design Contest Write-up ( was "Jury Verdict + Test Benches" )
From: hutch@timp.ee.byu.edu (Brad Hutchings)
Date: 04 Oct 1995 15:30:17 GMT
Links: << >>  << T >>  << A >>
>>>>> "Robert" == Robert Tjarnstrom <ekatjr@eua.ericsson.se> writes:
In article <44u3cp$94s@euas20.eua.ericsson.se> ekatjr@eua.ericsson.se (Robert Tjarnstrom) writes:
    Robert> (clock freq scaled according to microprocessor clock freq)

<picture deleted>

    Robert> where - represents microprocessors and * represents ASIC
    Robert> designs.

    Robert> Currently microprocessors perform about 5 times better
    Robert> than ASIC design, although using comparable manufacturing
    Robert> technology. What is the poor process utilization due to?

This is probably a bit off of the original topic but I think that
it is misleading to look only at the *clock rates* of microprocessors
and ASICs. An ASIC may have a much lower clock rate than a microprocessor
but, due to specialization, concurrency, etc., may achieve a much higher
throughput rate than the microprocessor. FPGA-based systems that
we build around here typically outperform microprocessor systems (5x - 10x)
yet run at only approximately 1/10 to 1/5 of the microprocessor's clock
rate. Clock rate is only part of the story...









--
            Brad L. Hutchings - (801) 378-2667 - hutch@ee.byu.edu 
Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602
                       Reconfigurable Logic Laboratory



Article: 2032
Subject: Re: Generic use of Serial Configuration EPROMs
From: peter@xilinx.com (Peter Alfke)
Date: 4 Oct 1995 16:35:14 GMT
Links: << >>  << T >>  << A >>
In article <1995Oct3.200355.4487@MITL.Research.Panasonic.COM>,
gerry@marl.research.panasonic.com (Gerald T. Caracciolo) wrote:

> Hi,
> 
> I have a need to initialize some fast RAMS with a lookup table
> for a design based on Altera Flex 8000's.  I wanted to do this via an
> EPROM that would get downloaded into the RAMS upon a system reset, with
> logic implemented on the FPGA. ....

You may want to read the 5-page functional description for the Xilinx
XC17000 devices on pages 2-231 through 235 of the Xilinx Data Book,
especially Table 1 on page 2-235. It explains the control inputs pretty
well.
You need only 3 pins: clock and reset driving the SPROM, and DATA as an
output from the SPROM. Plus Power and Ground. And ABSOLUTELY connect the
Vpp pin to 5 V, otherwise you get temperature-dependent errors.
The SPROM has its own power-on reset, but it is a good idea to provide
your own reset signal before you read serial data. Note that the polarity
of reset is programmable. 
These devices are no speed demons: some work only up to 5 MHz, others up
to 10 MHz; see the clock-to-out value in the AC tables. You may want to
use a slew-rate-limited clock, if your programmable logic device allows
you that option.

The only "gotchas" reported to me had to do with power-on reset problems
with non-Xilinx SPROMs, which is why I recommend the explicit reset before
starting to read the contents.
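
If it helps to see the FPGA side spelled out, here is a minimal sketch of the
kind of loader logic involved.  Everything below is illustrative only: the
entity and signal names are made up, the MSB-first bit order and the
one-clock reset-to-data assumption must be checked against the SPROM data
book, and the address counter is sized for a 2K x 8 table.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sprom_loader is
  generic (
    WORDS : natural := 2048                          -- 8-bit words to copy into the RAM
  );
  port (
    clk         : in  std_logic;                     -- slow clock, also fed to the SPROM CLK pin
    start       : in  std_logic;                     -- pulse once after system reset
    sprom_reset : out std_logic;                     -- to the SPROM RESET pin (polarity is programmable)
    sprom_data  : in  std_logic;                     -- from the SPROM DATA output
    ram_addr    : out std_logic_vector(10 downto 0); -- sized for the 2K-word default
    ram_data    : out std_logic_vector(7 downto 0);
    ram_we_n    : out std_logic;                     -- active-low write strobe, one clock period wide
    done        : out std_logic
  );
end entity;

architecture rtl of sprom_loader is
  signal loading  : std_logic := '0';
  signal done_i   : std_logic := '0';
  signal we       : std_logic := '0';
  signal shreg    : std_logic_vector(7 downto 0);
  signal byte_reg : std_logic_vector(7 downto 0);
  signal bit_cnt  : unsigned(2 downto 0) := (others => '0');
  signal addr     : unsigned(10 downto 0) := (others => '0');
begin
  sprom_reset <= not loading;          -- hold the SPROM in reset whenever we are not reading it
  ram_addr    <= std_logic_vector(addr);
  ram_data    <= byte_reg;
  ram_we_n    <= not we;
  done        <= done_i;

  process (clk)
  begin
    if rising_edge(clk) then
      we <= '0';

      -- the cycle after a byte was captured: the write strobe is active now,
      -- so advance the address (or stop) at the end of it
      if we = '1' then
        if addr = WORDS - 1 then
          loading <= '0';
          done_i  <= '1';
        else
          addr <= addr + 1;
        end if;
      end if;

      if start = '1' and loading = '0' and done_i = '0' then
        loading <= '1';                               -- releases the SPROM from reset
        bit_cnt <= (others => '0');
        addr    <= (others => '0');
      elsif loading = '1' then
        shreg   <= shreg(6 downto 0) & sprom_data;    -- one bit per SPROM clock, assumed MSB first
        bit_cnt <= bit_cnt + 1;
        if bit_cnt = 7 then                           -- eighth bit just arrived
          byte_reg <= shreg(6 downto 0) & sprom_data;
          we       <= '1';                            -- strobe the RAM during the next cycle
        end if;
      end if;
    end if;
  end process;
end architecture;

The point is simply that the whole interface is a shift register, a bit
counter, and a word counter, so it costs very little logic next to the 24-odd
pins a parallel EPROM would have needed.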

Peter Alfke, Xilinx Applications


Article: 2033
Subject: Prohibit Local lines during routing ?
From: gbhullar@doe.carleton.ca (Gupreet S. Bhullar)
Date: Wed, 4 Oct 1995 20:14:43 GMT
Links: << >>  << T >>  << A >>
Hi,
	Is there any way to prevent a segment from being used during routing,
analogous to, for example, "prohibit site" in NeoCAD's EPIC?  This is needed in
case the segment has been found faulty during testing and the circuit is to
be reconfigured around it.  I know for sure this feature is not available in EPIC
or in Xilinx XACT; if anybody knows of this feature being supported
by some other software, please let me know.
	Also, is there any public-domain (experimental) software
for routing and/or placing in XC???? devices which can provide hooks
for subroutines adding extra features?
	I would appreciate any information in this regard.  Thanks in
advance.
Best regards,
Gurpreet.
 



Article: 2034
Subject: !!! Trolling For User Oriented DAC Ideas !!!
From: jcooley@world.std.com (John Cooley)
Date: Wed, 4 Oct 1995 20:37:08 GMT
Links: << >>  << T >>  << A >>

Last week quite a few of the USE/DA board members got together in
San Jose, CA and kicked around some proposals for panels and tutorials
we're interested in sponsoring at the upcoming DAC.  Right now we
have some ideas we hope users will find helpful but, to be sure,
we'd like to ask: "What do you think would make for an interesting
user-oriented panel/tutorial at DAC?"

Time is tight!  (The DAC proposal deadline is Friday, Oct. 6th!)
Please send your ideas to "jcooley@world.std.com" ASAP!   Thanks for
your time and help!  :^)

                - John Cooley, President
                  Users Society for Electronic Design Automation (USE/DA)

-----------------------------------------------------------------------------
  __))  "Glass ceilings? Name ANY goat farmer who's made it into management!"
 /_ oo  
  (_ \   Holliston Poor Farm                                   - John Cooley
%//  \"  Holliston, MA 01746-6222              part time Sheep & Goat Farmer
%%%  $   jcooley@world.std.com       full time contract ASIC & FPGA Designer



Article: 2035
Subject: Consultant Needed
From: rdeux@team-usa.com
Date: 4 Oct 1995 23:14:15 GMT
Links: << >>  << T >>  << A >>
I have a client who is in need of a consultant experienced in FPGA design with
VIEWlogic design tools.  The assignment is about 4 weeks in duration and will
require you to be on site.  The location is Petaluma, CA.  Should you be
interested, or know of anyone who is, please contact me ASAP.  Remuneration is
negotiable.

Raymond Deux
TEAM Corporation
(408) 437-1313, x277
rdeux@team-usa.com


Article: 2036
Subject: >Received: by digitech.co.nz (UUPC/extended 1.12o);
From: jma@gotham.super.org
Date: Thu, 5 Oct 1995 01:43:07 GMT
Links: << >>  << T >>  << A >>

Hi Folks,

When we recently needed to look into battery backup etc., I searched through
lots of old Xilinx manuals, Best of XCELL, app notes, etc., to glean more
info.  Some of the old writings were quite open about how things were
done and what was going on.  Not so much market-speak, eh.  Anyway,
this is what I concluded:

Xilinx once said in the XC2000 & XC3000 days that their powerdown
current was naff all.  The function generators still work, so if it
isn't naff all, then make sure the powerdown state of the IOBs etc. is
not causing contention or oscillation within the device.  If you look
at the way the IOBs and CLB flipflops appear to behave to the
configured logic, it is all quite nicely done.  They must have given
quite a bit of thought to this in the early days.

They also said their configuration RAM was very reliable vs. typical
SRAM (cross-coupled inverters, 5K, vs. resistor pullup, 5M).

They state the XC3100 (and XC4000?) are not suitable for battery-backed,
very-low-current applications.  Our tests have confirmed this.  The
powered-down current varies with temperature and probably from device to
device.  Have they changed their configuration RAM implementation to
one that uses a resistive pullup in order to shrink the die in newer
parts?  If so, what type of config RAM does the XC5000 use?

Michael



Article: 2037
Subject: Need Some Test Data
From: NAM MIN WOO <esfree>
Date: 5 Oct 1995 10:04:30 GMT
Links: << >>  << T >>  << A >>
I am developing a partitioning program.
It works on real design circuits, so I need a lot of test data.
If anybody has a design written in VHDL, please allow me access to it for testing.
The larger the design, the better.
Please help me.

e-mail : part95@dalab2.sogang.ac.kr



Article: 2038
Subject: Need Some Test Data
From: NAM MIN WOO <esfree>
Date: 5 Oct 1995 10:05:43 GMT
Links: << >>  << T >>  << A >>
I am developing a partitioning program.
It works on real design circuits, so I need a lot of test data.
If anybody has a design written in VHDL, please allow me access to it for testing.
The larger the design, the better.
Please help me.

e-mail : part95@dalab2.sogang.ac.kr



Article: 2039
Subject: Re: NEW person
From: kgold@watson.ibm.com (Ken Goldman)
Date: 5 Oct 1995 14:09:45 GMT
Links: << >>  << T >>  << A >>
wirthlim@fpga.ee.byu.edu (Michael J. Wirthlin) writes:
> 
> Terry Bailey <INTERNET@SIU.EDU> writes:
>> I just got through with a design that has entirely toooooo many chips on 
>> it and want to investigate FPGA's and PLD's.  Can someone tell me what 
>> the difference is and why I would use one over the other to put a lot of 
>> my and, or, flipflops, counters into??  Thanks  
> 
> Although the distinction between PLDs and FPGAs is becoming more unclear,
> there are a number of general differences between the two:
> 
> PLD's:
>    - have high fan-in capability
>    - more predictable routing
> 
> FPGA's:
>    - usually register rich
>    - lower fan-in than PLD's
>    - provide rich, yet complex routing
> 
> The general rule-of-thumb I have heard is that PLDs are better suited 
> for complex control and FPGAs are better suited for datapath applications. 
> -- 

Agreed.

But if we're going to be candid, I'll bet that a major selection
criterion is "we've used a ____ before, we're familiar with
it, and we've already invested in the tools and training."

Unless you're really pushing the limits of cost, size, or performance,
either is suitable for sucking up random logic. IMHO.


Article: 2040
Subject: Re: Memory Protection Fault
From: "Steven K. Knapp, Xilinx, Inc." <stevek>
Date: 5 Oct 1995 15:26:18 GMT
Links: << >>  << T >>  << A >>
For Xilinx-related technical support questions, you can receive faster
E-mail service by contacting 'hotline@xilinx.com' directly.

These questions are answered by our application engineers at Xilinx
headquarters in California.

There is also automated support via the Xilinx webLINX Web site at

http://www.xilinx.com/support/techsupp.htm

Thank you for using Xilinx programmable logic.

-- Steve Knapp
   Corporate Applications Manager
   Xilinx, Inc.



Article: 2041
Subject: Re: REPOST: Design Contest Write-up ( was "Jury Verdict + Test Benches" )
From: seningen@Ross.COM (Mike Seningen)
Date: 5 Oct 1995 12:37:21 -0500
Links: << >>  << T >>  << A >>
One of the big problems (as far as speed goes) is that most processors are hand
laid out, or are built from highly tuned standard cells, and the optimisation of
this layout contributes directly to the speed of the devices.

ASIC layout, by its very "programmable" nature, is parasitically poor.  The
sea-of-gates, metal-programmable layout is sickening to a true custom IC designer.

This is not something to scoff at: we have spent a lot of time analyzing the
smallest details in layout to achieve another 5-10% performance out of our std
cells, just by tweaking the layout structure and parasitics.

I won't even begin to touch the full-CMOS vs. "other" advantage a truly custom
design has the potential to exploit.

cheers,

mike




Article: 2042
Subject: Re: REPOST: Design Contest Write-up ( was "Jury Verdict + Test Benches" )
From: jbuck@synopsys.com (Joe Buck)
Date: 5 Oct 1995 17:43:42 GMT
Links: << >>  << T >>  << A >>
ekatjr@eua.ericsson.se writes:
>[arbitrary graph deleted]
>Currently microprocessors perform about 5 times better than ASIC design ...

Sigh.  How do you justify such a statement?  What's your benchmark?  What
does this mean?

5 times better on all applications?  Then why aren't ASICs dead now?  Why
are commodity PCs filled with ASICs?  Why don't they consist entirely of
processors to do all functions?

Your statement is about as silly as "Currently, C++ performs 1.8 times
better than C" (or any version with 1.8 replaced by some other number).

>When will more and more system designers think that microprocessors outperform
>HW design and stop ASIC development?

Never.  Rather, ASICs of the future will increasingly include processor
cores to deal with the parts of the problem that are better done in software.

(Now, I could conceive of a world in which FPGAs got so good and dense
that *they* could cut into the market for ASICs considerably).





-- 
-- Joe Buck 	<jbuck@synopsys.com>	(not speaking for Synopsys, Inc)
Anagrams for "information superhighway":	Enormous hairy pig with fan
						A rough whimper of insanity


Article: 2043
Subject: Viewsim S/D
From: an222663@anon.penet.fi (cha)
Date: Thu, 5 Oct 1995 21:41:10 UTC
Links: << >>  << T >>  << A >>
Hello,

       I want to run a simulation covering 1 second using Viewsim S/D.  I use
       a .log file to drive the simulation, but I have a problem.  If the system
       crashes (sometimes this occurs), I think all the work is lost.  I don't
       know if it's possible to recover some of it, or if it's possible to
       save the work (part of it) using a command in the .log file.

	Does anyone know the answer?

	      Please, tell me if you know a way to do this.

			      Thanks in advance,

					 Fernando.
					 TSC Dpt. UPC
--****ATTENTION****--****ATTENTION****--****ATTENTION****--***ATTENTION***
Your e-mail reply to this message WILL be *automatically* ANONYMIZED.
Please, report inappropriate use to                abuse@anon.penet.fi
For information (incl. non-anon reply) write to    help@anon.penet.fi
If you have any problems, address them to          admin@anon.penet.fi


Article: 2044
Subject: Berkeley announces Advanced Product Development Course
From: course@garnet.berkeley.edu ()
Date: 6 Oct 1995 00:14:28 GMT
Links: << >>  << T >>  << A >>
University of California, Berkeley,
Extension announces:

"ADVANCED PRODUCT DEVELOPMENT"

A series of five 1-day coordinated
courses, October 23-27, 1995 at the
San Francisco International Airport

"WHEREAS THE BATTLEGROUND OF THE '80S
WAS QUALITY, THE COMPETITIVE BATTLEGROUND
OF THE '90S WILL BE PRODUCT DEVELOPMENT"

---Red Poling, former CEO, Ford

 
This series presents an advanced approach to
product development to help you achieve:

*Lowest possible total cost
*Ultrafast time to market
*Broad market acceptance
*Rapid and efficient customization
*Responsiveness to changing market conditions

THE COURSES (October 23-27)

1.  Advanced Product Development Management
2.  Product Definition Using QFD: Quality Function Deployment
3.  Agile Product Development for Mass Customization, JIT, 
    BTO, and Flexible Manufacturing
4.  Low-Cost Product Development
5.  Design for Manufacturability


LECTURERS

DAVID M. ANDERSON, a consultant based in
Lafayette California.  He holds a doctorate in mechanical
engineering from UC Berkeley, and has more than 21 years of
industrial experience, including 13 years of consulting work. He
is the author of two books published by the Harvard University
Press:  "Design for Manufacturability: The New Product
Development Imperative" (1990) and Mass Customizing Products" (to
be published in 1996).  His next book will be "Low Cost Product
Development." Anderson has taught New Product Development in the
Management of Technology Program of the Haas School of Business
and the University of California, Berkeley.  In addition he has
given numerous corporate product development seminars
internationally, and at many companies, including several
divisions of Hewlett-Packard. 

CHARLES A. COX, instructor for Product Definition using QFD,
is a Certified Management Consultant and Certified Quality
Engineer.  An engineer by training, he has more than 17 years of
consulting experience in the management and service sectors,
working with large and small organizations to implement customer-
driven product or service design systems and procedures.  His
latest publication is "The Executive's Handbook on Quality
Function Deployment" (John Wiley, to be published in 1996).

COMPANIES MAY BUY SEATS

The fee for each one day course is $395.  The fee for the entire
five-course series is $1,495.  There is a 10 percent discount for
three or more enrollments provided they are submitted together. 
Companies may "buy seats" for the series and send appropriate
people each day.

FURTHER INFORMATION

For a free detailed brochure describing the series send your
postal address to:

course@garnet.berkeley.edu

Specify the "Advanced Product Development Series"





Article: 2045
Subject: Re: FPGA for a 20k gates micro-controller
From: fliptron@netcom.com (Philip Freidin)
Date: Fri, 6 Oct 1995 07:13:26 GMT
Links: << >>  << T >>  << A >>
In article <44b1c1$lku@ixnews6.ix.netcom.com> jsgray@ix.netcom.com (Jan Gray) writes:
Jan answers a question by Kayvon Irani about building controllers, and
describes his data path:

>It depends upon what fraction of the gates are datapath vs. control. 
>Assuming the majority implement datapath functions, and with careful
>design, contemporary FPGAs should provide adequate capacity and perhaps
>acceptable speed.
>
>For instance, the datapath of a 32-register 32-bit pipelined RISC can
>fit nicely in the "left half" (10 of 20 columns of CLBs) of a XC4010. 
>In my (paper) design, I use ten columns of CLBs, each 16 CLBs "tall";
>((FG denotes application of F and G logic function generators, FF
>denotes application of flipflops)):
>
>1. FG (as 32x1 SRAM): reg file copy "A", even bits
>2. FG (as 32x1 SRAM): reg file copy "A", odd bits
>3. FG: multiplexor (of reg file A value or result bus forward); FF:
>latch of operand "A".
>4. FG (as 32x1 SRAM): reg file copy "B", even bits
>5. FG (as 32x1 SRAM): reg file copy "B", odd bits
>6. FG: multiplexor (of reg file B value or sign extended 16-bit
>immediate from instruction register); FF: latch of operand "B".
>7. FG: logic unit (A&B, A|B, A^B, A&~B); FF: latch of "result bus". 
>The latter "write back" value is fed to the data-in ports of the two
>copies of the register file in columns 1-2 and 4-5.
>8. FG: adder (A+B, A-B); FF: PC register
>9. FG: multiplexor (of adder and incremented PC value, feeds PC
>register and MAR (from an idea from Philip Fredin)); FF: instruction
>register
>10. FG: incrementor (of PC register); FF: memory address register
>
>The "result bus" is a 3state bus driven by tbufs at the adder, logic
>unit, MAR mux (PC), and "data in" (from RHS of the 4010) columns.  For
>sign or zero extended byte and halfword loads, another column of tbufs
>drives 0s or 1s appropriately.  In addition, shift/rotate left or right
>1, 2, or 4 bits uses 6 other columns of tbufs.
>
>(This design also uses tbufs in the right half of the chip to do byte
>and halfword extraction/alignment.  For instance, for a store-byte to
>address 0x000003, the byte of data exits the datapath on bits D[7:0]
>and is copied up to D[31:24] by tbufs.)
>
>So, even half a "10,000 gate" 4010 has adequate capacity for a
>respectable RISC datapath.
>
>As for speed, 16 M instructions/s is doable in a 4010-5.  One critical
>path in the above design is from A and B operand registers, through the
>32-bit ripple carry adder, through tbufs onto the result bus, and
>through the register forwarding multiplexor back to the A operand
>register.  That's something like 3 ns + 33 ns + 10 ns + 5 ns +  <10 ns
>routing delay = ~60 ns = 16 MHz.
>
>In straight XC4010-5's, another speed issue regards the register file
>made from SRAMs.  At 16 MHz, there is 60 ns to write back a result and
>read new operand(s).  The above design uses a glitch generator approach
>to create the SRAM write pulse, a technique Xilinx does not advocate.
>
>However, with the new 4000E series parts, Xilinx introduces synchronous
>SRAMs.  Based upon the 4000E spec sheet, the -3 parts are characterized
>with a Twcts write cycle time of 14.4 ns, so you could do a write and a
>read in 25 ns (as you require for 40 MHz) or, more straightforwardly,
>30 ns/33 MHz.  If you need more speed and fewer registers, the dual
>port configuration would permit concurrent register file write and read
>access and thus 60+ MHz operation.
>
>The 4000E also seems to have doubled the speed of the fast carry logic
>and so the above 4010-5 datapath could probably run at 30-40 MHz in a
>4010E-3.  However, I don't yet have the 4000E design software, I'm
>quoting from spec sheets, I don't know if the 4000E parts are generally
>available, etc., etc.  Your mileage may vary.
>
>Jan Gray
>Redmond, WA
>// brand new parent, software developer, electronics hobbyist

It is stunning how similar Jan's datapath is to the one that I designed
in my RISC4005 and R16 CPUs.  Mine are for a 16-bit RISC, which takes
about 150 CLBs in an XC4005.  I have not yet had the time to make changes
to suit the XC4005E, but when I do, it will certainly run faster.

My number one hot-button when talking to clients or teaching classes on
using FPGAs is that everything depends on floorplanning.  (I am slightly
biased here because most of my designs have a significant percentage of
datapath in them.)  The most valuable part of Jan's posting is that he
has detailed his datapath floorplan.  Let me now embellish it with my
experience of designing the R16.  (The critical path Jan has in his
design is the same as mine.  This is how I optimized it, and I believe
that it would also help Jan's design; it should be instructive for
others, which is why I am posting this.  I will use his column
descriptions, but re-arrange them to minimize delay paths.  The floorplan
starts on the left side in column 1.)

1. FG: incrementor (of PC register); FF: memory address register
   MAR is also duplicated in the I/O flops to the left.
2. FG: multiplexor (of adder and incremented PC value, feeds PC
   register and MAR (from an idea from Philip Fredin)); FF: PC register
3. FG (as 32x1 SRAM): reg file copy "A", even bits
4. FG (as 32x1 SRAM): reg file copy "A", odd bits
5. FG: multiplexor (of reg file A value or result bus forward); FF:
   latch of operand "A".
6. FG: adder (A+B, A-B); FF: Instruction register
7. FG: multiplexor (of reg file B value or sign extended 16-bit
   immediate from instruction register); FF: latch of operand "B".
8. FG (as 32x1 SRAM): reg file copy "B", even bits
9. FG (as 32x1 SRAM): reg file copy "B", odd bits
10. FG: logic unit (A&B, A|B, A^B, A&~B); FF: latch of "result bus". 

The main features are: the B-side register file is the mirror image of the
A-side register file, so the two sets of outputs are very near to where
they need to go for the worst-case path, which is the adder.  As you
can see, the adder is in column 6, between the two forwarding muxes and
operand registers in columns 5 and 7.  As far as I can tell, this is
an optimal placement.
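
For readers who think more easily in HDL than in floorplans, here is a
behavioral sketch of the register-file-plus-forwarding structure that both
Jan's columns and mine implement.  This is my own illustration, not Jan's
code: the names and widths are made up, and the synchronous read shown here
models the operand registers (in the plain 4000 parts the LUT-RAM read itself
is asynchronous and only the operand latch is clocked).

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity regfile_fwd is
  generic (
    WIDTH : integer := 32;                    -- 16 in the R16, 32 in Jan's design
    DEPTH : integer := 32                     -- the 5-bit addresses below assume DEPTH = 32
  );
  port (
    clk     : in  std_logic;
    -- write-back (the "result bus")
    wb_en   : in  std_logic;
    wb_addr : in  unsigned(4 downto 0);
    wb_data : in  std_logic_vector(WIDTH-1 downto 0);
    -- read addresses from the instruction register
    ra_addr : in  unsigned(4 downto 0);
    rb_addr : in  unsigned(4 downto 0);
    -- latched operands, after the forwarding multiplexors
    op_a    : out std_logic_vector(WIDTH-1 downto 0);
    op_b    : out std_logic_vector(WIDTH-1 downto 0)
  );
end entity;

architecture rtl of regfile_fwd is
  type ram_t is array (0 to DEPTH-1) of std_logic_vector(WIDTH-1 downto 0);
  signal ram_a, ram_b : ram_t;                -- two copies of the register file, one per read port
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- both copies are written from the result bus
      if wb_en = '1' then
        ram_a(to_integer(wb_addr)) <= wb_data;
        ram_b(to_integer(wb_addr)) <= wb_data;
      end if;

      -- forwarding multiplexor + operand register: if the register being
      -- read is the one being written this cycle, take the result bus
      -- value instead of the (stale) RAM output
      if wb_en = '1' and wb_addr = ra_addr then
        op_a <= wb_data;
      else
        op_a <= ram_a(to_integer(ra_addr));
      end if;

      if wb_en = '1' and wb_addr = rb_addr then
        op_b <= wb_data;
      else
        op_b <= ram_b(to_integer(rb_addr));
      end if;
    end if;
  end process;
end architecture;

Each bit of ram_a/ram_b maps onto one 32x1 function-generator RAM, and the
two if/else branches are the forwarding multiplexors in front of the operand
registers; the floorplanning question is purely about where those columns
sit relative to the adder.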

All the best,
	Philip Freidin.


Article: 2046
Subject: Re: REPOST: Design Contest Write-up ( was "Jury Verdict + Test Benches" )
From: ekatjr@eua.ericsson.se (Robert Tjarnstrom)
Date: 6 Oct 1995 14:54:16 GMT
Links: << >>  << T >>  << A >>
In article <HUTCH.95Oct4093017@timp.ee.byu.edu>, hutch@timp.ee.byu.edu (Brad Hutchings) writes:
>
>This is probably a bit off of the original topic but I think that
>it is misleading to look only at the *clock rates* of microprocessors
>and ASICs. An ASIC may have a much lower clock rate than a microprocessor
>but, due to specialization, concurrency, etc., may achieve a much higher
>throughput rate than the microprocessor. FPGA-based systems that
>we build around here typically outperform microprocessor systems (5x - 10x)
>yet run at only approximately 1/10 to 1/5 of the microprocessor's clock
>rate. Clock rate is only part of the story...
>
Indeed, clock rate is only part of the story.  There has been an increase in
both complexity and concurrency in ASICs as well as in microprocessors.  The
concurrency and customisation available in ASICs can, for some applications,
mean that one ASIC can do a job which would otherwise have required several
processors.  However, if there is a trend of an increasing gap in operating
frequency between ASICs and processors, the number of such applications with an
ASIC advantage will reasonably be reduced.  Therefore, I think it is interesting
to look at the reasons for this gap.


Robert Tjärnström




Article: 2047
Subject: Re: REPOST: Design Contest Write-up ( was "Jury Verdict + Test Benches" )
From: ekatjr@eua.ericsson.se (Robert Tjarnstrom)
Date: 6 Oct 1995 15:22:00 GMT
Links: << >>  << T >>  << A >>
In article <4515ge$mo@hermes.synopsys.com>, jbuck@synopsys.com (Joe Buck) writes:
>ekatjr@eua.ericsson.se writes:
>>[arbitrary graph deleted]
>>Currently microprocessors perform about 5 times better than ASIC design ...
>
>Sigh.  How do you justify such a statement?  What's your benchmark?  What
>does this mean?
>
>5 times better on all applications?  Then why aren't ASICs dead now?  Why
>are commodity PCs filled with ASICs?  Why don't they consist entirely of
>processors to do all functions?
>
>Your statement is about as silly as "Currently, C++ performs 1.8 times
>better than C" (or any version with 1.8 replaced by some other number).

Well, I do not see much of a contribution on the issue there.

Constructive comments could be:

1) Regarding properties.
   Power dissipation is two orders of magnitude larger for processors, giving them
   a strong disadvantage.

2) Regarding the processor alternative.
   Inefficiency in software takes away the processor advantage to some extent.

3) Regarding organisational factors.
   Many people do not design ASICs often.  They have a broader competence and lack
   the more specialized competence needed to make higher-performance ICs.


Robert Tjärnström




Article: 2048
Subject: Need large vhdl codes
From: kyung@cslsun10.sogang.ac.kr (Cho Kyung Choon)
Date: 6 Oct 1995 15:30:00 GMT
Links: << >>  << T >>  << A >>

- I'm looking for lots of large VHDL code. -

I have been developing a partitioning program for FPGAs,
which places a large circuit into several FPGAs.
I am short of benchmark circuits, since I have not designed any circuits myself.

Is there anyone who will allow me to use a circuit, written in VHDL,
designed by an angel?  ;)
Or can you let me know where I can get some benchmark circuits?  (An ftp site list is OK!)

The larger, the better.

--
email : kyung@dalab2.sogang.ac.kr


Article: 2049
Subject: Re: FFT in FPGAs ?
From: Russell Petersen <russp>
Date: 6 Oct 1995 16:05:48 GMT
Links: << >>  << T >>  << A >>
jnoll@su19bb.ess.harris.com (John Noll) wrote:
>In article lpo@marina.cinenet.net, kirani@cinenet.net (kayvon irani) writes:
>> 	I'd like to know if any brave designer out there has tried to 
>> 	implement an FFT algorithm in an FPGA. Any one has experience
>> 	with Mentor/Synopsys tools that take your algorithms and output
>> 	VHDL synthesizable code?
>> 
>> 	Regrards,
>> 	Kayvon Irani
>> 	Lear Astronics
>> 	Los Angeles
>
>


I actually attempted this as part of my thesis.  I implemented an 8-bit FFT using two Altera
81188s; however, it did not have enough internal precision to be practical.  The main reason
for the attempt was simply to show that it was possible.  One particularly attractive
approach that could yield very good performance would be to combine an FPGA with an external
multiplication chip (even a complex-multiplier chip) and then use the FPGA for the control and
accumulation.  This has the possibility of giving good results, as illustrated by the
following numbers from my thesis (estimates based on back-of-the-envelope calculations, but
they should be fairly accurate, as I have some experience from my actual implementation of
the 8-bit FFT):

System             Precision     # of Chips     Computation Time
                                                64 pt      256 pt     1024 pt
TI TMS320C5x        16-bit           1          108 us     617 us     3.84 ms
PDSP16116/A         16-bit           2          11.52 us   61.4 us    0.307 ms

Section from my thesis describing the above numbers:

This is labeled as {\it PDSP16116/A} in the table and utilizes
the PDSP16116/A complex multiplier chip from GEC Plessey
Semiconductors.  This chip is capable of performing a 16-bit complex
multiplication in 50 ns.  The use of the custom multiplier chip 
greatly enhances the speed of the computation since the algorithm, as
laid out on the FPGA, is primarily compute bound rather than I/O
bound.  The limiting computational element is thus the complex multiplier.
With a 10 ns penalty for inter-chip delays added to the 50 ns
multiplication rate, speedups over the DSP processor of 9.4, 10, and
12.5 are obtained for the three FFT lengths.

The primary disadvantage of this approach is the cost of the multiplication chip (around $300
or so).
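
As a sanity check on the table, the PDSP16116/A entries follow directly from
the butterfly count of a radix-2 FFT (an assumption on my part; the excerpt
above does not name the algorithm, but the numbers imply it) at one complex
multiply every 50 ns + 10 ns = 60 ns, since the design is compute bound:

\[
\frac{N}{2}\log_2 N \;\text{multiplies} \times 60\,\text{ns}:\quad
192 \times 60\,\text{ns} = 11.52\,\mu\text{s},\quad
1024 \times 60\,\text{ns} = 61.44\,\mu\text{s},\quad
5120 \times 60\,\text{ns} = 307.2\,\mu\text{s}
\]

and the speedups over the TMS320C5x are \(108/11.52 \approx 9.4\),
\(617/61.4 \approx 10\), and \(3840/307 \approx 12.5\), matching the 9.4, 10,
and 12.5 quoted above.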


_______________________________________________

     **  **   ******  Russell Petersen
    **  **   *    *   russp@valhalla.fc.hp.com
   ******   ******    voice: (970) 229-7007
  **  **   *          
 **  **   *
_______________________________________________




