Messages from 63600

Article: 63600
Subject: external sdram and gdb tool
From: Tom <t_t_1232000@yahoo.com>
Date: Wed, 26 Nov 2003 11:14:57 +0100
Hi, 

I have a project where I store the entire program in the external
SDRAM (by redirecting every section in the linker script to the
SDRAM). When I download the program to the board, it doesn't work,
but when I run the program in the debugger tool, it works. Does
anybody know an answer to this problem?

regards, 

Tom

Article: 63601
Subject: Re: How many dedicated clock pins EP20K1500EBC652 device?
From: vbetz@altera.com (Vaughn Betz)
Date: 26 Nov 2003 03:23:28 -0800
Hi Yi,

My answers below.


> Vaughn,
> 
> Thank you very much for your help!
> 
> I was confused when reading apex.pdf: one place says 8 "dedicated
> clock and input pins", another says 4 "dedicated clocks". Now I know
> the difference: the other 4 are "fast input pins".
> 
> One question: to access the "dedicated fast resources", do I simply
> define an internal net as "global signal"? During compilation, I saw
> msgs like "promote signal XXX to global signal automatically". Does it
> mean it uses dedicated fast resources for that signal already?

Yes, if you make an assignment to a net of "Global Signal = On" you
will force it to use a dedicated global clock or fast global clock
network.  If you don't do this, Quartus will decide which signals get
the global networks.  Generally relying on the automatic global
promotion code is fine -- it basically puts the highest fanout clocks
on global networks first, followed by lower fanout clocks, followed by
asynchronous clears, until it runs out of either clock/aclr signals or
global networks.
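
In a Quartus settings (.qsf) file this corresponds to roughly the
following Tcl assignment (a sketch: "my_clk" is a placeholder net name,
and the exact form may differ between Quartus versions):

set_instance_assignment -name GLOBAL_SIGNAL ON -to my_clk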

> > 
> > The two resources aren't much different.  The dedicated clocks are
> > driven by dedicated input pins, while the fast dedicated networks can
> > be driven by bidirectional IOs or internal signals from the FPGA
> > fabric.  So most people just consider this 8 dedicated clocking /
> > asynchronous clear networks.
> 
> I just did an experiment: I used pin Y34 (a dedicated clock pin) to drive
> a few small modules and saw a clock skew of less than 0.1 ns; then I used
> pin B19 (fast1) to drive the same modules, and this time I saw a clock
> skew of more than 1.1 ns (skew observed in the layout/floorplan view).
> 
> Do you think this skew will be too large for the hold-time of the
> flip-flops on fpga?

The skew you're seeing on pin B19 sounds way too large.  Pin B19 can
drive both the fast network and regular routing.  To make sure it uses
the fast network, make a "global signal = on" assignment to the output
signal of that pin.  It sounds like you're just using regular routing
right now.  If the signal you put on B19 is not a clock or
asynchronous clear, Quartus will tend to prefer to use regular
routing, rather than the Fast network, unless you make this setting.

Since pin Y34 can *only* drive the dedicated clock network, just
putting a signal there is enough to force usage of that network.

> > clk1p and clk2p aren't connected together, so you can send 4 signals
> > in through the dedicated clock pins.
> 
> I wasn't clear in my first post: the FPGA is sitting on a DSP board,
> so the clk1p and clk2p are connected. I probably will cut them apart.
> 
> > 
> > The FAST pins drive the dedicated FAST networks, which can be used as
> > another 4 clock networks:
> 
> Same concern as I mentioned above: is > 1.1ns skew too large for them
> to be used as clock networks?

Their skew should be much better than that.  Try the experiment above
to make sure you are really using the Fast networks.

Regards,

Vaughn 
Altera

Article: 63602
Subject: Re: getting started in FPGA
From: vbetz@altera.com (Vaughn Betz)
Date: 26 Nov 2003 03:43:37 -0800
> > Howdy folks.
> >
> > I've got a recent BS in computer systems engineering, which is like
> > EE with some compsci mixed in.  I've used CPLDs, and really want to
> > get a good start in FPGAs so I can build my career in the 'embedded'
> > direction.
> >
> > How does one start out in fpga development given that funds are
> > limited ?
> >
> > thanks
> >
> > - moi


Hi,

http://www.altera.com/products/devkits/kit-dev_platforms.jsp lists
Altera's development kits and many 3rd party kits using Altera parts. 
Prices go from $99 to $7500.  The Stratix 1S10 based board at $395 and
the Cyclone 1C20 board at $495 look like pretty good choices to me for
someone on a budget.  They're supported by the free Quartus web
edition software.

You can go all the way down to the $99 MAX based board though if you
really want to keep costs down.

If you're still a university student, you can get a UP2 board which
has a 10K70 device along with Quartus and MaxPlus2 included, for $149
US.  See http://www.altera.com/education/univ/kits/unv-kits.html for
details.  I would strongly recommend you learn Quartus rather than
MaxPlus2 though -- while both support the 10K, Quartus is more
powerful and you will learn more using it.

Vaughn

Article: 63603
Subject: Re: graphic card accelarator vs. FPGA: which is better for the following task?
From: news@sulimma.de (Kolja Sulimma)
Date: 26 Nov 2003 05:23:38 -0800
"walala" <mizhael@yahoo.com> wrote in message news:<bpjug7$1so$1@mozo.cc.purdue.edu>...
> Dear all,
> 
> I guess this is a ray-tracing problem... But I need to do this task in as
> high as possible speed/throughput. Here is my problem:
> 
> Suppose I am given 25 rays and I am given a 3D cube and all parameters of
> these rays and cube are given...
> 
> I need to compute the length of the intersecting segment of the rays with
> this cube as fast as possible. If some rays completely fall outside of the
> cube, then it outputs 0, otherwise gives the length.

Let the cube be given by three normal vectors n1 to n3 and six points
p1 to p6 on the six planes.
(Actually you can use the same point for multiple planes.)
Assume your rays start at the origin and are given by a vector r of
length 1.
Then the intersection with the first plane happens at a distance
d1 = (n1.p1)/(n1.r) = (n1.p1) * (1/(n1.r))
See http://geometryalgorithms.com/Archive/algorithm_0104/algorithm_0104B.htm#Line-Plane%20Intersection
Then you order the planes according to d.
If the ray does not cross the three front planes first, the cube is
missed; otherwise the difference between the fourth and the third
distance is the length of the intersection.

So for each ray you get three divisions, four multiplications and a
couple of min/max cells.
(Many more optimizations are possible due to symmetries.)
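
To make the arithmetic concrete, here is a minimal software sketch of
the same test, simplified to an axis-aligned box so each pair of
parallel planes becomes a "slab" (function and variable names are
illustrative; a pipelined FPGA version would unroll the loop and handle
axis-parallel rays explicitly instead of leaning on IEEE infinities):

/* Length of the segment where a ray from the origin with unit
   direction r crosses the box [lo, hi]; 0 if the ray misses. */
#include <math.h>
#include <stdio.h>

double segment_length(const double r[3],
                      const double lo[3], const double hi[3])
{
    double dnear = -INFINITY, dfar = INFINITY;
    for (int i = 0; i < 3; i++) {
        double inv = 1.0 / r[i];                /* the divisions       */
        double d1  = lo[i] * inv;               /* the multiplications */
        double d2  = hi[i] * inv;
        if (d1 > d2) { double t = d1; d1 = d2; d2 = t; }
        if (d1 > dnear) dnear = d1;             /* the min/max cells   */
        if (d2 < dfar)  dfar  = d2;
    }
    if (dfar < dnear || dfar < 0.0) return 0.0; /* cube is missed      */
    if (dnear < 0.0) dnear = 0.0;               /* origin inside box   */
    return dfar - dnear;
}

int main(void)
{
    double r[3]  = { 1.0,  0.0,  0.0 };
    double lo[3] = { 1.0, -1.0, -1.0 };
    double hi[3] = { 2.0,  1.0,  1.0 };
    printf("%g\n", segment_length(r, lo, hi)); /* prints 1 */
    return 0;
}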

With integers you should be able to do that in a small Spartan-III in
a pipeline a lot faster than you can get data into the chip.

With floating-point numbers it should still be very fast in an FPGA,
but the design gets a lot more complicated and larger.

Have fun,

Kolja Sulimma

Article: 63604
(removed)


Article: 63605
Subject: Re: 5V I/O with 1.8V Core
From: Austin Lesea <austin@xilinx.com>
Date: Wed, 26 Nov 2003 07:36:30 -0800
Jim,

And what makes you think that we have not already done everything you 
have mentioned?

Serious issue:  no future solution on the horizon.  Hope someone out 
there is working on a solution right now.

Austin

Jim Granville wrote:
> "Austin Lesea" wrote
> 
>>Jim,
>>
>>The strained silicon makes for a better Idsat, so they can then up the
>>Vt, to lower the leakage, or leave the Vt low, so they can get the speed
>>they want.
>>
>>NO ONE has a clue how to solve the leakage issue.  Not even close.
>>Massive "head in the sand" approach in the industry just now beginning
>>to shake folks up to where they are beginning to really look at it....
> 
> 
> - that's what makes it interesting to follow :)
> 
> The best solutions will 'retro-fit' onto the massive fab equipment
> investments, but there are also design-time solutions: things like
> Power Route Switching (brute force), or variable oxide and/or variable
> threshold across the die (more finesse, needs process support), etc.
> 
>  In an FPGA, there could be future scope for standby-style design, with
> a LOGIC.Core Vcc and a separate BitStreamLatches.Vcc, allowing the
> speed-tuned logic to be powered off while the SRAM config info is held.
>  Gives designers the choice of faster wakeup from a very low power mode.
> 
>  Meanwhile, designers could deploy the emerging larger-flash, low-power
> uC devices (like my example LPC21xx) as 'smart-loaders' - tasked to
> remove power, and then re-load the (compressed & secure) FPGA info when
> needed.
> 
> -jg
> 
> 


Article: 63606
Subject: Re: Input pins without Vcco supply-- Virtex-II
From: Austin Lesea <austin@xilinx.com>
Date: Wed, 26 Nov 2003 07:43:06 -0800
Jay,

By using the banks that are unpowered as inputs, you are in effect 
powering up the bank by reverse biasing the protection structures 
(diodes) that are part of the pmos output transistor stack (basically, 
there is no separate diode, it is the junction of the pmos transistor 
itself).

That said, the bank requires ~ 2mA to power on (as long as it doesn't 
have to drive anything) so a bunch of inputs toggling effectively powers 
up the bank....

Doesn't hurt anything at all.  Can't say that we meet all specifications 
for timing, etc. but you are certainly able to function.

In fact, the clock inputs are no different than any others, so you are 
powering up the bank from those alone.

If you really want to power up the bank for test reasons, I would 
program an output to be a "1" with the PCI IO standard (one of the 
strongest) and then connect that pin to 3.3V.  That is good for ~ 60 mA 
of IO current without concern.  Need more outputs?  Parallel up a bunch 
of IOs as "1" to 3.3V.
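
In UCF terms that would look roughly like the line below (a sketch: the
net name and pin location are placeholders), with the design tying that
output to a constant '1':

NET "bank_pwr" LOC = "A8" | IOSTANDARD = PCI33_3 ;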


Austin

Jay wrote:

> Hi all,
> The factory made some mistakes when they had our V-II PCB board
> manufactured and assembled, and we found that only Vccint, Vccaux and
> Vcco4 are available for the FPGAs.
> To save time, we still want to do some debugging on this board before
> we can get our new board.
> Thank god that with these 3 Vcc supplies we can download our design
> through JTAG, and later I found that input signals of banks without
> Vcco can still be used (at least the GCLK; I've not tried other pins yet).
> So my question is: can anyone confirm that I really can use the input
> pins without Vcco? And how about their electrical characteristics,
> voltage tolerance, etc.?
> 
> 


Article: 63607
Subject: Re: How many dedicated clock pins EP20K1500EBC652 device?
From: enq_semi@yahoo.com (enq_semi)
Date: 26 Nov 2003 07:45:08 -0800
Hi, Vaughn,

> 
> Their skew should be much better than that.  Try the experiment above
> to make sure you are really using the Fast networks.
> 

I made sure it should use the Fast networks:

1. In assignment manager I have the following settings:
   -- aclk_in  => Clock Settings = my_clk @ 56MHz
   -- aclk_in  => Global Signal = Global Clock
   -- aclk_in  => Auto Global Clock = On

2. During compile, I saw this message:
   --Info: Promoted cell aclk_in to global signal automatically

I suspected that the timing parameters used by Quartus might not be
accurate; however, I verified two skews on an oscilloscope against the
timing report from Quartus, and the difference is very small (around
0.2 ns or so).

I also tried a non-clock, non-fast I/O pin, and the skew is much larger.
So maybe the 1 ns skew with the fast I/O pin is considered "useable as
clock"? (I certainly hope not.)

Thank you very much for your help!

Yi

Article: 63608
Subject: Re: running from external memory (microblaze)
From: "Frank" <someone@work.com>
Date: Wed, 26 Nov 2003 17:15:08 +0100
Thank you for your information. I am busy getting XMK working and
building an application with its own makefile (which specifies a
different start address than the bootloader). At this moment one thing
is not clear to me: how are interrupts handled? The MicroBlaze jumps to
address 0x18, but in my case that's the bootloader in block RAM, and I
want to handle the interrupts in my application, of course. Is that
possible, and if so, how?

TIA,
Frank

  "mohan" <mohan@xilinx.com> wrote in message news:3FC39566.9B1CF116@xilinx.com...
    
> Frank wrote:
> > Hello,
> > I have built a bootloader which is located in block RAM. Now I want to
> > download my final application to SDRAM and execute it. If I'm correct,
> > I have to make a linker script in order to make this possible.
>
> Not necessarily. You can simply specify a different start address for
> your boot loader and your application on the mb-gcc command line. See
> the Makefile_mb.sh and other Makefile*.* files related to MicroBlaze in
> the Xilkernel install area.
>
> > Besides this, I want to use the Xilkernel in my application, but not
> > in the bootloader; is this possible?
>
> Yes.
>
> > I guess I have to convert the application.elf file to a binary file
> > in order to be placed into SDRAM by the bootloader?!
>
> Depends on your bootloader. If your bootloader expects binary then you
> have to. I have seen bootloaders that use other formats as well, such
> as SREC or some application-specific encoding.
>
> > How are the interrupts handled? Is the interrupt handler from the
> > bootloader used (because it jumps to address 0x18 by default)? Can I
> > install a new interrupt handler which is located in my application?!
> > A lot of questions; I searched the forums, but there are not many
> > examples available. I'm sure there are people who have already done
> > this before.
>
> There are some examples of Xilkernel usage in the install area itself
> (search for print_thread.c).
>
> > Please help,
> > thanks
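For reference, the start-address suggestion above maps onto the mb-gcc
command line roughly as follows (a sketch: the file names and the
0x24000000 SDRAM base are placeholders, and _TEXT_START_ADDR is the
symbol consumed by the default MicroBlaze linker script):

mb-gcc app.c -o app.elf -Wl,-defsym -Wl,_TEXT_START_ADDR=0x24000000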


Article: 63609
Subject: IDE Ultra DMA on a SPARTAN II
From: steven derrien <steven_derrien@yahoo.fr>
Date: Wed, 26 Nov 2003 17:39:00 +0100
Hi folks,

In the context of a research project, we are currently
working on an IDE hard drive interface targeting the
Spartan-II/Virtex architecture.

On our prototype we simply used PIO mode, but we would
like to extend our controller so that it can handle UDMA.

The problem is that we just realized that this protocol is
somewhat tricky; more specifically, we are concerned about how
to implement the "source synchronous" aspects of the protocol,
especially for read operations (see below).

        _______________   ____________   __________________
                       \ /            \ /
Data        ???         X  Valid Data  X      ??????
        _______________/ \____________/ \________________
                         <------------>
                              5 ns

                                 _________________
DMARdy                          /                 \
        ________________________/                   \________

DMARdy and DATA are both sourced by the hard drive side.

According to the IDE/ATA spec (or at least what I understood
from it), the data should be sampled on a rising edge of
DMARdy.

To us, the simplest solution would be to use DMARdy
as a clock signal to sample the data in the IOB registers,
then re-synchronize to the FPGA system clock with another
register. However, we are not sure that this is a safe approach
with respect to signal integrity.

Besides, we realized (too late, of course) that the DMARdy
pin of the IDE drive is not connected to a GCK pin ...

Would anybody have some idea of how to solve this issue?

Thank you very much in advance.

Steven



Article: 63610
Subject: Re: Xilinx Microblaze SDRAM burst access
From: steven derrien <steven_derrien@yahoo.fr>
Date: Wed, 26 Nov 2003 17:45:37 +0100
Hi Dirk,

Dirk Ziegelmeier wrote:

> Hello Group,
> 
[ ...  ]


> Memcopy delivers unsatisfactory performance, an analysis shows that
> the main reason is that the SDRAM controller does not perform burst
> accesses to the RAM. The memcopy routine is quite optimal, it consists
> of four consecutive read and four write instructions, so bursting
> should work in theory.

I might be wrong, but I doubt that the MicroBlaze bus bridge is smart 
enough to detect and organize bursts directly from the CPU execution 
flow. I would rather think that burst accesses can only occur when you 
have a cache (the cache exchanges data with memory line by line, and 
each line transfer generates a burst).

> 
> Is there anything more I need to do to get bursts to work? I already
> spent two days reading manuals and FAQs trying to identify the
> problem, but no success.
> 
> TIA,
> Dirk


Article: 63611
Subject: Re: external sdram and gdb tool
From: Ryan Laity <ryan_dot_laity@x-i-l-i-n-x_pleasenospam_dot_com>
Date: Wed, 26 Nov 2003 10:01:29 -0700
Hi Tom,

How are you populating the SDRAM with your .elf file when not using the 
debugger? Perhaps you're doing this properly, but you didn't mention it 
here so I have to ask.  When downloading the .bit file via iMPACT, the 
only .elf data that you can possibly have is that in the Block RAM.  If 
you're not moving the .elf into SDRAM then that's the problem; you will 
need a bootloader of some sort to move the data from a static location 
(Flash, etc.) into SDRAM.

If you are already loading the .elf into SDRAM, then use the XMD tool to 
check the validity of the .elf file in memory.  What I typically do is 
run an object dump (either mb-objdump or powerpc-eabi-objdump) on the 
.elf file (I typically use the -S option) and pipe that out to a text 
file.  Next, connect to the device via XMD (ppcconnect or mbconnect) and 
do mrd's from the base address of your SDRAM.  If you do something like 
mrd 0xF0000000 20, it will dump the 20 words starting at 0xF0000000 
(obviously change this to match your SDRAM base address) and you can 
check them against the .elf file.  The boot section of the .elf is at 
the bottom of the file, so look there to start.  This will check that 
your bootloader is doing its job properly and that your system is able 
to read from the static location (Flash, etc.) properly (we already know 
that it can write to SDRAM properly because it works with the debugger).
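
For example (a sketch: the file names and the 0xF0000000 base address
are placeholders, and the XMD connect options depend on your setup):

mb-objdump -S executable.elf > executable.dump
xmd
XMD% mbconnect <options for your board>     (ppcconnect for PowerPC)
XMD% mrd 0xF0000000 20

Then compare those 20 words against the boot section at the bottom of
executable.dump.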

I hope this information helps.  If not, please post a follow up or open 
a case with Support.


Ryan Laity
Xilinx Applications


Tom wrote:

> Hi, 
> 
> I have a project where I store the entire program in the external
> SDRAM (by redirecting every section in the linker script to the
> SDRAM). When I download the program to the board, it doesn't work,
> but when I run the program in the debugger tool, it works. Does
> anybody know an answer to this problem?
> 
> regards, 
> 
> Tom


Article: 63612
Subject: Re: area constraints
From: Steve Lass <lass@xilinx.com>
Date: Wed, 26 Nov 2003 10:06:28 -0700
A.y wrote:

>Steve Lass <lass@xilinx.com> wrote in message news:<bq017p$ha32@cliff.xsj.xilinx.com>...
>
>>A.y wrote:
>>
>>>In Xilinx Floorplanner, or manually (using ISE 5.1):
>>>1. Can I assign flexible area constraints to individual modules in a
>>>design? I mean, can I say a module should fit in this much area of
>>>some shape, without specifying absolute slice locations in the area
>>>group? And
>>>
>>You can specify that the module(s) should be grouped together, but you
>>can't specify the size and shape unless you give it a location.
>>
>hello,
>thank you very much for the reply.
>could this have been a useful facility ?
>
Absolutely.

>if yes, will it be available in future releases ?
>
Yes, but it might take a year or two to finish.

Steve

>>>2. Can structures other than rectangles be specified ?
>>>
>>You can put multiple rectangles together to form T, L, U, etc. shapes.
>>PACE is the easiest way to create these area groups.
>>
>>Steve
>
>thank you very much
>ay


Article: 63613
Subject: Re: Reverse engineering an EDIF file?
From: SNIPrf_man_frTHIS@yahoo.com (Frank Raffaeli)
Date: 26 Nov 2003 09:14:56 -0800
> 
> Rastislav Struharik wrote:
> > Hello,
> > 
> > I would like to know: is it possible to reverse
> > engineer an EDIF netlist file? I am currently developing an FPGA core.
> > I would like to supply an evaluation version of the core, that would
> > have all the functionality of the final core, but would operate only
> > for a limited period of time. My fear is that there is a way to modify
> > the evaluation version edif netlist (find and remove modules that set
> > a time limit to the operation of the evaluation version), and thus
> > obtain completely functional core. Can something like this be done, or
> > am I being paranoid?
> > Every help and clarification on this subject is most welcome.
> > 
> > Thanks in advance,
> > Rastislav Struharik

Jim Lewis <Jim@SynthWorks.com> wrote in message news:<3FB017CB.6080109@SynthWorks.com>...
> Sorry.  An EDIF file should be pretty straight forward to
> reverse engineer.  I have had to edit one before.  You could
> make it difficult by obfuscating it.  One method changes
> all names to be sequences of letters O and l and numbers
> 0 and 1.  This would not deter those that are determined
> though.  If there is money involved, people are determined.
> 
> You would be better off working with a good legal agreement
> and people who you can trust to abide by it.
> 
> Cheers,
> Jim

One of my assignments 10 years ago was to develop a routing engine for
Lattice Semi's ISP PLDs. It involved parsing an EDIF file into logic
equations. Jim is right: it would be easy to reverse engineer.

The best advice I ever got about legal agreements was this:
"Get to know who you're dealing with. There is no document that can
make it worthwhile to deal with someone who is dishonorable."

If you can protect the design by making it difficult and expensive to
reverse engineer ... that may be time and money well spent. It also
gives you and your clients a better competitive advantage.

Frank Raffaeli
http://www.aomwireless.com/

Article: 63614
Subject: Re: Slightly unmatched UART frequencies
From: Philip Freidin <philip@fliptronics.com>
Date: Wed, 26 Nov 2003 17:23:46 GMT
On Tue, 25 Nov 2003 07:20:58 -0800, "juergen Sauermann" <juergen.sauermann@t-online.de> wrote:
>Philip, 
>after thinking about the problem once more, I hate to admit that, yes, you are right. 

Sorry.

>I still do not believe, though, that inserting idle time one way or the other
>(including cutting the transmitter's stop bit) is a solution. Consider the following: 
>
>Left side: Slow (9600 Baud) 
>Right side: Fast (9700 Baud) 
>
>Both sides use e.g. 8N2 for Tx and 8N1 for Rx. 

The extra stop bit gives you about 9% margin; the difference between
9600 and 9700 is about 1%.
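
To spell that out: an 8N2 transmitter occupies 11 bit times per
character while an 8N1 receiver needs only 10, so the receiver's clock
may run up to 1 - 10/11 = 9.1% slow before characters arrive faster
than it can frame them; the 9600-vs-9700 mismatch is only
100/9700 = 1.0%.  With 8N1 at both ends the budget is zero.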

>At some point in time, Left sees its buffer filling up and hence skips
>a few stop bits here and there (using 8N1) in order to compensate.
>Left is now faster than Right, despite the clock rates.

I agree. More exactly, Left's RX is faster than Right's TX.

The examples I gave in my prior post help with unidirectional
messages between a faster transmitter and a slower receiver, and
assume the system at the slower receiver can process the received
character in the local clock domain in 10 bit times. But if you
retransmit the character with 2 stop bits in the slower clock
domain, that takes 11 bit times, and the system will fail. So
retransmitting with 11 bits throws away the advantage of the
RX using 1 stop bit and the far end TX using 2 stop bits.

If you knew which system had the slower clock, you could set its
transmitter for 1 stop bit and then the system would work.
Unfortunately this is not normally possible.


>As a consequence, Right sees its buffer filling up and skips
>stop bits (using 8N1) as well.
>This continues until both sides transmit with 8N1 all the time;
>at this time Left will lose data. 

This is not what I intended. I am assuming that the number of
stop bits is fixed, and is dependent on which end has the faster
clock.

Try this:

Left side: Slow (9600 Baud)
           RX: 1 + 8 + 1
           TX: 1 + 8 + 1

Right side: Fast (9700 Baud) 
           RX: 1 + 8 + 1
           TX: 1 + 8 + 2

I believe you can send stuff continuously all day this way without
over-run, in both directions. Only problem is that you need to know
which end has the faster clock.

You could figure this out by running both TX at 1+8+1 and seeing which
RX has over-run errors first, then adjusting the other end's TX. Not
pretty, but it could be made to work. Maybe OK for a one-off project,
but not for production.


>Thus, there must be some kind of understanding between Left and Right,
>which of the two is the "clock master", that ultimately controls the
>transmission speed. Unfortunately this is sometimes not possible, for
>instance in symmetric configurations.

I agree. hence the need for flow control.

>/// Juergen

Philip



Philip Freidin
Fliptronics

Article: 63615
Subject: Re: running from external memory (microblaze)
From: mohan <mohan@xilinx.com>
Date: Wed, 26 Nov 2003 10:24:17 -0800
MicroBlaze always jumps to a fixed low address (0x8 or something like 
that) on interrupt.  The initialization code for your application writes 
the address of your interrupt handler into this low memory, so that when 
an interrupt occurs, MicroBlaze jumps to the fixed low address and from 
there to your interrupt handler.

Frank wrote:

> At this moment one thing is not clear to me: how are interrupts handled?
> The MicroBlaze jumps to address 0x18, but in my case that's the bootloader
> in block RAM. And I want to handle the interrupts in my application, of
> course.



Article: 63616
Subject: Re: 5V I/O with 1.8V Core
From: Peter Alfke <peter@xilinx.com>
Date: Wed, 26 Nov 2003 11:21:16 -0800
Source-to-drain leakage is caused by "off" transistors not being
completely turned off.
This is caused by a Vcc/Vthreshold ratio that is not ideal.
Fundamentally, it is a trade-off between significantly higher leakage
current at significantly higher speed, vs. both of these parameters
being significantly lower. There is no magic trick (at least not yet,
and none on the horizon).

And for most (not all!) FPGA applications, speed is king, and leakage
current is whatever it is. 
This may change one day...

Peter Alfke
========================
Tullio Grassi wrote:
> 
> On Tue, 25 Nov 2003, Austin Lesea wrote:
> 
> > THE problem today  however is drain source leakage.
> 
> Out of curiosity: there is a design approach that completely kills
> the source-to-drain leakage; it's used in radiation environments.
> Unfortunately, all other performance is greatly reduced.
> It's published in Nucl. Instrum. Methods Phys. Res., A 439 (2000) 349-60
> 
> http://weblib.cern.ch/cgi-bin/ejournals?publication=Nucl.+Instrum.+Methods+Phys.+Res.,+A&volume=439&year=2000&page=349

Article: 63617
Subject: Re: IDE Ultra DMA on a SPARTAN II
From: noone <>
Date: Wed, 26 Nov 2003 12:13:58 -0800
Ultra DMA? Which spec are you referring to? I assume you're talking about ATA-6 / Ultra DMA-100.
If that's true, data is clocked by HSTROBE or DSTROBE during Ultra DMA, while DDMARDY or
HDMARDY are for flow control.
You may download the spec from the T13 web site and digest it; man, they talk a lot in there.



Article: 63618
Subject: IDE Ultra DMA on a SPARTAN II (corrected version)
From: steven derrien <steven_derrien@yahoo.fr>
Date: Wed, 26 Nov 2003 22:40:24 +0100
Hi folks,

In the context of a research project, we are currently
working on an IDE hard drive interface targeting the
Spartan-II/Virtex architecture. One important point is
that the drive is not connected through a ribbon cable
but rather directly to the board through a connector
(see: http://www.irisa.fr/symbiose/people/lavenier/RDisk/ )

So far we have simply used PIO mode, but we would like to
extend our controller so that it can handle Ultra DMA.

The problem is that we just realized that this protocol is
somewhat tricky; more specifically, we are concerned about how
to implement the "source synchronous" side of the protocol,
especially for read operations (see below).

        ________   ____________   ________   ____________
                \ /            \ /        \ /
Data             X  Valid Data  X  ??????  X  Valid Data
        ________/ \____________/ \________/ \____________

                        __________________________
DStrobe                /                          \
        _______________/                            \________

DStrobe and DATA are both sourced by the hard drive side.

According to the ATA-6 spec (or at least what I understood
from it), the data should be sampled on both the rising and
falling edges of DStrobe.

We believe that the simplest solution would be to use DStrobe
as a clock signal to sample the data (on both edges) in
some registers, then re-synchronize these registers to
the FPGA system clock with another register stage.

However, we are not sure that this is the best (or even a good)
approach. Besides, we realized (too late, of course) that
the DStrobe pin of the IDE drive is not connected to a GCK
pin of our FPGA, probably making it difficult to use this
signal as a clock ...

Would anybody have some hints or advice on how to tackle
this issue?

Thank you very much in advance.

Steven Derrien



Article: 63619
Subject: Re: IDE Ultra DMA on a SPARTAN II
From: steven derrien <steven_derrien@yahoo.fr>
Date: Wed, 26 Nov 2003 22:42:08 +0100
 > Ultra DMA? Which spec are you referring to? I assume you're talking
 > about ATA-6 / Ultra DMA-100. If that's true, data is clocked by
 > HSTROBE or DSTROBE during Ultra DMA, while DDMARDY or HDMARDY are
 > for flow control.

Sorry, I really should have double-checked before posting.
It is indeed the DStrobe signal that I should have
referred to. I have cancelled my previous message and posted
a more accurate version (at least I hope it is so).

 > You may download the spec from the T13 web site and digest it; man,
 > they talk a lot in there.


I have looked at the spec, but it doesn't really answer my questions.

Regards

Steven


Article: 63620
Subject: Re: what is the fastest speed that FPGA deals with CPU?
From: news@sulimma.de (Kolja Sulimma)
Date: 26 Nov 2003 13:59:29 -0800
"walala" <mizhael@yahoo.com> wrote in message news:<bq0jie$kv2$1@mozo.cc.purdue.edu>...
> Dear all,
> 
> Is PCI the only convenient interface that talks with the CPU by inserting
> something into a computer? What is the speed of that? Is there
> any faster method?
PCI is the only relatively fast interface that is available on almost
any computer for internal extensions and has good operating system
support for hardware drivers. The usual 32-bit, 33 MHz PCI can provide
133 MB/s in theory, of which about 90 MByte/s are usable without too
much effort for writes, considerably less for reads. There are also
66 MHz and/or 64-bit variants available on more expensive mainboards.
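(That theoretical figure is just bus width times clock rate:
4 bytes x 33 MHz = 133 MB/s; the 64-bit and 66 MHz variants each
double it.)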

But there are lots of alternatives.
There are faster "PCI-like" interfaces like AGP, PCI-X, PCI-Express
(only possible with some FPGAs).
You can also connect your hardware via ATA or SCSI busses. That's
about the speed of PCI. For data acquisition tasks you can make your
hardware look like a tape drive so that you do not need to write a
driver and readout software. Just use the tar command for that.

Probably the fastest interface inside a PC is a memory slot. But
getting OS support for your device in this case is not straightforward.
There are a couple of GByte/s available there, though.

And then you can use all the external interfaces: FireWire, USB,
Ethernet, ...

Kolja Sulimma

Article: 63621
Subject: Re: Where and How to get Nvidia Geforce 5600 public desigh graph
From: "Ken Ryan" <>
Date: Wed, 26 Nov 2003 14:07:51 -0800
Last I heard Quantum3D was NVidia's channel for embedded applications. 
They're very nice people to work with, but be prepared to sign an NDA 
and hand them a VERY large check before getting anything. 
(if you're doing this for hobby or school, forget it). 

ken 



Article: 63622
Subject: Re: Soft-core processor construction
From: do_not_reply_to_this_addr@yahoo.com (Sumit Gupta)
Date: 26 Nov 2003 14:24:20 -0800
Larry Doolittle <ldoolitt@recycle.lbl.gov> wrote in message news:<slrnbs8ca0.jk4.ldoolitt@recycle.lbl.gov>...
> In article <3FC438D4.542D2E67@acm.org>, Thad Smith wrote:
> > sai a wrote:
> > 
> >> 2. How do you go about building [a soft-core processor]?
> > I suppose you either design your own with standard logic components or
> > license an existing a design.  I believe there is one processor design
> > with some sort of free licensing.
> > 
> > I added comp.arch.fpga to the group list.  This should bring in some
> > more knowledgeable folk.
> 
> Go read http://www.fpgacpu.org/links.html .  There are a ton of
> free and educational soft cores from which to learn.  If, after
> studying this material for a while, you have more questions, by
> all means come back to comp.arch.fpga.
> 
>   - Larry


Look at http://www.c-nit.net/

Sumit

Article: 63623
Subject: Re: 5V I/O with 1.8V Core
From: John Williams <jwilliams@itee.uq.edu.au>
Date: Thu, 27 Nov 2003 08:48:09 +1000
Hi Peter,

Peter Alfke wrote:
> And for most (not all!) FPGA applications, speed is king, and leakage
> current is whatever it is. 

 > This may change one day...

I certainly hope so.  There are lots of very interesting things that 
FPGAs could do in handheld consumer multimedia applications, but power 
consumption kills them in that market.  That's what PACT and Quicksilver 
are betting on...

Regards,

John


Article: 63624
Subject: Re: what is the fastest speed that FPGA deals with CPU?
From: "walala" <mizhael@yahoo.com>
Date: Wed, 26 Nov 2003 17:50:46 -0500

"Kolja Sulimma" <news@sulimma.de> wrote in message
news:b890a7a.0311261359.2e99abde@posting.google.com...
> "walala" <mizhael@yahoo.com> wrote in message
> news:<bq0jie$kv2$1@mozo.cc.purdue.edu>...
> > Dear all,
> >
> > Is PCI the only convenient interface that talks with the CPU by
> > inserting something into a computer? What is the speed of that? Is
> > there any faster method?
>
> PCI is the only relatively fast interface that is available on almost
> any computer for internal extensions and has good operating system
> support for hardware drivers. The usual 32-bit, 33 MHz PCI can provide
> 133 MB/s in theory, of which about 90 MByte/s are usable without too
> much effort for writes, considerably less for reads. There are also
> 66 MHz and/or 64-bit variants available on more expensive mainboards.
>
> But there are lots of alternatives.
> There are faster "PCI-like" interfaces like AGP, PCI-X, PCI-Express
> (only possible with some FPGAs).
> You can also connect your hardware via ATA or SCSI busses. That's
> about the speed of PCI. For data acquisition tasks you can make your
> hardware look like a tape drive so that you do not need to write a
> driver and readout software. Just use the tar command for that.
>
> Probably the fastest interface inside a PC is a memory slot. But
> getting OS support for your device in this case is not straightforward.
> There are a couple of GByte/s available there, though.
>
> And then you can use all the external interfaces: FireWire, USB,
> Ethernet, ...
>
> Kolja Sulimma

Hi, Kolja,

Thanks a lot for your help!

Can you compare those internal and external interfaces in a little
more detail?

For example, USB vs. PCI? Or PCI-X vs. FireWire?

I guess USB is not as fast as PCI, right?

Thanks a lot,

-Walala




