Messages from 127275

Article: 127275
Subject: Re: Why the core dynamic power isn't 0 when the toggle rate is 0
From: Rebecca <pang.dudu.pang@hotmail.com>
Date: Sun, 16 Dec 2007 15:58:09 -0800 (PST)
Thank you all very much for the detailed explanations. I know that static
power consumption is significant in modern FPGAs; I am asking about dynamic
power only here. Austin points out that even when the toggle rate is 0,
besides the power consumed by the clock tree itself, some internal logic
(for example, the internal logic of a flip-flop) still switches on every
clock edge and therefore consumes power. That case, however, is not the one
I am interested in.
I am asking because I am considering a design built from coarse-grained
function blocks, where some blocks may be active while others are idle. I
was planning to use the power consumption at toggle rate = 0 as the
estimate for the idle parts. In current FPGAs, is there any technology that
supports a power-efficient solution for this case? For example, if the
clock for each coarse-grained function block is provided via a DCM, can a
particular clock be disabled so that no dynamic power is consumed by the
idle parts?

Thanks again,
Rebecca

Article: 127277
Subject: Re: How do you initialize Xilinx ISOCM memory using DCR interface
From: David Hand <hand@no-spam-corestar-corp.com>
Date: Sun, 16 Dec 2007 20:32:26 -0500
Peter,

Peter Ryser wrote:
> Dave,
> 
>  > In AR# 19804 at http://www.xilinx.com/support/answers/19804.htm, the
> 
> you need to look at OCM like the caches in a Harvard architecture, i.e. 
> dedicated blocks for data and instruction accesses. Just as it is not 
> possible during program execution to read data from the instruction 
> cache, it is not possible to read data from the IOCM. The IOCM is 
> actually a little bit worse because the data (code) gets in there 
> through the debugger or a program like the one you have written, not 
> through a processor-initiated load operation, i.e. data loaded by the 
> processor through a load operation can never end up in the instruction cache.
> 
> For OCM that means that the compiler and linker strictly need to 
> separate instruction and data sections. Now, the newlib libraries are 
> written something like this:
>     .text
> data:
>     .long 0xdeadbeef
> code:
>     lis r2,     data@h
>     ori r2, r2, data@l
>     lwz r3, 0(r2)
> 
> Instruction and data are mixed within a .text segment. When the debugger 
> or your program loads that code into ISOCM the "data" variable becomes 
> inaccessible to the "code" because of the Harvard nature of OCM.

That part makes sense. It's clear I can't load newlib code into ISOCM. 
The part I don't understand is why I can't make a call from code that is 
in ISOCM to a newlib function that is left in say SRAM. In other words, 
am I allowed to do this:

void MyISOCMFunction() // Function in ISOCM
	{
	int c = getchar();	// getchar() in usual place (say SRAM).
	}

> 
>> AR# 19099, at http://www.xilinx.com/support/answers/19099.htm, seems to 
> 
> This problem only occurs on the DOCM and not on the IOCM. The best way 
> to work around this problem is to turn off interrupts and exceptions 
> before accessing data in the DOCM. If that does not work, then, yes you 
> have to write the routines that access the DOCM in assembler or 
> disassemble the compiler generated code and inspect it so that the 
> instruction sequence doesn't occur.



Thanks

Article: 127278
Subject: sampling error between 2 clocks
From: wxy0624@gmail.com
Date: Sun, 16 Dec 2007 19:56:28 -0800 (PST)
Xilinx V4SX35
ISE 8.2.03
Modelsim


I have CLKI (300 MHz) and CLKI_DIV (150 MHz); CLKI_DIV is generated by a
divide-by-two counter (just a flip-flop) clocked by CLKI, and both clocks
go through BUFGs. I then use CLKI to sample a 160-bit data bus generated in
the CLKI_DIV domain. Simulation produces warnings saying that the setup
time is not met during sampling. How can I constrain PAR to get enough
setup time?

Because of functional requirements, I cannot use a DCM or OSERDES. The
minimum delay between the rising edges of CLKI_DIV and CLKI is much more
than the period of CLKI. I need all bits of the bus to be sampled by CLKI
on the same edge, but in practice some bits are always sampled one CLKI
period later or earlier. I can constrain the maximum delay from the last
150 MHz flip-flop to the first 300 MHz flip-flop, but how can I constrain
the minimum delay?


Thank you!!

Article: 127279
Subject: Re: DDS generator with interpolated samples for Spartan3E development
From: Chris Maryan <kmaryan@gmail.com>
Date: Sun, 16 Dec 2007 20:26:04 -0800 (PST)
On Dec 14, 3:00 am, Frank Buss <f...@frank-buss.de> wrote:
> Brian Davis wrote:
> > Anyhow, if Frank can afford multiple cycles per output sample,
> > the most compact implementation would probably be a word-serial
> > CORDIC rotation, which doesn't need any multipliers at all, just
> > three accumulators, shift muxes, and a small arctan lookup table.
>
> >  I thought that had been mentioned earlier, but I don't see it
> > in re-reading the thread tonight.
>
> There was another thread last month in this newsgroup, with this link:
>
> http://www.ht-lab.com/freecores/cordic/cordic.html
>
> Maybe I'll use this for the low frequency (<200kHz) generator and a lookup
> table for high frequency and arbitrary waveforms. The "Direct Digital
> Frequency Synthesizers" book is shipped, I'm sure there are some other
> interesting ideas and implementation hints in the book.
>
> --
> Frank Buss, f...@frank-buss.de, http://www.frank-buss.de, http://www.it4-systems.de

You might also look at oscillator algorithms.
Clay Turner has some recursive oscillator info that's fairly good:
http://www.claysturner.com/dsp/
The second paper has a lot of good info. Amplitude stabilization as he
discusses it is crucial.

It's not as fast as an LUT, but it's pretty flexible.

Chris
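
For reference, the lookup-table approach mentioned earlier in the thread
boils down to a phase accumulator whose top bits address a sine table; a
minimal VHDL sketch (entity name, widths and ports are assumptions for this
example, not taken from any post in the thread):

-- Sketch only: DDS phase accumulator; a sine ROM / block RAM indexed by
-- lut_addr (not shown) turns the phase into an output sample.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity dds_phase_acc is
    generic (
        ACC_WIDTH  : integer := 32;  -- frequency step = fclk / 2**ACC_WIDTH
        ADDR_WIDTH : integer := 10   -- sine table has 2**ADDR_WIDTH entries
    );
    port (
        clk       : in  std_logic;
        freq_word : in  unsigned(ACC_WIDTH-1 downto 0);   -- tuning word
        lut_addr  : out unsigned(ADDR_WIDTH-1 downto 0)   -- feeds the sine table
    );
end entity dds_phase_acc;

architecture rtl of dds_phase_acc is
    signal phase : unsigned(ACC_WIDTH-1 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            phase <= phase + freq_word;   -- wraps naturally mod 2**ACC_WIDTH
        end if;
    end process;

    lut_addr <= phase(ACC_WIDTH-1 downto ACC_WIDTH-ADDR_WIDTH);  -- top bits
end architecture rtl;

The output frequency is freq_word * fclk / 2**ACC_WIDTH; interpolation or
dithering schemes work on the low phase bits that the table lookup discards.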

Article: 127280
Subject: Ethernet data rates using Spartan-3 FPGA
From: tmpstr <tmpstrbox@yahoo.com>
Date: Sun, 16 Dec 2007 21:03:42 -0800 (PST)
Hi,
 I am trying to build a device that will need to move huge amounts of
data (~500 MB) from a device under test to a PC.
 The current device uses USB 2.0 to transfer the data.
 Because the device will now be located at a distance from the people
using it (in a lab 200 m away, or in another city), an Ethernet interface
(100 Mbps and above) is being planned.
 The device will be built in small quantities (~100 or so), so the
one-time cost also needs to be low.
 I am planning to use a Spartan-3 (XC3S1200E or so) + (soft core +
TCP/IP stack) + MAC + external PHY combination for this device.

 My questions are:
 1) What data rate can a Spartan-3 achieve over the Ethernet
interface? (Can it do 100 Mbps in practice?)
 2) Xilinx seems to provide a MAC core that is quite costly ($5000);
are there any alternative MACs? (In particular, is the MAC available on
the OpenCores website capable of 100 Mbps and above?)
 3) As MicroBlaze too costs a significant sum (~$20000), are there any
other open-source processor cores + TCP/IP stacks that people have had
success with?
 4) Can I put a gigabit MAC inside a Spartan FPGA, or will I need to
choose another FPGA if I plan to upgrade this device to gigabit Ethernet
later? (The Spartan-3 series is being looked at because of its
affordability.)

Any other solution that you think is capable of giving higher data
rates on ETH i/f is welcome.

Thanks for your time.
-mp

Article: 127281
Subject: Xilinx MAC experience ?
From: "Robert Lacoste" <use-contact-at-www-alciom-com-for-email>
Date: Mon, 17 Dec 2007 08:40:21 +0100
Hi all,

We are going to start a project in which we will need to use the hardware 
Ethernet MAC module in a Virtex 5 for high-speed transfers (around 400KB/s) 
over 1000BASE-T. I'm looking for feedback from experience: has anyone used 
this module without a third-party TCP/IP stack (we have in mind to generate 
fixed-size UDP frames directly from the logic, because of the high 
throughput we need, maybe with the help of a MicroBlaze for configuration 
of addresses etc.)? Difficulties? Quality of the documentation? Virtex 4 
vs 5? Reference designs or good sources of information?

Many thanks,
Yours,
Robert 



Article: 127282
Subject: Re: Ethernet data rates using Spartan-3 FPGA
From: "comp.arch.fpga" <ksulimma@googlemail.com>
Date: Mon, 17 Dec 2007 00:30:53 -0800 (PST)
At these low quantities I suggest that you use a device that has
ethernet MACs included. (Virtex-4 FX, Virtex-5 LXT).
The chip will be more expensive, but development will be quicker and
simpler and you don't need to pay for
the ethernet core license. Also, you get gigabit ethernet with that
solution which would otherwise cost 25k$.
In V4FX you also get a PowerPC.

Apart from that, spartan-3 should be easily able to handle 100mbps, at
least with the local link ethernet cores
and some hardware support for packet building.

Kolja Sulimma


On 17 Dez., 06:03, tmpstr <tmpstr...@yahoo.com> wrote:
> Hi,
>  I am trying to build a device that will need to move huge amounts of
> data (~500 MB) from a device under test to PC.
>  The current device uses USB 2.0 to transfer data.
>  As the device will now be located at a distance from the people using
> it ( in a lab 200 mts away or in another city) ETH interface (100 Mbps
> and above) is being planned.
>  The device will be built in small quantities ~100 or so, therefore
> onetime cost also needs to be low.
>  I am planning to use a Spartan-3 (XC3S1200E or so) + (softcore +
> TCPIP stack)+ MAC + external PHY
>  combination for this device.
>
>  My questions are:
>  1) what is the data rate that Spartan-3 can do over the ethernet
> interface ? (can it do 100 Mbps practically ?)
>  2) Xilinx seems to be providing a MAC core that is quite costly
> (5000$) are there any alternative MACs.
>     (rather, is the MAC that is available on opencores website capable
> of gives 100 Mbps and above)
>  3) As Microblaze too costs a significant sum (~20000$), any other
> opensource processor core + TCPIP stack
>  people have had success with ?
>  4) Can i put Gigabit MAC inside Spartan FPGAs or will i need to
> choose other FPGA if i plan to upgrade
>     this device to Gigabit ETH later ? (Spartan-3 series is being
> looked at because of its affordability)
>
> Any other solution that you think is capable of giving higher data
> rates on ETH i/f is welcome.
>
> Thanks for your time.
> -mp


Article: 127283
Subject: Re: Xilinx MAC experience ?
From: "Robert Lacoste" <use-contact-at-www-alciom-com-for-email>
Date: Mon, 17 Dec 2007 10:19:49 +0100
Of course I'm talking of 400Mb/s and not KB...

"Robert Lacoste" <use-contact-at-www-alciom-com-for-email> a écrit dans le 
message de news: 476627e9$0$863$ba4acef3@news.orange.fr...
> Hi all,
>
> We are going to start a project in which we will need to use the hardware 
> MAC Ethernet module in a Virtex 5 for high speed transferrs (around 
> 400KB/s) through 1000BT. I'm looking for experience feedbacks : Anyone who 
> has used tihs module without a third-party TCP/IP stack (as we have in 
> mind to generate static size UDP frames directly from the logic due to the 
> high throughput we need, may be with the help of a microblaze for 
> configuration of addresses etc) ? Difficulties ? Quality of the 
> documentation ? Virtex 4 vs 5 ? Reference designs or good sources of 
> information ?
>
> Many thanks,
> Yours,
> Robert
> 



Article: 127284
Subject: Re: sampling error between 2 clocks
From: "Symon" <symon_brewer@hotmail.com>
Date: Mon, 17 Dec 2007 10:13:26 -0000
<wxy0624@gmail.com> wrote in message 
news:58990fd1-be0d-410e-85ad-37765935953b@e6g2000prf.googlegroups.com...
> Xilinx V4SX35
> ISE 8.2.03
> Modelsim
>
>
> I got CLKI(300MHz), CLKI_DIV(150MHz) generated through a counter(just
> a flip_flop) clocked by CLKI, both clocks connect to BUFG. Then I use
> CLKI to  sample data generated byCLKI_DIV(width=160bit), simulation
> result in some warnings which said setuptime is not enough during
> sampling. How can  I constraint PAR to get enough setuptime?
>
> Because of funtion request, I can not use DCM and OSERDES. The minimum
> delay between risingedge of CLKI_DIV and CLKI is much more than the
> period of CLKI. I have to make sure all simultaneous data sampled by
> CLKI simultaneously. But actually, there always some bits sampled a
> period(CLKI) later or earlier. I can constraint the max delay from the
> last 150MHz flip-flop to the first 300MHz flip-flop, but how can I
> constraint the minimum delay?
>
>
> Thank you!!

Dear Whoever,
Use CLKI to clock _all_ the synchronous elements. Use CLKI_DIV as the clock 
enable for all the synchronous elements you were going to clock with 
CLKI_DIV.
HTH., Syms. 
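
For illustration, a minimal VHDL sketch of the clock-enable scheme described
above (the entity, signal names and 160-bit width are assumptions for this
example, not taken from the original posts):

-- Sketch only: a single clock (clki) drives every register; the divided
-- rate exists only as a one-cycle-wide clock enable, so there is no
-- separate 150 MHz clock domain and no cross-domain sampling problem.
library ieee;
use ieee.std_logic_1164.all;

entity ce_divider is
    port (
        clki : in  std_logic;                       -- 300 MHz, the only clock
        rst  : in  std_logic;
        d    : in  std_logic_vector(159 downto 0);  -- data produced at half rate
        q    : out std_logic_vector(159 downto 0)
    );
end entity ce_divider;

architecture rtl of ce_divider is
    signal ce : std_logic := '0';                   -- toggles every clki cycle
begin
    process (clki)
    begin
        if rising_edge(clki) then
            if rst = '1' then
                ce <= '0';
            else
                ce <= not ce;                       -- high every other cycle
            end if;
        end if;
    end process;

    process (clki)
    begin
        if rising_edge(clki) then
            if ce = '1' then            -- registers that used to run on CLKI_DIV
                q <= d;
            end if;
        end if;
    end process;
end architecture rtl;

All paths then live in the single CLKI domain, so the ordinary period
constraint on CLKI covers them and the min-delay question disappears.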



Article: 127285
Subject: global clock (gclk) input at xilinx virtex4 fpga
From: "Denkedran Joe" <denkedranjoe@googlemail.com>
Date: Mon, 17 Dec 2007 11:18:15 +0100
Hi there,

I'm using a Virtex4 FX100 FPGA (package FF1517) in a board design and I 
wonder if it is enough to use just one gclk input on the device or if it's 
advisable to use more than one due to the large package size...? Does it 
make any difference where I put the gclk input(s)? Thank you for your 
support...

Regards Joe 



Article: 127286
Subject: Debugging EDK DDR interface
From: Guru <ales.gorkic@email.si>
Date: Mon, 17 Dec 2007 02:43:38 -0800 (PST)
Hi all,

We built a custom board with a Spartan-3 1200E and a Qimonda 32M x 16 DDR
(HYB25DC512160CF-6). The schematic is similar to the Spartan-3E Starter
Kit. Now I am trying to bring the DDR to life in EDK using MicroBlaze 6 and
OPB_DDR. As usual, the thing does not work from the beginning.
I have properly swapped all of the address lines (including the bank
address) and the other buses to match MicroBlaze endianness.

The Xilinx memory test passes with 8-bit transfers up to 4 MB, then it
fails. The other tests (32-bit and 16-bit) fail from the start.

Does anybody have any useful hints on how to identify the problem
(pinout, timing...) in such cases, if possible without an oscilloscope?

Cheers,

Ales

Article: 127287
Subject: Re: Xilinx Dual processor design
From: pablo.huerta@gmail.com
Date: Mon, 17 Dec 2007 03:31:58 -0800 (PST)
Hi,

I'm currently working on a port of Xilkernel for SMP systems. It
already works with 2 to 8 MicroBlazes, but I haven't ported it to the PPC
yet.
Performance is not as good as expected because data caches cannot be
used.

For the PPC I think you can use eCos, which has support for PowerPC and
SMP systems, but I have never used it.

Regards,

Pablo H


On 24 nov, 19:46, naresh <naresh...@gmail.com> wrote:
> Hi all
> I am using Xilinx dual processor reference design suite to develop
> dual processor (xapp996) system on virtex-2 pro.
> I want to port an operating system on to this design
> Is it possible to port an OS that uses this dual-core system.
> Please help if anybody worked with this reference design
>
> Thanks


Article: 127288
Subject: Re: What timing constraint value should be set for input/output module?
From: "KJ" <kkjennings@sbcglobal.net>
Date: Mon, 17 Dec 2007 12:08:30 GMT

"fl" <rxjwg98@gmail.com> wrote in message 
news:01c70172-7df2-47fa-ae28-2744559f3988@i29g2000prf.googlegroups.com...
> Hi,
> There are several modules in a design, which will be assigned to
> several member in the group. If the system clock is 100MHz, what
> timing constraint value should be for each input Pad-to-Setup and
> Clock-to-Out paths? If clock rate is 200MHz or 300MHz, what about the
> constraints? I doubt it should be proportional to the clock rates.
> Thanks a lot.

To figure out the setup time requirement for a particular input signal, you 
have to know the specs of the external device that is driving that input 
signal, plus a few things about the printed circuit board design.  You also 
need to know just how your design and the external devices are 
communicating (i.e. global clock, source synchronous, serdes, etc.)

Assuming a global clock that clocks both the external device and your 
design, the setup time requirement for an input should be set to
Tsu(max) = Tcp(min) - Tco(max) - Tskew(max) - Tpcb(max)

where Tcp = clock period of the clock that is driving the input signal,
Tco = clock-to-output delay of the device that is driving the input signal,
Tskew = clock skew between the two devices,
Tpcb = propagation delay of the signal on the printed circuit board.

Repeat the above for each and every input.

Tskew and Tpcb are both very dependent on the printed circuit board design. 
Given your relatively high (for external signals) clock rate, you'll need to 
pay very close attention to how the clocks and signals all get routed. 
Signal degradation caused by reflections and stubs and such will not have 
time to settle out before the next clock edge occurs.

To figure out the clock to output requirement (again for a global clock 
arrangement) you do essentially the same thing, but now you need to consider 
the setup time requirement of the external device

Tco(max)  = Tcp(min) - Tsu(max) - Tskew(max) - Tpcb(max)

Both of these are assuming that the external devices and your design are 
both operating off of the same edge of the clock and that whatever is 
generating the clock signal is generating a clean, fast edge.

If you're really communicating at 100+ MHz, you might want to consider 
source synchronous clocking instead where there is no 'global' clock, 
instead each interface generates and sends a clock signal along with the 
data.

As you can see from both equations, the clock period does enter into 
figuring out the requirements since it defines the upper bound that you keep 
whittling things away from in order to determine the design requirement.  It 
also depends on the specs for the external devices that you're talking to, 
the signaling method and the printed circuit board design requirements.

Kevin Jennings 
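
A purely illustrative worked example of the first equation (the numbers are
invented, not taken from any datasheet): with a 100 MHz global clock,
Tcp(min) = 10 ns.  If the driving device has Tco(max) = 4 ns, board skew is
0.5 ns and the trace adds 1 ns of propagation delay, then

Tsu(max) = 10 ns - 4 ns - 0.5 ns - 1 ns = 4.5 ns

At 200 MHz with the same driver and board the budget becomes
5 - 4 - 0.5 - 1 = -0.5 ns, i.e. the interface no longer closes timing with a
shared global clock at all.  So the constraints are bounded by the clock
period, but they do not simply scale with it.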



Article: 127289
Subject: Re: global clock (gclk) input at xilinx virtex4 fpga
From: Marc Randolph <mrand@my-deja.com>
Date: Mon, 17 Dec 2007 05:09:45 -0800 (PST)
On Dec 17, 4:18 am, "Denkedran Joe" <denkedran...@googlemail.com>
wrote:
> Hi there,
>
> I'm using a Virtex4 FX100 FPGA (package FF1517) in a board design and I
> wonder if it is enough to use just one gclk input on the device or if it's
> advisable to use more than one due to the large package size...? Does it
> make any difference where I put the gclk input(s)? Thank you for your
> support...
>
> Regards Joe

Howdy Joe,

If the clock trees within a device are designed correctly (almost all of
them are - I haven't heard of any problems with V4), then one clock
should work just fine on a device of any size - and is actually
preferable, since you only have one clock domain to deal with.

Have fun,

   Marc

Article: 127290
Subject: How to use a generic memory with Xilinx ?
From: Nicolas Matringe <nic_o_mat@msn.com>
Date: Mon, 17 Dec 2007 05:11:31 -0800 (PST)
Hello all
I am struggling with ISE and CoreGen to generate a memory block that
would be customizable (mainly in depth & width) through generic
parameters.
The memory block generator datasheet seems to indicate that this is
possible but does not explain how; all there is is the parameter list.

ISE keeps telling me "Port <xxxx> of instance <blk_mem_inst> has a
different type in definition <blk_mem>".
I don't have any component named blk_mem (I had generated one with the
GUI, but I have since deleted it and removed it from the ISE project).

Thanks in advance
Nicolas
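
For comparison, a minimal sketch of the usual generic-friendly alternative:
a RAM inferred directly from VHDL, so depth and width come from generics
(this illustrates the goal rather than fixing the blk_mem error; synthesis
tools such as XST normally map this template to block RAM):

-- Sketch only: single-port RAM, depth and width set by generics.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity generic_ram is
    generic (
        ADDR_WIDTH : integer := 10;   -- depth = 2**ADDR_WIDTH words
        DATA_WIDTH : integer := 16
    );
    port (
        clk  : in  std_logic;
        we   : in  std_logic;
        addr : in  std_logic_vector(ADDR_WIDTH-1 downto 0);
        din  : in  std_logic_vector(DATA_WIDTH-1 downto 0);
        dout : out std_logic_vector(DATA_WIDTH-1 downto 0)
    );
end entity generic_ram;

architecture rtl of generic_ram is
    type ram_t is array (0 to 2**ADDR_WIDTH-1)
        of std_logic_vector(DATA_WIDTH-1 downto 0);
    signal ram : ram_t;
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if we = '1' then
                ram(to_integer(unsigned(addr))) <= din;
            end if;
            dout <= ram(to_integer(unsigned(addr)));  -- synchronous, read-first
        end if;
    end process;
end architecture rtl;

Each instance can then be sized where it is used, e.g. generic map
(ADDR_WIDTH => 12, DATA_WIDTH => 32).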

Article: 127291
Subject: Re: Xilinx MAC experience ?
From: Teo <vit.matteo@gmail.com>
Date: Mon, 17 Dec 2007 05:27:33 -0800 (PST)
On 17 Dic, 08:40, "Robert Lacoste" <use-contact-at-www-alciom-com-for-
email> wrote:
> We are going to start a project in which we will need to use the hardware
> MAC Ethernet module in a Virtex 5 for high speed transferrs (around 400KB/s)
> through 1000BT. I'm looking for experience feedbacks : Anyone who has used
> tihs module without a third-party TCP/IP stack (as we have in mind to
> generate static size UDP frames directly from the logic due to the high
> throughput we need, may be with the help of a microblaze for configuration
> of addresses etc) ? Difficulties ? Quality of the documentation ? Virtex 4
> vs 5 ? Reference designs or good sources of information ?

Hi,
take a look at the GSRD reference design from Xilinx. It uses the
Treck TCP/IP stack. Of course you have to weigh its cost against your
project, but with the GSRD I can obtain more than 700 Mbit/s over
TCP.

Matteo

Article: 127292
Subject: Re: PCI Parallel port card for JTAG / programming?
From: pes <none@none.com>
Date: Mon, 17 Dec 2007 15:37:25 +0100
Anton Erasmus wrote:
> On Tue, 11 Dec 2007 08:57:27 -0800, Peter Wallace <pcw@karpy.com>
> wrote:
> 
>> On Mon, 10 Dec 2007 16:31:44 -0800, ee_ether wrote:
>>
>>> Hi,
>>>
>>> I need a PCI parallel port card since the new PC is "legacy free".  I
>>> use parallel port based JTAG debuggers and programmers for micros
>>> (AVRs), CPLDs (Xilinx/Altera/Lattice) and FPGAs (Xilinx/Altera).
>>>
>>> Which PCI parallel cards work or don't work for you?  Tried it under
>>> Linux?
>>>
>>> Seems like most PCI parallel cards are based on chipsets from Netmos --
>>> any luck with these?
>>>
>>> Thanks.
>> I have a NetMos parallel card and it works fine (at least for Xilinx
>> Parallel cable III JTAG)
> 
> http://www.sunix.com.tw has a few cards that can be mapped to legacy
> ports. These work without problems once mapped to the legacy ports.
> 
> Regards
>   Anton Erasmus

Hi,

I didn't manage to get a Sunix Multi-I/O Universal PCI 4079A to work,
but it's OK with a NetMos PCI 9835 Multi-I/O controller.

Article: 127293
Subject: Re: global clock (gclk) input at xilinx virtex4 fpga
From: Tim Wescott <tim@seemywebsite.com>
Date: Mon, 17 Dec 2007 09:11:35 -0600
On Mon, 17 Dec 2007 11:18:15 +0100, Denkedran Joe wrote:

> Hi there,
> 
> I'm using a Virtex4 FX100 FPGA (package FF1517) in a board design and I
> wonder if it is enough to use just one gclk input on the device or if
> it's advisable to use more than one due to the large package size...?
> Does it make any difference where I put the gclk input(s)? Thank you for
> your support...
> 
> Regards Joe

The multiple clock inputs are for when you need to support a board design 
that has multiple clock domains, not for driving the same clock to the 
whole FPGA.

I would expect that using multiple clock inputs for the "same" clock 
would just give you multiple clock domains within the device, with all 
the attendant trials and tribulations of synchronizing between them, only 
for no reason.

-- 
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html

Article: 127294
Subject: Re: Why the core dynamic power isn't 0 when the toggle rate is 0
From: austin <austin@xilinx.com>
Date: Mon, 17 Dec 2007 07:19:20 -0800
Rebecca,

The only way to save power is to gate the clock, and turn it off before
it gets to the global clock tree.  This may be problematic for timing.

The Xilinx clock tree and the software already turn off unused leaves
of the tree, so saving any more power will require shutting clocks off
completely.

Austin

Rebecca wrote:
> Thank you all very much for your detailed explanation. I know there is
> quite power consumption of the static power in the modern FPGAs.  I am
> talking the dynamic power only here.  Austin points that when toggle
> rate is 0, except power consumption of the clock tree, some internal
> logic, for example, the internal logic in a FF, will still change when
> clock toggled and induce power consumption.  However, this case
> doesn't have much sense.
> I am asking this question because I am considering the design based on
> coarse-grained function blocks, where some blocks may be active and
> others may be idle. I was thinking to use the power consumption at
> togglerate=0 for those idle parts. In current FPGA, is there any
> technology to support a power-efficient solution for this case? For
> example, if the clock for each of the coarse-grain function block is
> provided via a DCM, can a particular clock be disabled and then no
> dynamic power will be consumed for the idle parts?
> 
> Thanks again,
> Rebecca
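
For illustration, a minimal VHDL sketch of the kind of clock gating Austin
describes, using a BUFGCE global buffer whose CE input stops the clock to an
idle block (the entity and signal names are assumptions for this example;
the timing caveat Austin mentions still applies):

-- Sketch only: route the block's clock through a BUFGCE so the whole branch
-- of the global clock tree stops toggling while block_active = '0'.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity gated_clock_region is
    port (
        clk_in       : in  std_logic;  -- e.g. a DCM output
        block_active : in  std_logic;  -- '0' parks the coarse-grained block
        clk_block    : out std_logic   -- clock for the idle-able block only
    );
end entity gated_clock_region;

architecture rtl of gated_clock_region is
begin
    -- BUFGCE enables/disables the global clock without glitching the output.
    bufgce_i : BUFGCE
        port map (
            I  => clk_in,
            CE => block_active,
            O  => clk_block
        );
end architecture rtl;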

Article: 127295
Subject: Re: Why the core dynamic power isn't 0 when the toggle rate is 0
From: Rebecca <pang.dudu.pang@hotmail.com>
Date: Mon, 17 Dec 2007 07:26:11 -0800 (PST)
Austin:
Thank you very much for your reply,
Rebecca


Article: 127296
Subject: Re: Ethernet data rates using Spartan-3 FPGA
From: "ereader" <rats@myhouse.com>
Date: Mon, 17 Dec 2007 07:31:55 -0800
Spartan-3E can do gigabit Ethernet. No problem.
There are cheaper MAC cores than Xilinx's.

"tmpstr" <tmpstrbox@yahoo.com> wrote in message 
news:53e396a7-68ee-4139-bb80-25861068c927@i12g2000prf.googlegroups.com...
> Hi,
> I am trying to build a device that will need to move huge amounts of
> data (~500 MB) from a device under test to PC.
> The current device uses USB 2.0 to transfer data.
> As the device will now be located at a distance from the people using
> it ( in a lab 200 mts away or in another city) ETH interface (100 Mbps
> and above) is being planned.
> The device will be built in small quantities ~100 or so, therefore
> onetime cost also needs to be low.
> I am planning to use a Spartan-3 (XC3S1200E or so) + (softcore +
> TCPIP stack)+ MAC + external PHY
> combination for this device.
>
> My questions are:
> 1) what is the data rate that Spartan-3 can do over the ethernet
> interface ? (can it do 100 Mbps practically ?)
> 2) Xilinx seems to be providing a MAC core that is quite costly
> (5000$) are there any alternative MACs.
>    (rather, is the MAC that is available on opencores website capable
> of gives 100 Mbps and above)
> 3) As Microblaze too costs a significant sum (~20000$), any other
> opensource processor core + TCPIP stack
> people have had success with ?
> 4) Can i put Gigabit MAC inside Spartan FPGAs or will i need to
> choose other FPGA if i plan to upgrade
>    this device to Gigabit ETH later ? (Spartan-3 series is being
> looked at because of its affordability)
>
> Any other solution that you think is capable of giving higher data
> rates on ETH i/f is welcome.
>
> Thanks for your time.
> -mp 



Article: 127297
Subject: Re: Xilinx MAC experience ?
From: "John Aderseen" <John@Aderseen.com>
Date: Mon, 17 Dec 2007 16:35:02 +0100
Hello Rob,

Interesting question. In my experience, even if people from Xilinx tend 
to make you think that it is not all that complicated (take a look at 
the GSRD design indeed - it mixes a PPC405, a multiport SDRAM controller 
and more!), I wouldn't go there unless you are well experienced in VHDL 
design for Xilinx components. You will most probably get it up and 
running, but at those transfer rates things start getting very touchy, 
and you may end up spending lots and lots of time on those little 
details that make all the difference!

Let us know how it turns out !

Regards,
John

"Robert Lacoste" <use-contact-at-www-alciom-com-for-email> a écrit dans le
message de news: 476627e9$0$863$ba4acef3@news.orange.fr...
> Hi all,
>
> We are going to start a project in which we will need to use the hardware
> MAC Ethernet module in a Virtex 5 for high speed transferrs (around
> 400KB/s) through 1000BT. I'm looking for experience feedbacks : Anyone who
> has used tihs module without a third-party TCP/IP stack (as we have in
> mind to generate static size UDP frames directly from the logic due to the
> high throughput we need, may be with the help of a microblaze for
> configuration of addresses etc) ? Difficulties ? Quality of the
> documentation ? Virtex 4 vs 5 ? Reference designs or good sources of
> information ?
>
> Many thanks,
> Yours,
> Robert
>




Article: 127298
Subject: Re: PCI Parallel port card for JTAG / programming?
From: Chris H <chris@phaedsys.org>
Date: Mon, 17 Dec 2007 15:43:40 +0000
In message 
<996cf8d4-a6ce-495e-afb1-b7aa483920ac@i29g2000prf.googlegroups.com>, 
ee_ether <xjjzdv402@sneakemail.com> writes
>Hi,
>
>I need a PCI parallel port card since the new PC is "legacy free".  I
>use parallel port based JTAG debuggers and programmers for micros
>(AVRs), CPLDs (Xilinx/Altera/Lattice) and FPGAs (Xilinx/Altera).
>
>Which PCI parallel cards work or don't work for you?  Tried it under
>Linux?
>
>Seems like most PCI parallel cards are based on chipsets from Netmos
>-- any luck with these?
>
>Thanks.

Why not just get a USB JTAG adapter?  It must work out cheaper, with less 
messing about, in the long run.

-- 
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/




Article: 127299
Subject: Re: How do you initialize Xilinx ISOCM memory using DCR interface
From: Peter Ryser <peter.ryser@xilinx.com>
Date: Mon, 17 Dec 2007 08:28:43 -0800
David,

that should work. You might need to map the ISOCM within 24 address bits 
(the maximum address offset in a branch instruction) of the SRAM, or advise 
the compiler to generate long jumps.

- Peter


David Hand wrote:
> Peter,
> 
> Peter Ryser wrote:
>> Dave,
>>
>>  > In AR# 19804 at http://www.xilinx.com/support/answers/19804.htm, the
>>
>> you need to look at OCM like the caches in a Harvard architecture, i.e. 
>> dedicated blocks for data and instruction accesses. Just as it is not 
>> possible during program execution to read data from the instruction 
>> cache, it is not possible to read data from the IOCM. The IOCM 
>> is actually a little bit worse because the data (code) gets in there 
>> through the debugger or a program like the one you have written, not 
>> through a processor-initiated load operation, i.e. data loaded by the 
>> processor through a load operation can never end up in the instruction 
>> cache.
>>
>> For OCM that means that the compiler and linker strictly need to 
>> separate instruction and data sections. Now, the newlib libraries are 
>> written something like this:
>>     .text
>> data:
>>     .long 0xdeadbeef
>> code:
>>     lis r2,     data@h
>>     ori r2, r2, data@l
>>     lwz r3, 0(r2)
>>
>> Instruction and data are mixed within a .text segment. When the 
>> debugger or your program loads that code into ISOCM the "data" 
>> variable becomes inaccessible to the "code" because of the Harvard 
>> nature of OCM.
> 
> That part makes sense. It's clear I can't load newlib code into ISOCM. 
> The part I don't understand is why I can't make a call from code that is 
> in ISOCM to a newlib function that is left in say SRAM. In other words, 
> am I allowed to do this:
> 
> void MyISOCMFunction() // Function in ISOCM
>     {
>     c = getchar();    // getchar() in usual place (say SRAM).
> 
>     }
> 
>>
>>> AR# 19099, at http://www.xilinx.com/support/answers/19099.htm, seems to 
>>
>> This problem only occurs on the DOCM and not on the IOCM. The best way 
>> to work around this problem is to turn off interrupts and exceptions 
>> before accessing data in the DOCM. If that does not work, then, yes 
>> you have to write the routines that access the DOCM in assembler or 
>> disassemble the compiler generated code and inspect it so that the 
>> instruction sequence doesn't occur.
> 
> 
> 
> Thanks


