At the risk of sounding completely clueless (Hmmmm well I am :) I don't see an EDIF format in the export of Circuit Maker 2000? Also, what's the easiest way to import it into WebPack? Thanks, Fred "Andrew Paule" <lsboogy@qwest.net> wrote in message news:OBQ7b.1591$aT.45652@news.uswest.net... > turn it into an EDIF with no in/out buffers, and use it as a black box. > > Andrew > > juice28 wrote: > > >Maybe a dumb question, but can a schematic built and tested in Circuit Maker > >2000 be imported to Xilinx WebPack and fitted to a CPLD rather than using > >WebPack's schematic entry. > > > >Thanks, > > > >Fred
Article: 60351
rickman wrote: > > Jim Granville wrote: > > > > Peter Alfke wrote: > > > > > > Let me try to repair the damage I did with my impatience: > > > > > > When capturing data that is asynchronous with the clock, the flip-flop > > > will inevitably go metastable sooner or later. > > > Metastability manifests itself in unpredictable additional clock-to-out delay. > > > The user knows the clock frequency, probably knows the data frequency > > > at least roughly, and should know the amount of tolerable extra delay, > > > or the acceptable Mean-Time-Between-Failure. > > > > > > Then one can consult the app note and table and see the connection. > > > MTBF is always inversely proportional to the product of the clock and data frequencies. > > > > > > Last October I published a Xilinx TechXclusives paper which shows that at > > > a 300 MHz clock rate and 50 MHz data rate, the MTBF is one microsecond > > > for a total clock-to-Q plus set-up time of 1.0 ns. MTBF then increases a > > > million times for every additional half nanosecond available as extra delay. > > > At 3 ns, the MTBF is over a billion years. > > > All MTBF values must be scaled by the product of the two frequencies: > > > At 100 MHz clock and ~10 MHz data, the MTBF is, therefore, 15 times > > > longer. > > > > Is this correct? > > Wouldn't the 3.3 ns to 10 ns increase in clock time buy you (10-3.3)/0.5 > > lots of 'a million times' scalings? > > Jim, I have seen your name here before, but I don't know what your > level of understanding of metastability is. So forgive me if I sound > like I am talking down to you. I don't know if you are trying to > discuss fine details of this topic or if you are new to the issues of > metastability. Sorry, was I that unclear? > If you look up references about metastability you will find that the > MTBF time scales linearly with clock and data rate, but exponentially > with settling time. There is a constant for each part of the equation. > These two constants are what characterize a particular FF design and > process used to build it. > Peter's comment is saying that if you allow > just 3 ns settling time with his rates and parts, you will have an MTBF > of a billion years. Certainly you can go longer and get MTBF times > longer than the age of the universe. So yes, 10 ns would be way more > than enough. I think we are saying the same thing. > > > Or do you mean the time to trigger an event, not fail due to one? > > No, a metastable event will happen with a much higher rate based only on > the rate of the clock and data. But it will have no impact on your > circuit if you don't use the output until after the metastability has > settled out. Given a time period, this calculation determines how often > the metastable event will persist and cause an error. I was asking Peter for a clarification; only he can know what he meant. > > > What about this issue: > > With a CLK.Data stream, the CLK pulses that are not > > adjacent to the DATA edges cannot have metastable events, so > > should not enter the scaling? > > This is already considered in the calculation. That is why the > frequencies are multiplied. The assumption is that the two rates are > truly asynchronous and are not correlated in any way. The chance > of them happening in just the right timing relation is a function of how > often each of them is occurring.
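As a rough illustration of how those two scalings combine (a back-of-the-envelope sketch using only the figures Peter gives above, nothing from a datasheet):

# MTBF model discussed above: linear in f_clk * f_data, exponential in settling time.
# Anchor point from Peter's numbers: 300 MHz clock, 50 MHz data, 1.0 ns -> MTBF = 1 us,
# and MTBF grows a million-fold for every additional 0.5 ns of settling time allowed.
mtbf_1ns = 1e-6                                            # seconds, at 1.0 ns total delay
per_half_ns = 1e6                                          # improvement per extra 0.5 ns
mtbf_3ns = mtbf_1ns * per_half_ns ** ((3.0 - 1.0) / 0.5)   # 1e18 s, tens of billions of years
mtbf_100_10 = mtbf_3ns * (300e6 * 50e6) / (100e6 * 10e6)   # 15 times longer at 100 MHz / ~10 MHz

which reproduces the "over a billion years" and "15 times longer" figures quoted above.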
> > > > The best model would seem to be a Data.Aperture AND a Clock.Aperture, > > (both very small, but I don't think they HAVE to be equal ) and > > when they overlap/come closer than a critical time threshold, > > the metastable dice rolls. What happens after the roll depends on > > how far away the next clock is (call this the settling tail) > > This may sound good, but it is no different from the current model and > would be much harder to measure. It is best not to think too hard about > this, but rather to be a bit on the empirical side. That seems to be > one way that Peter is very smart. His measurements seem very good to me > and many others. > It is no good to rationalize about things you really can't measure. I beg to differ. The best understanding comes from finding models that are easy to explain, and can be used in the widest manner, and that also help guide (new) measurements and understanding. Being a designer, I am all for 'hard numbers'. > > > Prediction stats would be on an area-overlap basis, and assuming > > async signals ( non zero phase velocity ) the area product would > > be proportional to > > (Data.Aperture/Data.EdgeT) x (Clock.Aperture/Clock.EdgeT) > > > > Typically, Data.EdgeT = Data H or L time > > Clock.EdgeT = ClkPeriod > > > > This is average trigger/dice roll prediction, but the > > actual 'metastable dice roll profile' will depend on the > > phase velocity, and will have peaks much higher than the average. > > I think "phase velocity" is *way* over the top. Before improving on the > current formula, it would be good to find something wrong with it. > Is there anything about it that falls short? Above you state "The assumption is that the two rates are truly asynchronous and are not correlated in any way." If the model cannot cope with anything other than this hypothetical ideal, that's rather 'falling short'? The concept of phase velocity is not way over the top, as it introduces the important concept/point of what to do when 'truly async' does not apply, and also what to do if your design needs PEAK rather than simple average tolerance. In some designs, that aspect will be important: a system can have an 'average MTBF' number of some years, but still fail a number of times in one hour. IIRC Austin L. gave a good real-world example? > > > What if your system hits/moves very slowly over this 'phase jackpot'? > > > > Here, area-mitigation stats are not much use, and you have to rely > > mainly on the settling tail to the next clock ( and maybe a small amount on > > the natural system jitter ) > > IIRC Peter quoted 0.05 fs virtual aperture time, and > > natural jitter is likely to be some few ps - certainly large relative to > > the aperture? > > All of this is really just a way to relate what is happening. Since the > noise in the circuit is relatively large, I would expect tons more > jitter in the "window" than the actual width. So really the fs window > is just a concept, not a very real event. Agreed, that's why I called it a virtual aperture. The idea of Clock and Data apertures also gives the correct dimensions to the answer. > > > An experimental setup designed to focus on this phase jackpot > > would give interesting results, and allow peak estimates, as well as a > > higher > > occurrence for more useful Tail stats gathering. > > > > > > Summary: Best predictor model would have Data.Aperture, Clock.Aperture > > and a Settling Tail. > > Exact nature of the settling tail is system measurable over a range > > of a few decades, but extrapolation is dangerous. 
> > Can you explain how this would be better than the current model? See above. I don't see it as radically different than the current thinking, ( the tail model is the same, only I'd be more cautious about far-extrapolation ) but it does allow better handling of peak/average predictions, and it leads to real measurements to define these two. > > > > So, in short: > > > Metastability is unavoidable. All attempts to avoid it are inherently > > > doomed, but the quantitative impact of metastability is quite tolerable. > > > > > > That's it. > > > > Agreed. I still think from an 'average user' perspective, that a > > specific 'design cell' approach would help. > > > > Also, from a technical detail viewpoint, implementing a > > 'regenerative latch triplet' [Pre-Latch + Flip Flop] or [Dual Flip Flop] > > in a single local space, removes routing delays from one metastable > > tail. > > > > It does NOT 'fix' metastable behaviour, but it does encapsulate it, > > and move it to the best the silicon can provide, and eliminates > > the potentially variable routing delays. > > It also allows for future technical research and improvements to > > reduce the apertures, and the settling tail. > > Or you can just use the double FF approach and require a routing time > for this path that is at least 3 ns less than the clock period. Again, > simple, empirical and effective. But you have to know enough to take those steps, and it is still exposed, more than encapsulated. I'm thinking of the newest breed of graduate, and they represent more the average user than you or I :) -jgArticle: 60352
I'm looking for an IBIS model for Xilinx's ACE MPM, package BG388.
Article: 60353
I am just about to go through a 115-page introduction tutorial on the XCESS website for using the Xilinx WebPack 4.x edition. However, I will be using the ISE Foundation 4.x edition and want to know if I am wasting my time reading the entire WebPack tutorial to learn how to use the ISE Foundation edition. I am assuming it's all the same, with WebPack just having fewer features. Anyone who is familiar with both editions and can let me know whether to go ahead with this or STOP - and find a tutorial at Xilinx instead (I need to install the software for their tutes, I think) - would be much appreciated. Initial stages will be purely schematic entry. VHDL will come later. Regards, Dave
Article: 60354
bobi wrote: > I want to start with Xilinx FPGAs. How does one start cheap? I see a > development kit from Digilent for $99 US. > It is for a Spartan IIE. But then I see there is a Spartan III but no > development kit for this. > > What are the options? Can the Spartan IIE 200K gate do a lot of designs? Is > that enough gates? > Depending on your budget, I would advise you to take a look at the modular development boards at http://www.burched.biz/. As you become more experienced with FPGAs and your needs grow, these boards will offer you the opportunity to expand.
Article: 60355
Hi all, When I ran implementation in Project Manager (Xilinx Foundation F2.1i) I got the following errors: ERROR:NgdHelpers:335 - logical net "SR<11>" has multiple drivers ERROR:NgdHelpers:346 - input pad net "SR<11>" has an illegal connection The same error appeared for nets SR11 to SR4, which belong to bus SR[11:0] - four branches of this bus are connected to inputs of latches, e.g. D[7:0]. I'm not using any input pads for those nets because they are only internal buses, connecting block to block - so there is no pad in this schematic. I don't understand these errors, which say: "input pad net "SR<11>" has an illegal connection" - what input pad ??? Do you know what could be wrong with this and how I can avoid it? Thanks in advance Sebastian Duszyk
Article: 60356
"Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:<HHM7b.410438$Ho3.64641@sccrnsc03>... > "ykagarwal" <yog_aga@yahoo.co.in> wrote in message > news:4d05e2c6.0309092246.2ead33f0@posting.google.com... > > (snip regarding pipelined divider) > > > well my requirement is too for double precision .. would u like to > > suggest me a pipelined > > comp arch book for this purpose.. anyway what is the best way, that's > > what i want to explore first. > > The one I have here is "The Architecture of Pipelined Computers" by Kogge. > > > Xilinx coregen divider core doesn't offer that much width in its > > pipelined divider .. don't know why > > may be xilinx gurus can justify .. anybody knows which algorithm they > > are using ? > > I don't know that, either. It might be because they didn't imagine anyone > wanting to put something like that into an FPGA. They are likely pretty > big, but in some cases it might be worth the size. > > -- glen fine, thanks i cud find the book (bit old edition probably) here but there is no detail abt pipelined divider as such .. anyway if somebody comes across the thing may suggest. and xilinx probably shud give a sequential version at least for larger width (i've made it anyway) --ykaArticle: 60357
Hi, I have a moderately big design (~250K equivalent ASIC gates) in a Virtex FPGA. The post place & route simulation in Modelsim takes many hours to simulate about 2-3 ms of input data. This is a time-killing step in my product development lifecycle. Moreover, if some timing errors occur (even after analyzing P&R static timing), more syn-map-par-sim iterations are required with modified timing constraints or higher effort levels etc. There are some recent developments in EDA tools like Mentor Graphics' VStation which cater to problems like the one I am facing by "actually" simulating on the target hardware. But they are far too costly (I don't know what makes EDA tool companies set such a high price for their products). Is there any alternate way of simulating my design? Regards, Nagaraj
Article: 60358
Hi - On 11 Sep 2003 03:03:28 -0700, nagaraj_c_s@yahoo.com (Nagaraj) wrote: >Hi, > I have a moderately big design (~250K equivalent ASIC gates) in >Vertex FPGA. The post place & route simulation in Modelsim takes hours >together for simulating about 2-3 ms of input data. This is a time >killing step in my product development lifecycle. Moreover if some >timing errors occur (evenafter analyzing P&R static timing) more >syn-map-par-sim iterations are required with modified timing >constraints or higher effort levels etc. Are you implying that you see timing failures in post-route simulation that static analysis doesn't reveal? Have you reported this to Xilinx? > There are some recent developments in EDA tools like Mentor >Graphics' VStation which cater to problems like I am facing by >"actually" simulating on the target hardware. But they are toooooooooo >costly (I don't know what makes EDA tool companies to fix such a high >price for their products) > Is there any alternate way of simulating my design ? What's your motivation for doing post-route simulation? Why not just simulate your pre-synthesis code, and use post-route static timing analysis to confirm the timing? I've been doing FPGA design for years, and can count the number of times I've done post-route simulation on the fingers of one hand. It's something I resort to only if I suspect that the place and route tool has a bug that's producing a faulty design. (I mean, this is something I *always* suspect to one degree or another, but usually I can allay my fears in some less time-consuming a way--looking at the post-map trim list, using the FPGA editor, etc.) And if I think the synthesizer is causing problems, I'll do a post-synthesis simulation. The value (or lack thereof) of post-route simulation has been debated here before; take a look in the archives. Bob Perlman Cambrian Design WorksArticle: 60359
This is not my project. It's GB's project. Still you can not build hardware around an unknown sensor. You have to pick an imager and then build the hardware/firmware around it. james On Wed, 10 Sep 2003 20:57:32 -0500, Andrew Paule <lsboogy@qwest.net> wrote: >Use the micro to set up the packets in an FGPA/ASIC under isocronous >control, and stream them out from there, if you can deal with the low >data/frame rates - I used to build large format (4 x 5 and hasseblad) >camera digital inserts using big CCD's - USB (even at 12Mbs is too slow >for this stuff ( a 1Mpixel at 8 bit will require 2/3 sec + overhead to >empty one frame) - tell you boss it sounds cool, but you need 1394 >(firewire) or SCSI to do this worth a DS. If you simply want to capture >one frame and empty it - consider dumping it to RAM and then out from >there. Leaving stuff on a sensor - CCD or CMOS (yes CCD's are CMOS, >both n and p type are built) results in large dark currents that make >them unusable. At 2/3 second, well depth will be a large consideration >here - electrons like to move around. > >Andrew > >james wrote: > >>On Sun, 07 Sep 2003 15:03:39 GMT, "GB" <donotspam_grantbt@jps.net> >>wrote: >> >> >> >>>Hi, >>> >>>I'm a firmware guy pulled into a project well out of my area of >>>expertise. My boss wants to build (essentially) a digital camera >>>using an image sensor chip (1600x1200) and output it's data >>>"as fast as possible" using USB2.0. His initial concept, being >>>that I'm a firmware guy, was to use a "really fast micro." >>>Normally the company is involved with small 8-bitter micro >>>projects, so you can see I'm well out of my normal bounds. >>> >>>Now this seems like a pretty big stretch to me... and I don't see >>>how it can be done without the speed of hardware (the image >>>chips all seem to have clock speeds in the tens of MHz and the >>>USB2 transfer rate is 480Mbps (theor.). Do aspects of this >>>project require an FPGA to keep the data "as fast as possible?" >>>If we use and FPGA for camera-to-RAM and then use a >>>micro for the USB2 part, what kind of fast micros can >>>move data at that rate? >>> >>>Also, this is something that I am sure we will have to contract >>>out, so if you have any past experience with this, please let >>>me know your thoughts (and if you are available). >>> >>>Thanks! >>> >>> >>> >>^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ >>1600x 1200 is essentially a 2 megapixel camera! >> >>1) First step is to determine what the camera is going to be used for! >> >>Terrestial or astronomical or video photography >> >>2) Pick an imager! Either Sony, TI, Kodak, or Panasonic to name a few. >> >>3) From the Imager specs you can derive how fast the data can be >>clocked out of the imager. Most imagers will transfer the image area >>into a serial register one line at a time. How fast this is depends on >>how fast you can clock the serial register. Transfer speeds differ >>from vendor to vendor. >> >>4) Then build the circuitry around the imager based on its ability to >>transfer the full image as fast as you want and that meet your cost >>goals. >> >>Again depending on what you determine as reasonably fast will effect >>the cost of the imager along with its size. Another consideration will >>be the speed of the ADC. That can slow things down also. Even if you >>can clock the serial register of the imager out at 20Mhz rate, if the >>ADC sasmple rate is 10 MHz that is as fast as you can get the pixel >>data out. 
>> >>IF your imager's max clock frequency for the serial register is 20 >>MHz, you can clock a 1600-pixel row out in about 80 us. Or the whole >>image area out of the chip in about 100 ms. So your microC or FPGA >>will have to read the ADC once every 50 ns or so during the readout >>period. >> >>There are some CPU cores as well as USB cores that can drop into an >>FPGA. You can build a large FIFO or add onboard flash to store the >>picture. >> >>It is not crazy at all; in fact it is quite doable. The two key items >>in a digital camera are the imager and the ADC. All the rest is digital >>hardware that is well suited for an ASIC or FPGA. >> >> >>james
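To make the readout arithmetic quoted above concrete, a small sketch (the 20 MHz serial-register clock and 1600x1200 format are the figures from the thread; blanking and other overhead are ignored):

# Readout time for a 1600x1200 imager at a 20 MHz serial-register clock.
pixel_clock = 20e6              # Hz, maximum serial-register rate quoted above
cols, rows = 1600, 1200
t_pixel = 1.0 / pixel_clock     # 50 ns per pixel -> one ADC sample every 50 ns
t_row = cols * t_pixel          # 80 us per row
t_frame = rows * t_row          # 96 ms per frame, i.e. the "about 100 ms" above

Article: 60360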
Is the following achievable?
- Two PCs render graphics and output it to their graphics cards' DVI outputs.
- An FPGA-based board reads these two data streams (maybe using Silicon Image receivers?) and processes the data (basically a comparison of pixel values).
- The processed data is output via DVI. This output could be used as an input to another FPGA board and so forth.
What components (FPGAs, DVI receivers, transmitters) would one need? Thanks for all answers.. Andre
Article: 60361
Dear fellows, according to http://www.xilinx.com/ise/embedded/gdb_debugger.htm Xilinx has slightly modified the gdb used for debugging of the VirtexIIPro PPC405. Does anybody know whether there are sources or at least patches for this gdb lying around somewhere? I couldn't find anything so far and I guess that they are not publicly available. Actually, I would like to use the debugger under native Linux with the ddd frontend rather than inside VMware with Windows/Cygwin as I'm doing it right now. So far, I tried to use a standard gdb-5.3. It is working "a little bit", but does show some problems (of course, obviously the reason why Xilinx modified it). Thanks for any comments! Regards, Mario
Article: 60362
Hi there, I have a design that contains 12K logic cells and 300K bits of memory and runs at 5 MHz. I compiled it for an EP20K1500 device and it worked (tested on FPGA). Then I wanted to switch 8 output bits from pin locations AF1, AF2, AF3, AF4, G4, G5, G6 and H2 to DAC0 pins (AE1, AD1 -- AD6 and AC6). I had the smart compilation option turned on when I successfully compiled and tested the design. So moving the 8 signals to new pin locations should be as easy as a little top-level re-wiring. No need to recompile and re-fit the design. However, the tool spent one hour re-doing the whole synthesis and fitting. Worst of all, the compiled design does NOT work on the FPGA at all! When I say "does NOT work at all", it means not only that I cannot get anything from the DAC0 pins, but also that I cannot bootload the FPGA (I have a bootloader in the design) after I download the new design. Somehow during the re-compilation, the bootloader (8051 processor, instruction ROM and RAM) is affected, which should have nothing to do with the 8 wires I changed at the top level. Any help/suggestions are greatly appreciated. Yi Zhang, ENQ Semi.
Article: 60363
In article <umnslv4dggc3i11nrfikf1knlf9t1ldjob@4ax.com>, Robert Myers <rmyers@rustuck.com> wrote: >The enormous throughput of GPU's, special-purpose processors like >GRAPE, and speculation about PlayStation keep my interest in what >might be possible if you wanted to build a special-purpose compute >engine. Special-purpose compute engines are unavoidably rather expensive; http://www.xilinx.com/apps/sp3app.htm gives a couple of interesting directions to look in for the technology level that's actually close to affordable. The second-biggest Spartan3 chip has 96 18x18->36 multipliers, which gives you eight 54x54 and a 72x72 to work with. A medium-sized XC2VP50 Virtex2 Pro has 232 of the 18x18 multipliers, and a pair of PPC405 CPUs -- and sixteen Hypertransport links -- but probably costs as much as a Madison 1300MHz/3MB (i.e. $2000 or so). But I don't know what speed you can clock that great array of multipliers at. [note I've cross-posted this to comp.arch.fpga in case they know the speed and cost details off the top of their heads; the idea is to implement an array of double-precision FMA units on an FPGA, to see how they'd compare to the few much-faster-clocked FMAs on high-end CPUs. I don't know how exotic the Spartan3/4000 or the XC2VP50 are] Tom
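For what it's worth, the multiplier count works out if you tile the wide multiplies out of 18x18 blocks; a sketch of the usual schoolbook decomposition (ignoring sign handling and the adder tree, which would live in fabric):

# 18x18 hardware multipliers needed to build a wider multiplier by
# splitting each operand into 18-bit limbs (schoolbook partial products).
def mults_18x18(width):
    limbs = -(-width // 18)                      # ceil(width / 18)
    return limbs * limbs

needed = 8 * mults_18x18(54) + mults_18x18(72)   # 8*9 + 16 = 88
print(needed <= 96)                              # fits the 96 multipliers quoted above

Article: 60364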
I meant to write a lengthy rebuttal and explanation, but rickman said it all. Thanks! Peter Alfke
Article: 60365
In article <v9x*dzc2p@news.chiark.greenend.org.uk>, Thomas Womack <twomack@chiark.greenend.org.uk> wrote: >>The enormous throughput of GPU's, special-purpose processors like >>GRAPE, and speculation about PlayStation keep my interest in what >>might be possible if you wanted to build a special-purpose compute >>engine. > >Special-purpose compute engines are unavoidably rather expensive; >http://www.xilinx.com/apps/sp3app.htm gives a couple of interesting >directions to look in for the technology level that's actually >close to affordable. > >The second-biggest Spartan3 chip has 96 18x18->36 multipliers, which gives >you eight 54x54 and a 72x72 to work with. A medium-sized XC2VP50 >Virtex2 Pro has 232 of the 18x18 multipliers, and a pair of PPC405 >CPUs -- and sixteen Hypertransport links -- but probably costs as much >as a Madison 1300MHz/3MB (i.e. $2000 or so). But I don't know what >speed you can clock that great array of multipliers at. I've done some back-of-the-enveloping. It looks reasonable to make a vector processor that is about 1/2 to 1/4 the throughput of the Earth Simulator vector processor in a large FPGA (e.g. V2Pro), e.g. 8 lanes, 8 FP MACs/cycle, but only ~250 MHz. The interesting parts really come in not in the computation but in the communication: how do you build a low-latency, high-throughput, flexible network to connect a whole BUNCH of computing elements? This is where the FPGAs get interesting, with 3 Gbps SERDESes being standard and 10 Gbps SERDESes on the near-term horizon. With cut-through routing, the latency per hop is fairly low (~20-30 cycles at the 3 GHz stream clock), so a network could be made that isn't full connectivity like a crossbar, but is rather fast and routed. >[note I've cross-posted this to comp.arch.fpga in case they know the >speed and cost details off the top of their heads; the idea is to >implement an array of double-precision FMA units on an FPGA, to see >how they'd compare to the few much-faster-clocked FMAs on high-end >CPUs. I don't know how exotic the Spartan3/4000 or the XC2VP50 are] >Tom -- Nicholas C. Weaver nweaver@cs.berkeley.edu
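A quick sanity check on the "1/2 to 1/4 of an Earth Simulator processor" estimate (a sketch; a fused multiply-add is counted as two flops, and 8 GFLOPS is taken as the commonly quoted peak of one Earth Simulator arithmetic processor):

# Peak throughput of the sketched FPGA vector unit vs. one Earth Simulator AP.
macs_per_cycle = 8                                  # 8 lanes, 8 FP MACs per cycle
clock_hz = 250e6
fpga_gflops = macs_per_cycle * 2 * clock_hz / 1e9   # ~4 GFLOPS peak
es_gflops = 8.0                                     # assumed ES arithmetic-processor peak
print(fpga_gflops / es_gflops)                      # ~0.5, within the quoted 1/2 .. 1/4 range

Article: 60366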
Correct, for the XC3S50J devices, you'll want to keep ISE5.2i around. What does the "i" stand for? For me, it stands for "I don't know". :-). I believe that it stands for ISE (Integrated Software Environment) to distinguish it from earlier "Foundation" software. Still, just a guess. --------------------------------- Steven K. Knapp Applications Manager, Xilinx Inc. Spartan-3/II/IIE FPGAs http://www.xilinx.com/spartan3 --------------------------------- Spartan-3: Make it Your ASIC "Eric Smith" <eric-no-spam-for-me@brouhaha.com> wrote in message news:qh7k4g429a.fsf@ruckus.brouhaha.com... > "Steven K. Knapp" <steve.knappNO#SPAM@xilinx.com> writes: > > WebPack ISE 5.2i only supports the device called the XC3S50J (note the 'J'), > > which has no block RAM or multipliers. > > > > WebPack ISE 6.1i, due out at the end of September, supports the XC3S50 (no > > 'J'), which has four 18Kbit block RAMs, four 18x18 hardware multipliers, > > and two Digital Clock Managers (DCMs). > > Will 6.1i and future versions continue to support the XC3S50J? Or do we > need to keep the 5.2i release around for those? > > What does the "i" at the end of the software version mean, anyhow?Article: 60367
Sorry - no offense meant. Andrew james wrote: >This is not my project. It's GB's project. > >Still you can not build hardware around an unknown sensor. You have to >pick an imager and then build the hardware/firmware around it. > >james > > >On Wed, 10 Sep 2003 20:57:32 -0500, Andrew Paule <lsboogy@qwest.net> >wrote: > > > >>Use the micro to set up the packets in an FGPA/ASIC under isocronous >>control, and stream them out from there, if you can deal with the low >>data/frame rates - I used to build large format (4 x 5 and hasseblad) >>camera digital inserts using big CCD's - USB (even at 12Mbs is too slow >>for this stuff ( a 1Mpixel at 8 bit will require 2/3 sec + overhead to >>empty one frame) - tell you boss it sounds cool, but you need 1394 >>(firewire) or SCSI to do this worth a DS. If you simply want to capture >>one frame and empty it - consider dumping it to RAM and then out from >>there. Leaving stuff on a sensor - CCD or CMOS (yes CCD's are CMOS, >>both n and p type are built) results in large dark currents that make >>them unusable. At 2/3 second, well depth will be a large consideration >>here - electrons like to move around. >> >>Andrew >> >>james wrote: >> >> >> >>>On Sun, 07 Sep 2003 15:03:39 GMT, "GB" <donotspam_grantbt@jps.net> >>>wrote: >>> >>> >>> >>> >>> >>>>Hi, >>>> >>>>I'm a firmware guy pulled into a project well out of my area of >>>>expertise. My boss wants to build (essentially) a digital camera >>>>using an image sensor chip (1600x1200) and output it's data >>>>"as fast as possible" using USB2.0. His initial concept, being >>>>that I'm a firmware guy, was to use a "really fast micro." >>>>Normally the company is involved with small 8-bitter micro >>>>projects, so you can see I'm well out of my normal bounds. >>>> >>>>Now this seems like a pretty big stretch to me... and I don't see >>>>how it can be done without the speed of hardware (the image >>>>chips all seem to have clock speeds in the tens of MHz and the >>>>USB2 transfer rate is 480Mbps (theor.). Do aspects of this >>>>project require an FPGA to keep the data "as fast as possible?" >>>>If we use and FPGA for camera-to-RAM and then use a >>>>micro for the USB2 part, what kind of fast micros can >>>>move data at that rate? >>>> >>>>Also, this is something that I am sure we will have to contract >>>>out, so if you have any past experience with this, please let >>>>me know your thoughts (and if you are available). >>>> >>>>Thanks! >>>> >>>> >>>> >>>> >>>> >>>^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ >>>1600x 1200 is essentially a 2 megapixel camera! >>> >>>1) First step is to determine what the camera is going to be used for! >>> >>>Terrestial or astronomical or video photography >>> >>>2) Pick an imager! Either Sony, TI, Kodak, or Panasonic to name a few. >>> >>>3) From the Imager specs you can derive how fast the data can be >>>clocked out of the imager. Most imagers will transfer the image area >>>into a serial register one line at a time. How fast this is depends on >>>how fast you can clock the serial register. Transfer speeds differ >>> >>> >>>from vendor to vendor. >> >> >>>4) Then build the circuitry around the imager based on its ability to >>>transfer the full image as fast as you want and that meet your cost >>>goals. >>> >>>Again depending on what you determine as reasonably fast will effect >>>the cost of the imager along with its size. Another consideration will >>>be the speed of the ADC. That can slow things down also. 
Even if you >>>can clock the serial register of the imager out at 20Mhz rate, if the >>>ADC sasmple rate is 10 MHz that is as fast as you can get the pixel >>>data out. >>> >>>IF your imager's max clock frequency for the serial register is 20 >>>MHz., you can clock a 1600 pixel row out in about 80 uS. Or the whole >>>image area out of the chip in about 100 mS. So your microC or FPGA >>>will have to read the ADC once every 50 nS or so during the readout >>>period. >>> >>>There are some CPU cores as well as USB cores that can drop into an >>>FPGA. You can build a large FIFO or add onboard flash to store the >>>picture. >>> >>>It is not crazy at all in fact it is quite doable. The two key items >>>in a digital camera is the imager and the ADC. All the rest is digital >>>hardware that is well suited for an ASIC or FPGA. >>> >>> >>>james >>> >>> >>> >>> >>> > > >Article: 60368
Bob Perlman wrote: > I've been doing FPGA design for years, and can count the number of > times I've done post-route simulation on the fingers of one hand. I agree. If an FPGA design is 100% synchronous, sims functionally, and passes post-route static timing, post-synth/route simulation is rarely indicated. -- Mike TreselerArticle: 60369
Followup to: <cd4a30b8.0309110632.7ee5a488@posting.google.com> By author: enq_semi@yahoo.com (enq_semi) In newsgroup: comp.arch.fpga > > However, the tool spent one hour re-do the whole synthesis and > fitting. Worst of all, the compiled the design does NOT work on FPGA > at all! > > When I say "does NOT work at all", it means not only I cannot get > anything from the DAC0 pins, but also I cannot bootload the FPGA (I > have bootloader in the design) after I download the new design. > > Somehow during the re-compilation, the bootloader (8051 processor, > intruction ROM and RAM) are affected, which should have nothing to do > with the 8 wires I changed at the top level. > Try removing the db directory and recompiling. -hpa -- <hpa@transmeta.com> at work, <hpa@zytor.com> in private! If you send me mail in HTML format I will assume it's spam. "Unix gives you enough rope to shoot yourself in the foot." Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64Article: 60370
Followup to: <bjmki8$1dam$1@justice.itsc.cuhk.edu.hk> By author: "clsan" <clsan@cuhk.edu.hkk> In newsgroup: comp.arch.fpga > > I have tried using the console uart instead of debug uart, it still can't > work. > > I have connected the mouse to the board by a serial cable. Any one have any > document that I can refer to? > Thank you very much. > Well, you need a null modem in between (the ports on the Nios card are wired DCE; the mouse expects to be connected to a DTE.) Then, you need to find the exact combination of control signal outputs that the mouse expects. This is how the mouse is powered!! -hpa -- <hpa@transmeta.com> at work, <hpa@zytor.com> in private! If you send me mail in HTML format I will assume it's spam. "Unix gives you enough rope to shoot yourself in the foot." Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64Article: 60371
Check these IEEE references:
Efficient designs of unified 2's complement division and square root algorithm and architecture. Sau-Gee Chen; Chieh-Chih Li. TENCON '94: IEEE Region 10's Ninth Annual International Conference ('Frontiers of Computer Technology'), 22-26 Aug. 1994, pp. 943-947 vol. 2.
A new pipelined divider with a small lookup table. Jong-Chul Jeong; Woong Jeong; Hyun-Jae Woo; Seung-Ho Kwak; Woo-Chan Park; Moon-Key Lee; Tak-don Han. Proceedings of the 2002 IEEE Asia-Pacific Conference on ASIC, 6-8 Aug. 2002, pp. 33-36.
Efficient semisystolic architectures for finite-field arithmetic. Jain, S.K.; Song, L.; Parhi, K.K. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 6, no. 1, March 1998, pp. 101-113.
Article: 60372
I have a new idea of how to simplify the metastable explanation and calculation. Following Albert Einstein's advice that everything should be made as simple as possible, but not any simpler: We all agree that the extra metastable delay occurs when the data input changes in a tiny timing window relative to the clock edge. We also agree that the metastable delay is a strong function of how exactly the data transition hits the center of that window. That means we can define the width of the window as a function of the expected metastable delay. Measurements on Virtex-IIPro flip-flops showed that the metastable window is:
• 0.07 femtoseconds for a delay of 1.5 ns.
• The window gets a million times smaller for every additional 0.5 ns of delay.
Every CMOS flip-flop will behave similarly. The manufacturer just has to give you the two parameters (x femtoseconds at a specified delay, and y times smaller per ns of additional delay). The rest is simple math, and it even applies to Jim's question of non-asynchronous data inputs. I like this simple formula because it directly describes the actual physical behavior of the flip-flop, and gives the user all the information for any specific systems-oriented statistical calculations. Peter Alfke, Xilinx Applications
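The "simple math", spelled out as a short sketch (the function and parameter names are made up for illustration; the two constants are the Virtex-IIPro numbers above):

# MTBF = 1 / (f_clk * f_data * window), where the metastability window
# shrinks a million-fold for every additional 0.5 ns of settling delay allowed.
def mtbf_seconds(f_clk, f_data, delay_ns, w0=0.07e-15, t0_ns=1.5):
    window = w0 / 1e6 ** ((delay_ns - t0_ns) / 0.5)   # window width in seconds
    return 1.0 / (f_clk * f_data * window)

years = mtbf_seconds(300e6, 50e6, 3.0) / 3.15e7       # ~3e10 years at 3 ns total delay

Article: 60373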
"Steven K. Knapp" wrote: > > Correct, for the XC3S50J devices, you'll want to keep ISE5.2i around. > > What does the "i" stand for? For me, it stands for "I don't know". :-). I > believe that it stands for ISE (Integrated Software Environment) to > distinguish it from earlier "Foundation" software. Still, just a guess. IIRC, the 'i' was added on the first version that was Internet aware. I can't say what features were made possible by having web connectivity, but they have never dropped the designation. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 60374
Andrew Paule <lsboogy@qwest.net> writes: > USB (even at 12Mbs is too slow > for this stuff ( a 1Mpixel at 8 bit will require 2/3 sec + overhead to > empty one frame) - tell you boss it sounds cool, but you need 1394 > (firewire) or SCSI to do this worth a DS. According to the Subject: line he is intending to use USB2. That is 480MBit/s, according to http://www.usb.org/faq/ans2#q1 . Should be fast enough. -- Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/ Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Blacksmith - hardware runs the world, software controls the hardware code generates the software, have you coded today?
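For completeness, the frame-rate arithmetic (a sketch; 8 bits per pixel and the nominal bus rates, ignoring USB protocol overhead):

# Time to move one 1600x1200, 8-bit frame over full-speed USB vs. USB 2.0.
bits_per_frame = 1600 * 1200 * 8     # ~15.4 Mbit per frame
t_usb11 = bits_per_frame / 12e6      # ~1.3 s per frame at 12 Mbit/s
t_usb2 = bits_per_frame / 480e6      # ~32 ms per frame, i.e. roughly 30 frames/s peak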