>Get some development boards and actually build a few things. Over here,
>http://digilent.us/ and http://www.knjn.com/ are two sources, although I
>see that KNJN does have an EU outlet now, over at
>http://www.knjn.com/eu/ShopBoards_USB2.html
>
>At any rate, build a few real things and not just syntax exercises.
>Document some of the successes (and problems, with how you solved them!)
>on a blog that you could point potential employers to.
>
>As regards experience, yes employers would love to get somebody with a
>few years but those folks are not always available. If you enjoy
>"playing with" FPGAs enough that you work with them for the fun and the
>challenge then that may be enough to get an employer to take a look.
>
>Once you have done a few personal projects, go ahead and apply to some
>of those "5 years experience" positions. Be honest about what you've
>done and are doing; you're likely to get at least some positive
>responses.
>
>--
>Rich Webb    Norfolk, VA

Hello, thanks for the response!
I've already got a Spartan-3E kit, am now developing my own DDR controller (since MIG is device dependent), and my next task is to do a MAC for the external PHY on the board and then send packets using UDP. I will certainly post my experiences on my blog, but the idea that I could go for a job that requires 5 years of experience is really worth a try :)
Looking to the future: once the student is able to use memory (SDRAM, DDR RAM, BRAM, etc.), LAN, and VGA, then what? Buy a dev board costing n thousand with PCI-E and other useful stuff? That gets too expensive...

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147926
On Jun 2, 3:08 pm, "Socrates" <socconf@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
(snip: the post quoted here appears in full above)

Do not restrict your search to the US market. Getting the appropriate visa is extremely difficult, borderline impossible. But as a citizen of the EU, you can try in many countries: Germany, Scandinavia, the Netherlands, and the UK (most of which accept English as a working language). US Immigration has put up an enormous hurdle, but luckily, FPGAs are being used all over the world, as you can detect in this newsgroup.
Good luck!
Peter Alfke
Article: 147927
Socrates <socconf@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
>>Get some development boards and actually build a few things. Over here,
>>http://digilent.us/ and http://www.knjn.com/ are two sources, although I
>>see that KNJN does have an EU outlet now, over at
>>http://www.knjn.com/eu/ShopBoards_USB2.html
(snip)

> Hello, thanks for response!
> I've already got a Spartan-3E kit, now developing my own DDR controller
> (since MIG is device dependent) and my next task is to do a MAC for
> external PHY on the board then send packets using UDP. I will post my
> experiences on my blog surely, but the idea that I could go for the job
> that requires 5 years experience really worth taking a try :)

If you do those projects, you should be ready to apply for some FPGA related positions. Maybe not for one that requires the five years experience, but maybe as an assistant to such a position. Then after a few years, you will have enough experience and the company will know you well.

> If we look to the future: the student is able to use memory
> (sdram, ddram, bram, etc), LAN, VGA, then what? Buy a n-thousand worth
> devboard with PCI-E and other useful stuff? It is too expensive then...

If you develop the DDR and MAC from scratch you should be in good shape. I believe that there are some around that could be adapted for use with a little less work than starting from nothing. Most important is to learn to think in terms of hardware, and not as software.

-- glen
Article: 147928
I've got a Spartan-6 design with two different clocks. They're generated from the same 20 MHz reference on two PLLs, and are called WB_SYS.CLK_I (50 MHz) and clk128. They're treated as entirely asynchronous by the design logic, and data passing between them is safe by design, and so in order to get rid of the slew of timing errors thrown by MAP I've added the following to my UCF:

=============================================================
NET "WB_SYS.CLK_I" TNM_NET = FFS "BUS_FLOPS";
NET "clk128" TNM_NET = FFS "FAST_FLOPS";
TIMESPEC "TS_ASYNC1" = FROM "BUS_FLOPS" TO "FAST_FLOPS" TIG;
TIMESPEC "TS_ASYNC2" = FROM "FAST_FLOPS" TO "BUS_FLOPS" TIG;
=============================================================

When I try to map this design, I'm getting two different, but possibly related, batches of timing errors. I've got a whole mess like this first one, which always represents a hold violation from a flop output to the address input of a LUTRAM, both on the same clock.

=============================================================
Slack (hold path):      -0.152ns (requirement - (clock path skew + uncertainty - data path))
  Source:               M68K_BRIDGE/WB_OUT.ADDR_2 (FF)
  Destination:          CHANNEL/CHANS[4].CHAN_ON.CHAN/FIFO_FLT/Mram_ram_k_lo12/DP (RAM)
  Requirement:          0.000ns
  Data Path Delay:      -0.152ns (Levels of Logic = 0)
  Positive Clock Path Skew: 0.000ns
  Source Clock:         WB_SYS.CLK_I rising at 20.000ns
  Destination Clock:    WB_SYS.CLK_I rising at 20.000ns
  Clock Uncertainty:    0.000ns
=============================================================

The second is a batch of setup violations located inside of the instantiated asynchronous FIFO cores. These are a bunch of copies of a CoreGen, specifically "Fifo_Generator family Xilinx,_Inc. 5.3". I'm not enough of a masochist to try to roll my own asynch FIFO after all of Peter's warnings. The violation is in going across the clock boundary; I had assumed that the TIG should have taken care of these.

=============================================================
Slack:                  -2.172ns (requirement - (data path - clock path skew + uncertainty))
  Source:               CHANNEL/ltrigmux_1 (FF)
  Destination:          CHANNEL/CHANS[1].CHAN_ON.CHAN/FIFO/FIFO/BU2/U0/grf.rf/mem/gbm.gbmg.gbmga.ngecc.bmg/blk_mem_generator/valid.cstr/ramloop[0].ram.r/s6_noinit.ram/SDP.SIMPLE_PRIM18.ram (RAM)
  Requirement:          0.312ns
  Data Path Delay:      1.690ns (Levels of Logic = 2)(Component delays alone exceeds constraint)
  Clock Path Skew:      -0.094ns (1.519 - 1.613)
  Source Clock:         WB_SYS.CLK_I rising at 320.000ns
  Destination Clock:    clk128 rising at 320.312ns
  Clock Uncertainty:    0.700ns

  Clock Uncertainty:    0.700ns  ((TSJ^2 + DJ^2)^1/2) / 2 + PE
    Total System Jitter (TSJ): 0.070ns
    Discrete Jitter (DJ):      0.489ns
    Phase Error (PE):          0.453ns
=============================================================

I've tried this design out under both ISE 11.5 and 12.1 with the same results. Anyone have any ideas on one/both of these problems?

--
Rob Gaddi, Highland Technology
Email address is currently out of order
Article: 147929
On Jun 3, 10:05 am, rickman <gnu...@gmail.com> wrote:
> > Rather than the uC+CPLD the marketing types are chasing, I would find
> > a CPLD+RAM more useful, as there are LOTS of uC out there already, and
> > if they can make 32KB SRAM for sub $1, they should be able to include
> > it almost for free, in a medium CPLD.
> >
> > -jg
>
> I won't argue with that for a moment. But deciding what to put in a
> part and which flavors to offer in what packages is decided in the
> land of marketing. As much as I whine and complain, I guess I have to
> assume they know *something* about their jobs.

The product managers are understandably blinkered by what has gone before, and what they sell now, so in the CPLD market it is very rare to see a bold step.

The CoolRunner was the last bold step I recall, and that was not made by a traditional vendor product manager, but by some new blood.

Altera, Atmel, Lattice and Xilinx have slowed right down on CPLD releases, to almost be in 'run out' mode.

-jg
Article: 147930
On Jun 2, 6:12 pm, Rob Gaddi <rga...@technologyhighland.com> wrote:
(snip: Rob's post, which appears in full above)

Hello Rob,

Your second issue is because you only put the FFS into your time groups that you TIG. Put RAMS in the time groups as well to get rid of that.

Regards,

John McCaskill
www.FasterTechnology.com
Article: 147931
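For anyone hitting the same symptom, here is a minimal sketch of what John describes, reusing the net names from Rob's UCF above; the group names are made up for illustration, not taken from the original posts. Dropping the FFS qualifier makes each TNM_NET group pick up RAMs and other synchronous elements as well as flip-flops, so the TIGs also cover paths that terminate inside the FIFO's RAM primitives:

=============================================================
# Sketch only: group names are illustrative.
NET "WB_SYS.CLK_I" TNM_NET = "BUS_CLK_GRP";
NET "clk128"       TNM_NET = "FAST_CLK_GRP";
TIMESPEC "TS_ASYNC1" = FROM "BUS_CLK_GRP" TO "FAST_CLK_GRP" TIG;
TIMESPEC "TS_ASYNC2" = FROM "FAST_CLK_GRP" TO "BUS_CLK_GRP" TIG;
=============================================================

An alternative is to keep the FFS groups and add parallel RAMS groups (e.g. TNM_NET = RAMS "BUS_RAMS") with their own FROM/TO TIG specs; the effect on the cross-clock paths is the same.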
>On May 28, 3:47 pm, Hauke D <hau...@zero-g.net> wrote:
>> A while back, Andy Ross posted a Perl script that pulls together the
>> many steps required to program a Digilent Nexys2 board under Linux:
>>
>> http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/c7...
>>
>> I just wanted to let everyone know that this script, along with the
>> firmware it uses, is hosted at the "ixo-jtag" project on SourceForge:
>>
>> http://ixo-jtag.sourceforge.net/
>>
>> Regards,
>> -- Hauke D
>
>Is there any way to make it permanent yet? nexy2prog is really handy
>and I've had no problems with it.
>
>I e-mailed Digilent back in February to see if they were going to
>create a Linux client, this is what I got.
>
>> Hello ###,
>>
>> A Linux port of Adept is currently under development and should be released in a couple of months. Sorry for the inconvenience.
>>
>> Please feel free to contact us with any questions you may have.
>>
>> Regards,
>>
>> Levi Bailey
>> Digilent Inc.
>
>Well, time's up I suppose.

That sounds great. The last time I heard from them there was nothing planned. I did try Andy's script on the Nexys2 board and it does work, but I also have a Xilinx platform cable (DLC10) and the speed difference is quite noticeable.

John Eaton

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147932
On Jun 3, 2:23 am, "Socrates" <socconf@n_o_s_p_a_m.gmail.com> wrote:
> Hello,
> I am a third year student, but interested in FPGAs and linking my future
> with this area of electronics. To have a point of view of my future, I've
> browsed some job search pages using "FPGA" in the search field. However almost
> all of the offers are for "FPGA seniors", experience not less than ~5
> years. How can I gain this experience if it is almost impossible to get
> employed?
>
> The FPGA course at my uni is only in the major degree studies, so I am doing
> "self-education" and learning FPGA design by myself; however a PACMAN
> implementation or something like that only gives experience with the syntax
> itself, but not the real problems encountered every day. I have some options:
> - Search for intern programs at various companies that use FPGAs (it's also
> very hard to find, because many companies ask if you are eligible to work
> in the USA. If not - chances are minimal (I am from Lithuania, EU)).
> - Try to find a job where you get paid per module (however, I did not
> find anything like this).
> - The last chance I think I could do is to contact the company itself, ask
> for the ability to work for free, since I need experience, and if everything
> goes OK, maybe they will employ me in the future. But how long could the student
> work qualitatively without being paid if the task is really hard and
> takes a long time? The result could end up with nothing: no experience and
> no work done.
>
> Tell me your opinions :)
>
> ---------------------------------------
> Posted through http://www.FPGARelated.com

A good stepping stone is www.opencores.org. Publish your work there, and maybe somebody will like what you do and hire you.

When we hire fresh grads, we always ask for samples of their work.

You have to also try to understand where companies come from. Lots of grads are absolutely clueless, often to a point that is scary. They do the course work, but can't think outside the box they have studied. That makes it very tough for companies.

You seem to be more advanced. Show people what you can do, and I am sure you won't have a problem finding a job.

Cheers,
rudi
Article: 147933
If you look in the FIFO user guide I believe it gives you some UCF constraints to use when using async FIFOs. Also, it's not quite as hard as you think to create an async FIFO.

Jon

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147934
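For reference, the usual pattern for constraining (rather than completely ignoring) a FIFO's clock-domain crossing is a bounded, datapath-only constraint between the two clock groups, so the crossing paths (e.g. the gray-coded pointers) still settle within roughly one period of the destination clock. This is a generic sketch, not the literal text from the FIFO Generator user guide; the group names reuse the ones assumed above, and 7.8 ns is simply one clk128 period:

=============================================================
# Sketch only: names and numbers are illustrative.
NET "WB_SYS.CLK_I" TNM_NET = "BUS_CLK_GRP";
NET "clk128"       TNM_NET = "FAST_CLK_GRP";
TIMESPEC "TS_CDC_1" = FROM "BUS_CLK_GRP" TO "FAST_CLK_GRP" 7.8 ns DATAPATHONLY;
TIMESPEC "TS_CDC_2" = FROM "FAST_CLK_GRP" TO "BUS_CLK_GRP" 7.8 ns DATAPATHONLY;
=============================================================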
Hi,

Have you got some examples of Spartan-6 power consumption between different configurations?

Thank you
Article: 147935
On Jun 1, 3:34 pm, John_H <newsgr...@johnhandwork.com> wrote:
> You're still concerned about this? Maybe you don't yet understand the
> issues from how Gabor explained the situation.
>
> FPGAs have flexible routing resources able to implement generic logic
> interconnects. The placement and routing of logic will determine
> explicitly how fast the FPGA can possibly run. The earliest days of
> FPGA place & route may have seen a stronger attempt at getting "best
> times" but runtimes were miserable and results were often short of
> complete. A competing tool adopted a "just enough" approach to place
> & route, coming up with solutions which meet "at least" the
> constraints given to the tools. The quality of results improved
> significantly and Xilinx eventually bought the technology.
>
> The "just enough" philosophy results in placements and routes that
> meet the constraints given to the tool but do not strive to improve
> upon those numbers. If the clock period is such that the registers
> feeding the BlockRAM don't need to have the minimum achievable delays,
> they typically won't. If you expect to have small setup and hold
> times from I/O pins to the BlockRAM, there's a fundamental disconnect:
> the setup and hold times are "internal times" for the FPGA and do not
> include the system level implementation of the I/O pins and global
> clock buffers. If implementing with I/O pins, the tools will try
> their best to meet the setup and hold constraints the user provides
> but no better and will often adjust the input delays of the various
> pins (including the clock) to help attain those numbers.
>
> Understanding the timing models means getting to know the chip better
> at the silicon level. If you understand I/O, clocking, CLBs, and
> routing, you're well on your way to interpreting timing results
> properly. Being able to take the internal timing numbers for an FPGA
> and apply those before you design takes a higher level of understanding
> often acquired from reading the FPGA user guide, app notes, and
> running through timing analysis with the timing details turned on in
> the logic path analysis.
>
> The tool tries to give the user what's needed, not what's "best."
> Even then there are limits based on what *can* be implemented within
> the constraints of placement and routing.

Thanks for your detailed reply! It cleared up some misconceptions.
Article: 147936
On Jun 2, 7:35 pm, -jg <jim.granvi...@gmail.com> wrote:
> On Jun 3, 10:05 am, rickman <gnu...@gmail.com> wrote:
>
> > > Rather than the uC+CPLD the marketing types are chasing, I would find
> > > a CPLD+RAM more useful, as there are LOTS of uC out there already, and
> > > if they can make 32KB SRAM for sub $1, they should be able to include
> > > it almost for free, in a medium CPLD.
> >
> > > -jg
>
> > I won't argue with that for a moment. But deciding what to put in a
> > part and which flavors to offer in what packages is decided in the
> > land of marketing. As much as I whine and complain, I guess I have to
> > assume they know *something* about their jobs.
>
> The product managers are understandably blinkered by what has gone
> before, and what they sell now, so in the CPLD market it is very rare
> to see a bold step.
>
> The CoolRunner was the last bold step I recall, and that was not made
> by a traditional vendor product manager, but by some new blood.
>
> Altera, Atmel, Lattice and Xilinx have slowed right down on CPLD
> releases, to almost be in 'run out' mode.

I've been busy with work the last few months so I tend to forget what I read about trends. I seem to recall that Xilinx has announced something with an MCU in it and not the PPC they used in the past. Do I remember right? Is X coming out with an FPGA with an ARM?

Personally, I prefer something other than an ARM inside an FPGA. I want a CPU that executes each instruction in a single clock cycle and has very seriously low interrupt latency. That is why I designed my own CPU at one point. ARM CPUs with FPGAs seem to be oriented to people who want to use lots of memory and run a real time OS. Not that an ARM or a real time OS is a bad thing, I just want something closer to the metal.

If I could get a good MCU in an FPGA (which would certainly have some adequate memory) in a "convenient" package, that would really make my day. I don't have to have the analog stuff, but 5 volt tolerance would certainly be useful. That alone would take two chips off my board and maybe more.

Rick
Article: 147937
>rickman wrote:
>> Can anyone confirm or dispute the relative quality of Actel tools?
>> Am I mistaken about them?
>
>I am speaking only from my own personal point of view...
>and under Windows (have not yet had time to re-try under Fedora)
>
>The Actel tools take a while to get used to,
>like most big SW suites. It's its own world...
>It is not particularly terrible, I can do
>mostly what I want, inside the bounds of reality
>and the target chip's capacity.

I have had similar experience. I try to keep my flow tool independent; in this process, the Actel tools are OK. I can take my design, add the files, synthesize, PAR, and create a config file. I have not had a reason to use any of the additional tools and have not run into any major issues. The learning curve for me was roughly the same as for ISE or Quartus, possibly a little less because of prior experience with other tools.

chris

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147938
>A good stepping stone is www.opencores.org. Publish your work there,
>and maybe somebody will like what you do and hire you.
>
>When we hire fresh grads, we always ask for samples of their work.
>
>You have to also try to understand where companies come from.
>Lots of grads are absolutely clueless, often to a point that
>is scary. They do the course work, but can't think outside the
>box they have studied. That makes it very tough for companies.
>
>You seem to be more advanced. Show people what you can do,
>and I am sure you won't have a problem finding a job.
>
>Cheers,
>rudi

So is it worth showing my projects on various sites, then adding links to my resume? Actually, it's interesting whether an employer will take a look or not :)
What about internships at various companies? For example, if a company like Siemens offers an internship that is not related to FPGAs, is it worth spending a year in that company? It would be one year lost for FPGA experience, but on the other hand, experience in a huge company is also interesting. Which one weighs more on the employment scales?

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147939
On 6/3/2010 9:01 AM, Socrates wrote:
(snip)

I'd never hire anyone who says "I know FPGAs and that's it." All engineering experience translates, as does anything that causes you to think. I've worked on DC/DC converter circuits that were best understood by analogy to a mechanical transmission. As a bare minimum you'll be an infinitely better FPGA designer if you've got the foggiest clue of what the rest of the circuit is doing.

If you're offered an internship in any engineering field, take it. Experience is always better than none.

--
Rob Gaddi, Highland Technology
Email address is currently out of order
Article: 147940
>I'd never hire anyone who says "I know FPGAs and that's it." All
>engineering experience translates, as does anything that causes you to
>think. I've worked on DC/DC converter circuits that were best
>understood by analogy to a mechanical transmission. As a bare minimum
>you'll be an infinitely better FPGA designer if you've got the foggiest
>clue of what the rest of the circuit is doing.
>
>If you're offered an internship in any engineering field, take it.
>Experience is always better than none.
>
>--
>Rob Gaddi, Highland Technology
>Email address is currently out of order

OK, thanks everyone, I have a good point of view now :)

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147941
"Socrates" <socconf@n_o_s_p_a_m.gmail.com> wrote: >Hello, >I am a third year student, but interested in FPGAs and linking my future >with this area of electronics. To have a point of view of my future, I've >browsed some job search pages using "FPGA" in search field. However almost >all of the offers are for "FPGA seniors", experience not less that ~5 >years. How can I gain this experience if it is almost impossible to get >employed? Do some sensible hobby projects. If your projects have enough body they will make up for a lot of experience. -- Failure does not prove something is impossible, failure simply indicates you are not using the right tools... nico@nctdevpuntnl (punt=.) --------------------------------------------------------------Article: 147942
If anyone out there uses Xilinx ISE Design Suite 11, I need your help!!

According to this information
[link]http://www.fpgarelated.com/usenet/fpga/show/662-1.php[/link]
ISE 11 should be running Verilog 2001; however, when I try to use the arithmetic shift ">>>" it does not populate 1's for negative numbers as it should.

Does anyone have any advice on how to correct this problem?

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 147943
On Jun 3, 1:48 pm, "shannon" <sesilver@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
> If anyone out there uses Xilinx ISE Design Suite 11, I need your help!!
>
> According to this information
> [link]http://www.fpgarelated.com/usenet/fpga/show/662-1.php[/link]
> ISE 11 should be running verilog 2001, however when I try to run the
> arithmetic shift ">>>" it does not populate 1's for negative numbers as it
> should.
>
> Does anyone have any advice on how to correct this problem?
>
> ---------------------------------------
> Posted through http://www.FPGARelated.com

There was a thread on this topic in comp.lang.verilog.

You can always work around the problem using >> like:

reg [15:0] signed_vector;
. . .
some_vector <= {{16{signed_vector[15]}}, signed_vector} >> shift_val;
(where shift_val is less than or equal to 16)
. . .

to copy the MSB as you shift right. Did you check if this was fixed in 12.1?

Regards,
Gabor
Article: 147946
Alan:
Thanks for these tips.

1. I tried simulating the known input sequence using a 64 point FFT. The Matlab and Xilinx bit accurate C model values match. I am surprised, because the bit accurate model gives (maybe asymmetric output??) different values for my original 4096 pt FFT.
For example, Xilinx variable xk_re and Matlab variable out1:
xk_re[0] = 117020.000000, out1[0] = 117020.000000
xk_re[1024] = 517.000000, out1[16] = 517.000000
xk_re[2048] = 494.000000, out1[32] = 494.000000
xk_re[3072] = 517.000000, out1[48] = 517.000000

> 2. Look for being off by an output stride permutation or transpose.
I am not sure how to check this. Can you elaborate?

> 3. Finally, look out for starting the input a few cycles too late.
I am following the simulation waveforms given in the datasheet. As per the datasheet, my xn_0 is read when xn_index=3. However, before I even go there, the bit accurate values don't match.

Vivek

On May 31, 7:38 pm, ajjc <a...@optngn.com> wrote:
> On May 27, 11:43 am, Vivek Menon <vivek.meno...@gmail.com> wrote:
>
> > I am using the Xilinx FFT core v6.0 from ISE 10.1 and I am trying to
> > verify my values with a FFT calculation run on Matlab.
>
> > My FFT ISIM simulation runs fine and the simulation values match with
> > the bit accurate model provided by Xilinx. But the order of data is
> > very different from Matlab. My Xilinx FFT block is configured as 4096
> > pt Pipelined Streaming I/O with natural order for floating point
> > values. On Matlab, I use the fft function to determine my values.
>
> > For example: This is the result obtained from the Xilinx Logicore v6.0 FFT
> > bit accurate model. LHS are the Xilinx values and the Matlab values
> > are on the right. Though the first value matches, everything else
> > differs.
>
> > But on closer observation, the 64th value obtained from Xilinx
> > simulation is the same as the 2nd value, i.e.
> > xk_re[64] 6002.34 = Matlab:f2_r[1] 6002.34
>
> > Xilinx CoreGen FFT real value xk_re
> > xk_re[0] 117467 Matlab:f2_r[0] 117467
> > xk_re[1] 1180.82 Matlab:f2_r[1] 6002.34
> > xk_re[2] 789.918 Matlab:f2_r[2] 126.469
> > xk_re[3] -548.049 Matlab:f2_r[3] -3516.04
> > xk_re[4] 3580.31 Matlab:f2_r[4] 1111.52
> > [snip: remaining rows of the 64-point comparison]
> > xk_re[63] -3951.05 Matlab:f2_r[63] 6002.34
> > xk_re[64] 6002.34 Matlab:f2_r[64] 555.219
> > xk_re[65] 3082.1 Matlab:f2_r[65] 2875
>
> Hard to tell exactly what is going on.
> The Matlab output is symmetric...the Xilinx output isn't.
>
> Here are two possibilities:
>
> 1. I usually try a simple input sequence with a known output.
> My favorite is:
>
> re_in = [0.125, zeros(1,63)];
> im_in = [0, -0.125, zeros(1,62)];
> cmplx_in = complex(re_in, im_in);
> M_out = fft(cmplx_in);
>
> The output is a sin() or cos() fcn in the real and imag parts of the
> output (I can't remember which one right now),
> and if you plot the real and cmplx output components, you'll get an
> ellipse.
>
> 2. Look for being off by an output stride permutation or transpose.
> 3. Finally, look out for starting the input a few cycles too late.
>
> alan
Article: 147947
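On the "how do I check for a stride permutation" question: one quick way in Matlab is to compare the captured hardware output against fft() under a few candidate reorderings and see which one lines up. This is only a sketch: hw_out and cmplx_in are assumed variable names, and bitrevorder() needs the Signal Processing Toolbox.

% Sketch only: hw_out = complex output captured from the core,
% cmplx_in = the same input vector that was fed to the core.
N        = length(cmplx_in);
ref      = fft(cmplx_in(:));                 % reference, natural order
hw       = hw_out(:);
brev     = bitrevorder(0:N-1) + 1;           % bit-reversed index map
err_nat  = max(abs(hw       - ref));         % already in natural order?
err_brev = max(abs(hw(brev) - ref));         % bit/digit-reversed order?
xpos     = reshape(hw, 64, N/64).';          % e.g. 4096 points viewed as 64 x 64
err_xpos = max(abs(xpos(:)  - ref));         % transposed / stride permutation?
[err_nat err_brev err_xpos]                  % the near-zero one tells you the ordering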
On Thu, 03 Jun 2010 12:47:14 -0500, "shannon"
<sesilver@n_o_s_p_a_m.gmail.com> wrote:
(snip: the question appears in full above)

Yup.

1) Try the code in a reputable simulator. If the simulator gives sign extension and Xilinx doesn't, report the bug to Xilinx.

2) Assuming (much the most likely scenario) that you did (1) and discovered that the simulator, too, did an unsigned right shift, blush temporarily and then recognise that signed arithmetic in Verilog-2001 is not to be taken lightly. It is full of obscure and counter-intuitive traps, and you wouldn't be the first person to fall headlong into them.

Read my post in
http://groups.google.com/group/comp.lang.verilog/browse_thread/thread/549b3d83f7e1fb11/
for much more detail.

After reading that stuff, come back here with the *exact* code (including declarations) if you still think the tool is at fault.
--
Jonathan Bromley
Article: 147948
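To illustrate the trap Jonathan is pointing at: ">>>" only performs an arithmetic (sign-extending) shift when the left-hand operand is signed in the context where it is evaluated. A minimal, simulation-only sketch (the specific values are just examples, not from the original post):

module ashr_demo;
  // ">>>" sign-extends only when the shifted operand is signed.
  reg signed [15:0] a;
  reg        [15:0] b;
  reg signed [15:0] y1, y2, y3;

  initial begin
    a  = -16'sd512;         // bit pattern 0xFE00
    b  = 16'hFE00;          // same bits, but an unsigned reg
    y1 = a >>> 4;           // 0xFFE0: arithmetic shift, sign bit replicated
    y2 = b >>> 4;           // 0x0FE0: operand is unsigned, so zeros shift in
    y3 = $signed(b) >>> 4;  // 0xFFE0: the cast restores the arithmetic shift
    $display("%h %h %h", y1, y2, y3);
  end
endmodule

The same declarations matter in synthesizable code: if the vector fed to ">>>" is a plain (unsigned) reg or wire, a zero-filled shift is the correct result per the language rules, not a tool bug.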
On Jun 4, 3:24 am, rickman <gnu...@gmail.com> wrote:
> I remember right? Is X coming out with an FPGA with an ARM?

Yes, but not at the "convenient" package end of the spectrum!! This is an ARM9, and mentions 28nm.

It will STILL need code memory, so it only moves the problem.

> Personally, I prefer something other than an ARM inside an FPGA. I
> want a CPU that executes each instruction in a single clock cycle and
> has very seriously low interrupt latency. That is why I designed my
> own CPU at one point. ARM CPUs with FPGAs seem to be oriented to
> people who want to use lots of memory and run a real time OS. Not
> that an ARM or a real time OS is a bad thing, I just want
> something closer to the metal.

Nice here would be what I'd call an asymmetric multi-core: a CPU that is deliberately split into a smaller hard real time portion, and a more general OS level portion.
Usually the bits you want hard real time are NOT large, and it helps to ring fence them from a more general layer.
The XMOS hard thread-slices are one way to achieve this.

> If I could get a good MCU in an FPGA (which would certainly have some
> adequate memory) in a "convenient" package, that would really make my
> day. I don't have to have the analog stuff, but 5 volt tolerance
> would certainly be useful. That alone would take two chips off my
> board and maybe more.

5V tolerance is now way off FPGAs' radar, and barely on the CPLD radar.
AFAIK, only Lattice mention 5V on a modern CPLD, and Atmel's newest CPLDs have measured ESD breakover at ~5.4V, but spec only to 4.6V (MAX).

It IS more on uC vendors' radar, and growing: the new Cypress PSoC are 5V, as is the Nuvoton M0 series. This allows direct drive of power FETs, and better ADC S/N.

Code memory remains a large PLD chestnut, but I did see this news:

http://www.eeproductcenter.com/memory/brief/showArticle.jhtml?articleID=225300289

["Kilopass Technology Inc. Wednesday (June 2) unveiled a one-time programmable 4-Mb non-volatile memory IP product large enough to store the firmware and boot code that is traditionally stored in external serial-flash EEPROM chips."]

Importantly, this reaches from 180-nm to 40-nm, so it can get to many of the FPGAs, but it has only just been announced, so any PLD use will be 2+ years off...

-jg
Article: 147949
On Jun 3, 5:42 pm, -jg <jim.granvi...@gmail.com> wrote:
> On Jun 4, 3:24 am, rickman <gnu...@gmail.com> wrote:
>
> > I remember right? Is X coming out with an FPGA with an ARM?
>
> Yes, but not at the "convenient" package end of the spectrum!! This
> is an ARM9, and mentions 28nm.
>
> It will STILL need code memory, so it only moves the problem.

How do you know what end of the package spectrum it will be? I seem to recall that the V2P parts had a smaller part with just one processor, I don't recall the min package size. As to the code memory, I am not trying to run Linux, so I can use the internal block RAMs for what I do. 16 kB would be tons of room. My current part has only 6 kB, 2 kB are taken up for a delay buffer and another is used for the internal stacks.

> > Personally, I prefer something other than an ARM inside an FPGA. I
> > want a CPU that executes each instruction in a single clock cycle and
> > has very seriously low interrupt latency. That is why I designed my
> > own CPU at one point. ARM CPUs with FPGAs seem to be oriented to
> > people who want to use lots of memory and run a real time OS. Not
> > that an ARM or a real time OS is a bad thing, I just want
> > something closer to the metal.
>
> Nice here would be what I'd call an asymmetric multi-core: a CPU
> that is deliberately split into a smaller hard real time portion, and
> a more general OS level portion.
> Usually the bits you want hard real time are NOT large, and it helps
> to ring fence them from a more general layer.
> The XMOS hard thread-slices are one way to achieve this.

Assuming you *need* the OS level portion. If I understand the XMOS device, they have pipelined their design, but instead of trying to use that to speed up a single processor, they treat it as a time sliced multi-processor. Zero overhead other than the muxing of the multiple registers. The trade off is that each of the N processors runs as if it is not pipelined, at 1/N of the clock rate. I guess there may be some complexity in the interrupt controller too. So for the cost of 1 processor in terms of logic, they get N processors running concurrently. I may take a look at doing that in my processor. The code space could even be shared.

> > If I could get a good MCU in an FPGA (which would certainly have some
> > adequate memory) in a "convenient" package, that would really make my
> > day. I don't have to have the analog stuff, but 5 volt tolerance
> > would certainly be useful. That alone would take two chips off my
> > board and maybe more.
>
> 5V tolerance is now way off FPGAs' radar, and barely on the CPLD radar.

But for no good reason. They can make any of the new processes 5 volt tolerant if they have the will. MCUs still have it at small geometries that are the same as FPGAs from a couple of years back, when they said the same thing about it being way past the horizon. Besides, if you keep shrinking the logic, what the heck are you going to put in there when the logic uses a quarter of the space between the pads? It is only the high end parts that need the geometries to continue to shrink. FPGAs are a product driven by the technology as much as by the market. The makers only serve a portion of the market they could because they don't want to spend any time or effort beyond the low hanging fruit. At some point, especially if the economy continues in the crapper, they will need to reach a little higher to find profit in areas that take a bit more work. Either that or start down that slope of becoming a commodity product.

> AFAIK, only Lattice mention 5V on a modern CPLD, and Atmel's newest
> CPLDs have measured ESD breakover at ~5.4V, but spec only to 4.6V (MAX).
>
> It IS more on uC vendors' radar, and growing:
> The new Cypress PSoC are 5V, as is the Nuvoton M0 series. This allows
> direct drive of power FETs, and better ADC S/N.

Exactly! If the MCU makers can do it, the FPGAs can as well. The big difference between FPGAs and MCUs as I see it is that MCUs tend to have significant analog while FPGAs remain purely digital. That excludes FPGAs from a portion of the market. When the FPGA companies decide they can't ignore that market anymore (perhaps spurred by the success of combo parts like PSoC and Fusion), the processing compromises should allow 5 volt tolerance to be easy if not free.

> Code memory remains a large PLD chestnut, but I did see this news:
>
> http://www.eeproductcenter.com/memory/brief/showArticle.jhtml?article...
>
> ["Kilopass Technology Inc. Wednesday (June 2) unveiled a one-time
> programmable 4-Mb non-volatile memory IP product large enough to store
> the firmware and boot code that is traditionally stored in external
> serial-flash EEPROM chips."]
>
> Importantly, this reaches from 180-nm to 40-nm, so it can get to many of
> the FPGAs, but it has only just been announced, so any PLD use will be 2+
> years off...

I'm not sure I follow. What is significant about this part? FPGAs currently have lots of RAM that can be used for code storage. I seem to recall that in one of the newer families the block RAM has even been partitioned into smaller blocks for logic type use and larger blocks which would be more for CPU type apps. Is that the high end Altera parts? I know they have 6 input LUTs now and I may be getting mixed up with that.

Rick