On Wed, 4 Feb 2009 09:01:27 -0800 (PST), Antti <Antti.Lukats@googlemail.com> wrote:

>Hi
>
>with less delay than last month
>http://groups.google.com/group/antti-brain/files
>
>but probably even less polished :(
>well with monthly frequency, it has to be monthly released.. so it is
>out.
>
>i have received very little feedback so well.
>the little, well i appreciate it :)
>
>ah, there is also picture of 0.5mm BGA soldered on
>PCB made without any special technologies, 2 layer no microvia
>no super small track/space
>
>Antti

Antti;
I agree that the older versions of ISE (i.e. 6.x) work better with CPLDs than the newer (i.e. 10.1) ISE versions. However, I don't think that means that Xilinx CPLDs shouldn't be used anymore.
-Dave Pollum

Article: 138051
On Feb 4, 9:46 pm, Dave Pollum <d...@x.com> wrote:
> On Wed, 4 Feb 2009 09:01:27 -0800 (PST), Antti
> <Antti.Luk...@googlemail.com> wrote:
> >Hi
> >
> >with less delay than last month
> >http://groups.google.com/group/antti-brain/files
> >
> >but probably even less polished :(
> >well with monthly frequency, it has to be monthly released.. so it is
> >out.
> >
> >i have received very little feedback so well.
> >the little, well i appreciate it :)
> >
> >ah, there is also picture of 0.5mm BGA soldered on
> >PCB made without any special technologies, 2 layer no microvia
> >no super small track/space
> >
> >Antti
>
> Antti;
> I agree that the older versions of ISE (i.e. 6.x) work better
> with CPLDs than the newer (i.e. 10.1) ISE versions. However, I
> don't think that means that Xilinx CPLDs shouldn't be used
> anymore.
> -Dave Pollum

Hi Dave,

nice to get your attention and feedback :)
eh.. considering that i had self-censored some 3 pages of xilinx !? from the january issue, the leftovers are pretty harmless.. ;)

I agree with you actually, i should not have used such strong wording.. right, for designs where 99% of the component cost is XC9536, then XC9536 should be used. one example of such a design is there (not mine!!)

http://savemii.net/

BOM cost below 2$
sale price 30$

profit can be calculated? you think so? the profit for that product is NEGATIVE -- a loss actually. some folks discovered that the same function the XC9536 does on that board can be achieved by a special keypress combination on the gamepad... ok, sure, the use of XC9536 made the losses smaller for the savemii guys.

==

I have myself a design which is based on XC9536 only, a GPIO adapter thingie. it is REALLY nice and useful, i use it as JTAG interface, SPI/I2C, whatever i ever need... but i never dared to try to sell it for $30, well stupid me.. the development time HAS to be calculated into the sale price.
but then if you know your gadget has a self-cost below 2$ and that it can be cloned with 0 NRE within days, then putting a high price tag hardly makes any sales either.

but yes, for designs that call for a ONEDOLLARCPLD, then XC9536 is the one. if however the ONEDOLLARCPLD doesn't implement ALL of the design, then the choice of components has to be taken more seriously. a single CPLD at 2$ is maybe also ok, or 1$ CPLD + 0.8$ flash MCU, but as soon as the BOM cost goes for some reason over 4$, CPLDs are most likely out already. like if all the design can fit a 3.5$ flash FPGA, or a 3.5$ flash FPGA + 0.8$ flash MCU, replacing them with CPLD or CPLD+MCU would have much higher costs.

sure there are design constraints that may call for a CPLD no matter its high costs. like one of my last CPLD designs used Altera MAXII. it had to work without any freerunning clocks, not on board, not inside ICs.

==

Xilinx has not done really anything new with their CPLDs except making their tool support for CPLD worse over the years. So that makes me think really hard before using xilinx CPLDs (unless really required to use them for whatever reason)

Antti

PS if someone wonders why i have time typing here.. the answer is Xilinx 10.1 tools like to self-terminate too often, so each time it happens i punch some keys. (I don't have the boxing sack with Xilinx logo yet)

Article: 138052
On Feb 5, 9:47 am, Antti <Antti.Luk...@googlemail.com> wrote:
> Xilinx has not done really anything new with their CPLDs
> except making their tool support for CPLD worse over
> the years. So that makes me think really hard before
> using xilinx CPLDs (unless really required to use them
> for whatever reason)

Hi Antti,

Paste:
["Xilinx XC9500
Talked to a fellow engineer yesterday; he had a small change to make to an old, existing and working XC9500 CPLD design. It all used to work with ISE 8.1, but with ISE 10.1 it doesn't fit any more (also before doing any changes). This lines up with my own experience: a CPLD design done with 6.3 did not work with 8.1 tools. As there have been no new developments on Xilinx CPLDs for a very long time, they should really be considered as DO NOT USE."]

There seem to be many issues here.
a) Poor version control. There is no excuse to break an existing flow.
b) CPLDs are smothered under FPGA changes, so some of the tool-flows are pushed by FPGA, and collateral damage results. Vendor shrugs shoulders.
c) Path flow changes. Sure, might have some benefits, but do not break the old paths. Good engineering offers both.

Perhaps these companies need to offer SEPARATE tool chains, with a SMALLER download for CPLDs and a guarantee of (at least) stability?

Can anyone from Xilinx show a case where a tool revision worked BETTER on CPLDs?

-jg

Article: 138053
On Feb 5, 9:47 am, Antti <Antti.Luk...@googlemail.com> wrote:
- Xilinx has not done really anything new with their CPLDs
- except making their tool support for CPLD worse over
- the years. So that makes me think really hard before
- using xilinx CPLDs (unless really required to use them
- for whatever reason)

On this topic, does anyone know what happened to the MAX III CPLDs from Altera? They seem to have got the chop, without ever making release?

There IS (some) new CPLD activity from Lattice and Atmel (and maybe Actel, if you include their small FlashFPGA?)

Atmel's range is narrow, but their new ATF1508RE has a good price/feature point. Lattice fixed some blindspots with their 4000ZE family respin.

-jg

Article: 138054
On Feb 4, 2:09 pm, rickman <gnu...@gmail.com> wrote:
> On Feb 4, 8:42 am, jleslie48 <j...@jonathanleslie.com> wrote:
>
> > If I didn't answer then I apologize. The answer is I don't understand
> > the question. It appears to me from this screen shot:
> > http://jleslie48.com/fpga_uartjl_01/11jlmod/ccuart01/screencap/screen...
> > that my serial line (rs232_tx_data) idles for 955ns based on the
> > whole system waiting for uart_reset_buffer. If that is not enough,
> > then that was not made clear to me. I had some feeling that the issue
> > lay somewhere with that, hence I supplied very careful screen caps of
> > the waveforms, hoping someone would point out how to read them. I
> > have never worked with waveforms/timing diagrams before.
>
> Let me jump back into the discussion here. You seem to be getting
> frustrated with the slowness of the learning curve. Personally, I
> found HDLs to be hard to learn, but I think this is made worse when
> you are very used to designing software (as I have said before). So
> just take a deep breath and accept this fact. Many have dealt with
> this before and lived through it. In fact, maybe there is something
> good that can come of this. At this point there are tons of good HDL
> books out there, but I have not seen one specifically written for
> people with strong software backgrounds. Maybe we can work on a book
> together. You can be the guinea pig! ;^)
>
> I'm not totally clear on the current state of the discussion and some
> of this seems to be focusing on details that may or may not be
> important at this point. Timing diagrams are very simple. They are
> just graphs of signal values as a function of time. I am sure you get
> that, but you are just not accustomed to debugging with them. The
> main reason for using them to debug hardware is because using the
> alternative, code breakpoints and stepping, is rather tedious in HDL
> since the information is presented serially as it happens. A waveform
> display gives you a lot of information in parallel with essentially
> random access as you view it and figure out what is important or what
> is working and what isn't.
>
> I am not sure what is currently working and what isn't in your
> design. I will say that I think I have read that you are testing on
> hardware and something there is not working; I guess the first
> character is not being received by the other UART? Is that UART also
> in the FPGA or are you testing with a PC? Before you worry about
> testing hardware, I recommend that you run the simulation to show the
> circuit is working there. It is a hundred times easier to see what is
> happening in simulation than in a test fixture.
>
> Given that, I can't see anything in your simulation capture that shows
> anything wrong. It appears that sending data to the TX FIFO is
> working. The data sequence seems to be F23456789A123456 which is 16
> chars, filling the FIFO as shown by the tx_buffer_full flag going
> high. After the first char is written to the FIFO I see an enable
> pulse on uart_en_16_x_baud and one clock later I see the rs232_tx_data
> go low. This all seems to be working as expected. I assume the baud
> enable is 16x the actual bit rate, so the simulation time is not long
> enough to watch the actual data emerge. Have you checked this to see
> that the simulation is transmitting the data correctly? If the data
> is being received by the FPGA UART, are all 16 chars received
> correctly?
>
> I am assuming that all of the above is true: that the simulation shows
> things working correctly and that your only problem shows when you
> test on hardware. That would make sense given Jonathan's suggestion
> that you delay the start of your data transmission so that the
> receiving UART can clear out any garbage from the startup sequence.
> The startup delay may not be needed in simulation depending on the
> details of the startup sequence. This is one of the places where
> simulation can differ from the real world. In the real world there
> are often uncontrolled variables, such as arbitrary delays, that are
> difficult to reproduce in simulation.
>
> Do I understand the current situation?
>
> > And while this might seem hopeless to you, I have no choice in the
> > matter. I have no funding at this time to bring on additional
> > personnel and quite frankly, prior to this discussion, I wouldn't
> > have even been qualified to hire someone; one of my problems in the
> > past. So to that level, this conversation has been productive for me
> > at least.
>
> The funding may be an issue. Even if you don't have funds to bring a
> consultant on board, you should get some training in this rather than
> try to tough it out yourself. I am sure you will eventually get it
> done, but it will be a much more expensive process and take a lot more
> time.
>
> > I am truly sorry my issues are so entry level, but we all must start
> > somewhere. Every programming environment I've ever had to learn (and
> > there have been many) has by page 3 demonstrated the basic "hello
> > world" program. I have been through many websites and textbooks and
> > for this environment it is very suspicious by its absence that the
> > "hello world" program is missing.
>
> Don't sweat your inexperience. Every one of us dealt with the same
> problems starting out. Usually the "hello world" program equivalent
> is the "traffic light state machine" when designing in HDLs. Remember,
> this is *not* a programming language, it is a hardware *description*
> language. When used for synthesis, you really need to get used to the
> idea that you are NOT writing a program, you are describing hardware
> to be synthesized. Of course that is easy for me to say. I started
> out playing with JK FFs wired together on perf board powered by a 6
> volt lantern battery. :^) Had 8080 CPU chips not been over $100 at
> the time, I might be a total software junkie now instead.
>
> Still, although your UART project may be a bit advanced for a
> beginner, it is not absurd and I expect you will be able to complete
> this shortly. If you find the UART is too frustrating, you might want
> to drop back to the traffic light program.
>
> > ~ You've now futzed around for about three quite long days
> > ~ and you still haven't got UART 101 working reliably.
> > ~ Surely you're smart enough to see that the right solution
> > ~ is to start again, properly?
> >
> > Actually, I've been futzing around for 2 months and have "started
> > again" 4 times already. Actually my associate has been futzing for
> > almost 6, and this version is his interpretation of how to program.
> > This version, warts and all, is the first to at least send ANYTHING
> > out.
>
> I think the above is an important data point. 2 months and 4 trials
> should be a sign that this method is not working well. The fact that
> someone has been trying this stuff for 6 months shows that you likely
> will not be ready to tackle a project for over a year!
>
> > I think this messed up version is very close to working properly, and
> > to abandon it now would be to miss out on understanding something
> > important I am missing. Effecting a fix properly I still feel is
> > important before adding another variable to the equation (new code
> > with new issues).
>
> I don't want to sound like I am beating on you, but I think there are
> problems of approach and expectations that are much bigger than the
> technical issues. My first HDL project happened because I took a
> training class for Orcad schematic software that I was going to use
> for an FPGA design (HDL was not so universally used at that time).
> The last day of training taught VHDL. I was so impressed with the
> potential that I recommended that we use it on the project instead of
> schematics. As it turns out, my greenness allowed me to miss the fact
> that the Orcad HDL tools blew big chunks. After spending a month or
> maybe two working with the Orcad synthesis tool I realized that it was
> never going to work and we bought the Xilinx tool (a third party tool
> bundled with the Xilinx back end). That worked much better. Looking
> back on my coding style, I realize that much of that code would likely
> not pass muster by my current standards. For example, I was using '-'
> as a don't care condition. I now know that this is not a good idea as
> it is not universally supported. I won't even think about the style I
> used back then... actually some of my style from that first project
> has served me well, but you get the idea. Your first project will
> likely not be anything that you are proud of a year later.
>
> > That's the way I debug: understand what you have first, try and fix,
> > and then break it down to a streamlined scalable procedure.
>
> That is fine for debugging, but you are primarily in a learning mode.
> That works much better when you take bite sized pieces and learn a bit
> at a time rather than try to get a much more complex effort done that
> involves learning on many different levels.
>
> I think that when you complete the "hello world" example, you will
> still have a long way to go before you are ready to try a project.
> The fact that waveform displays are still new and alien to you is a
> *very* good indicator that you are still in the elementary school of
> HDL design.
>
> The more I think about it, the more I like the idea of writing a
> book. I don't currently have any design work; maybe I should take
> that on as a project.
>
> Rick

Hey rick, let me bring you up to date.

~ Given that, I can't see anything in your simulation capture that shows
~ anything wrong. It appears that sending data to the TX FIFO is
~ working.
~ The data sequence seems to be F23456789A123456 which is 16 chars,
~ filling the FIFO as shown by the tx_buffer_full flag going high.
~ After the first char is written to the FIFO I see an enable pulse on
~ uart_en_16_x_baud and one clock later I see the rs232_tx_data go low.
~ This all seems to be working as expected. I assume the baud enable is
~ 16x the actual bit rate, so the simulation time is not long enough to
~ watch the actual data emerge. Have you checked this to see that the
~ simulation is transmitting the data correctly? If the data is being
~ received by the FPGA UART, are all 16 chars received correctly?

That's my big issue. The simulation seems to show everything is fine, but when I download the code to the board and hook up the UART to my PC terminal session, I get 23456789a123456f (the F is last rather than first), but the simulation, as you noticed as well, looks perfectly fine:

http://jleslie48.com/fpga_uartjl_01/11jlmod/ccuart01/screencap/screencap15_firstislast100xdelay.png

also through testing I know this to be true:

I make the message string:
constant project_name_stg : string := "123456789a12345";
aka 15 characters; all is well, I get on my pc terminal:
123456789a12345

I make it 16 characters:
constant project_name_stg : string := "f23456789a123456";
my output becomes:
23456789a123456f

and the 17 character string:
constant project_name_stg : string := "f23456789a1234567";
yields the output:
23456789a1234567

so it was suggested that I need more settling time in the beginning, so I have now added a delay; you will note the screencap 15 above now starts at the 120us time area vs the older version that started at the 1.2us time area.

No change.

I've spent most of this afternoon starting to clean up the code.
I've collapsed the print mutex and the walk of the message to one process:

-------------------------------------------------------------------------------
-- P10: INITIALIZING lprj PROJECT MESSAGE COUNT ( project_name_cnt )
-------------------------------------------------------------------------------
-- 090204 JL combined with P9, this will schedule each character of the
--           project name and send each of its characters to the uart queue.
--
P10: PROCESS ( CLK_16_6MHZ, UART_RESET_BUFFER )
BEGIN
  I01: IF ( rising_edge(clk_16_6mhz) ) THEN
    I02: IF ( ( UART_RESET_BUFFER = '0' ) AND ( UART_RESET_NEXT = '1' ) ) THEN
      project_name_cnt <= project_name_stg'low;
      lprj_MESS_TRAN   <= '1';
    ELSIF ( ( lprj_MESS_TRAN = '1' ) AND ( TX_WRITE_BUFFER_STB = '1' ) ) THEN
      I03: IF ( project_name_cnt /= project_name_stg'high ) THEN
        project_name_cnt <= ( project_name_cnt + 1 );
      ELSE
        lprj_MESS_TRAN <= '0';
      END IF I03;
    END IF I02;
  END IF I01;
END PROCESS P10;
-------------------------------------------------------------------------------

Still no change.

Article: 138055
Hi All,

I am trying to write a procedure in VHDL in which I want to generate a clock, and on the rising edge of that clock I want to shift data out.

When I use the clause rising_edge(clk) it does not do anything and no output is seen.

Can anyone help with how to generate a clock inside the procedure and then shift data on that clock? I want the output of the procedure to give serial data on its generated clock.

Thanks in advance
Vipul

Article: 138056
I think it is probably better design practice to use one process to generate the clock, then another process sensitive to this clock edge.

From: Rob Gaddi <rgaddi@technologyhighland.com>
Date: Wed, 4 Feb 2009 15:03:51 -0800
Newsgroups: comp.arch.fpga
Subject: Re: help in VHDL procedure programming
Organization: Highland Technology, Inc.

On Wed, 4 Feb 2009 14:24:18 -0800 (PST) VIPS <thevipulsinha@gmail.com> wrote:

> Hi All
>
> i am trying to write a procedure in VHDL which i want to generate
> clock and on the rising edge of the clock i want to shift data on the
> rising edge .
>
> When i use the clause Rising_edge(clk) it is not doing anything and no
> output is seen ..
> Can anyone help as how to generate a clock inside the procedure and
> then shift data on that clock.
>
> I want the output of the procedure to give serial data on its
> generated clock
>
> Thanks in advance
>
> Vipul

Here's a question with a can of worms in it: Do you plan for this to be synthesizable, or is this just for testbenching?

--
Rob Gaddi, Highland Technology
Email address is currently out of order

Article: 138057
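For the testbench case, a minimal sketch along the lines of the advice above: one process generates the clock, and a separate process waits on its rising edge and shifts data out serially. All entity, signal, and timing names here are made up for illustration, and this is simulation-only code, not a synthesizable design:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity serializer_tb is
end entity;

architecture sim of serializer_tb is
  signal clk     : std_logic := '0';
  signal data    : std_logic_vector(7 downto 0) := x"A5";  -- example byte
  signal ser_out : std_logic;
  signal done    : boolean := false;
begin

  -- Clock generator: free-running until the shifting is finished.
  clk_gen : process
  begin
    while not done loop
      clk <= '0'; wait for 10 ns;
      clk <= '1'; wait for 10 ns;
    end loop;
    wait;  -- stop the simulation cleanly
  end process;

  -- Serializer: a separate process that waits for the clock edge and
  -- presents one bit per cycle, MSB first.
  shifter : process
  begin
    for i in data'range loop          -- 7 downto 0
      wait until rising_edge(clk);
      ser_out <= data(i);
    end loop;
    done <= true;
    wait;
  end process;

end architecture;
```

The key point is that rising_edge(clk) only ever sees an edge because some *other* process is driving clk; a procedure that both generates the clock and waits on it in the same sequential flow will deadlock, which matches the "no output is seen" symptom.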
OK, I looked pretty hard at your code, and I am somewhat stumped. There is so much that is just so far away from what I would do - not the line-by-line minutiae of coding, but the overall approach.

Two really big things that make it close to impossible to debug in a reasonable time:

(1) You have several distinct processes, each taking responsibility for a block of activity. Consequently they have to hand off from one to another in complicated ways. Very hard to manage correctly. It is far easier to have one process take responsibility for any single thread of activity.

(2) You've just GOT to learn about state machines. Using some unholy boolean mess of flags to decide what to do on each clock is a recipe for disaster.

So....... you'll probably be cross about this, or simply ignore it, but.... I've written a data generator to create your message strings. It's very incomplete so far, but it stores its information in a Xilinx blockRAM configured as ROM, and allows you to freely intermix messages and time delays. It could easily be extended to allow loopback tests, repeating messages and suchlike. Here's how you instantiate it to create two "hello world" messages:

JSEB_DATA_GENERATOR: entity work.data_gen
  generic map (
    PC_bits => 9   -- Width of ROM address
  , the_program =>
      -- Long startup delay
      op_DELAY & 200 &
      -- Send the first message
      op_MESSAGE & tua("Hello World") & EOM &
      -- short delay
      op_DELAY & 20 &
      -- send the second message
      op_MESSAGE & tua("Goodbye") & EOM &
      -- Freeze up so we do nothing more
      op_HALT
  )
  port map (.....);

Nice, yes? op_DELAY, op_MESSAGE, op_HALT and EOM are single-byte constants.

And I've rewritten the top module to use this alongside your existing UART transmitter. You should be able to try it out rather easily and tell me whether it works - I've only tried it in simulation so far, but I'll try it on my little demo board tomorrow.

The sources are at http://www.oxfordbromley.plus.com/files/loki/ and I hope the comments are sufficiently useful.
cheers
--
Jonathan Bromley, Consultant
DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com
The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.

Article: 138058
Enes Erdin <eneserdin@gmail.com> wrote:
> Long time has passed since the last post, but I wonder your opinions
> about this subject for today's technology. I am using a core2duo
> 1.8 GHz PC and for my design P&R takes 40+ minutes, and I want to buy
> a new PC powered by an AMD processor because of its price, but I can
> think of some other configuration if it is worth it, related to your
> opinions about:
>
> - dual core, quad core comparison
> - cache for the processor comparison

I have a triple core phenom, which I bought because of the low price. They don't only come in powers of two.

As far as I know, most parts of the tools aren't multithreaded, so extra cores don't help so much. You might run more than one at the same time, though.

-- glen

Article: 138059
In article <9l9jo4dh2jaf45ko2vvadc51bn4chdcnt4@4ax.com>, Brian Drummond <brian@shapes.demon.co.uk> wrote:
>On Tue, 3 Feb 2009 15:12:41 +0000 (UTC), jhallen@TheWorld.com (Joseph H Allen) wrote:
>
>>I'm surprised that the Spartan-6 integrated memory controller does not support
>>DIMMs. Also surprised that there are no integrated memory controllers in
>>Virtex-6.
>
>Reading between the lines here...
>At a guess the V6 I/O blocks are fast enough to support DDR memory quite well
>without special support - and that way you have the flexibility to support any
>configuration you need (modulo SSO limitations; the tools will handle those)

I'm thinking an integrated memory controller would be valuable in Virtex-6 because then designers would not have to go through the effort of using MIG. It works, but is still a lot of effort to make a high performance memory interface. Also, why waste LUTs on a memory interface? I want the LUTs for my design, not for glue logic to make their chip work with SDRAM. Finally, DDR3 support is not really integrated in MIG.
--
/* jhallen@world.std.com AB1GO */                        /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

Article: 138060
"Manny" <mloulah@hotmail.com> wrote in message news:58397ef9-070d-4449-b218-b192cb7af61b@t39g2000prh.googlegroups.com... > Hi, > > Until now, I've interfaced most of my cores to the > external system using standard 2/4-phase asynchronous handshake > protocols. Was wondering how good of a practice is this and whether > there is a definitive handbook out there in the literature that > tackles these issues. Don't want to sink into SoC bussing literature > with overly complicated arbitration since most of the designs I work > on are of the simple and tightly-coupled sort. > I find Altera's Avalon interface to be a good choice, it is simple from a logic perspective and it is well suited to most 'inside a single chip' communication and it scales well. It is not constrained to or in any way 'optimal' with only Altera parts either. For simple interfaces that only need wait states and do not have read latency, Avalon and Wishbone (see opencores.org) are essentially identical except for signal naming convention. For things that do have read cycle latency, Avalon has it built right into the specification, Wishbone adds it in an ad hoc manner. Most DRAM controllers (and potentially other 'off chip' interfaces) need to be able to implement read cycle latency(1) in order to get maximum throughput. Kevin Jennings (1) Read cycle latency is when the address and command are allowed to change before the data from the read command has been returned. This allows multiple commands to be queued up with the slave device while it is completing the read command.Article: 138061
> Enes Erdin <eneserdin@gmail.com> wrote:
>> Long time has passed since the last post, but I wonder your opinions
>> about this subject for today's technology. I am using a core2duo
>> 1.8 GHz PC and for my design P&R takes 40+ minutes, and I want to buy
>> a new PC powered by an AMD processor because of its price, but I can
>> think of some other configuration if it is worth it, related to your
>> opinions about:
>>
>> - dual core, quad core comparison
>> - cache for the processor comparison
>
> I have a triple core phenom, which I bought because of the low price.
> They don't only come in powers of two.

Yes they do; AMD's triple cores are four cores with one turned off. This was required to increase yields to a profitable point.

---Matthew Hicks

Article: 138062
-jg wrote:
>> It's more a question of die space. 3.3v tolerant IO is huge in 40/45nm.
>> In V6 they are trying to reach the high end of IO and LUT counts. On
>> the other hand, with S6 the LUT counts are not that high and also the IO
>> count is lower than in V6, so they can waste space for the IO.
>
> Die Size, are you sure?
> - My understanding is Oxide thickness is what primarily determines IO
> Voltage Specs.
>
> Die area (PAD IO area) more determines drive current.

Oxide thickness is one parameter in the equation. As you mentioned, drive current also heavily affects the pad area, and for 3.3v that is a problematic area. I should have said just 3.3v IO, not 3.3v tolerant IO. 3.3v can be done in a 40nm process without major tweaking, but you pay in the IO size. For example in CycloneIII (a 65nm part) the maximum drive current for a 3.3v LVCMOS IO is 2mA, which is very small, making the IO standard almost unusable (8mA with LVTTL, which is not so great either).

--Kim

Article: 138063
Bob Smith wrote:
> Would it make sense to build one serial port and try to
> time division multiplex it to the sixteen sets of inputs,
> perhaps keeping all state information in RAM? Is this a
> feasible or reasonable approach?

Thanks Glen, JG, Rick. I may try this with something a little easier like a PWM controller.

thanks
Bob Smith

Article: 138064
Hi,

I am in the middle of writing VHDL to read from one block RAM and write the read data into another block RAM, both with the same data width and address width. The device is a Spartan-3A.

Now, block RAM data is presented at the output of the RAM to read on the next clock cycle, and this leads to a shift in address data in the RAM being written: address 0 keeps 0 (the reset value on the data bus), address 1 gets data from address 0 of the source RAM, address 2 gets data from address 1 of the source RAM, etc.

From what I have read about this, the data is available a few nanoseconds after the edge, so one suggestion was to use the falling edge to read data. The process that I wrote doesn't allow me to do this, so I just inserted an extra register on the write address to align the address with the data for the write operation.

There are probably several snags to watch for doing things this way, and I thought maybe I should ask how other Xilinx users solve this "problem".

--
Svenn

Article: 138065
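The extra-register approach described above can be sketched roughly as follows. All port and signal names here are made up, and a source RAM with one cycle of read latency is assumed; the write address and write enable are simply delayed one clock so they line up with the arriving read data:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical copy controller: src_dout is assumed to come from a
-- block RAM whose read data appears one clock after the address.
entity bram_copy is
  generic ( AW : natural := 10 );            -- address width
  port (
    clk      : in  std_logic;
    start    : in  std_logic;                -- one-cycle start pulse
    src_addr : out std_logic_vector(AW-1 downto 0);
    src_dout : in  std_logic_vector(7 downto 0);
    dst_addr : out std_logic_vector(AW-1 downto 0);
    dst_din  : out std_logic_vector(7 downto 0);
    dst_we   : out std_logic
  );
end entity;

architecture rtl of bram_copy is
  signal rd_addr : unsigned(AW-1 downto 0) := (others => '0');
  signal wr_addr : unsigned(AW-1 downto 0) := (others => '0');
  signal running : std_logic := '0';
  signal we_d    : std_logic := '0';
begin
  src_addr <= std_logic_vector(rd_addr);
  dst_addr <= std_logic_vector(wr_addr);
  dst_din  <= src_dout;   -- read data passes straight through
  dst_we   <= we_d;       -- write enable delayed to match the data

  process(clk)
  begin
    if rising_edge(clk) then
      if start = '1' then
        running <= '1';
        rd_addr <= (others => '0');
      elsif running = '1' then
        rd_addr <= rd_addr + 1;
        if rd_addr = 2**AW - 1 then
          running <= '0';
        end if;
      end if;
      -- the write side lags the read side by exactly one clock,
      -- matching the source RAM's read latency
      wr_addr <= rd_addr;
      we_d    <= running;
    end if;
  end process;
end architecture;
```

Staying on the rising edge and delaying the write side is generally safer than clocking the read on the falling edge, which would halve the available timing budget.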
On Feb 4, 4:28 pm, jleslie48 <j...@jonathanleslie.com> wrote: > > Hey rick, > > let me bring you up to date. > > On Feb 4, 2:09 pm, rickman <gnu...@gmail.com> wrote: > ~ Given that, I can't see anything in your simulation capture that > shows > ~ anything wrong. It appears that sending data to the TX FIFO is > ~ working. The data sequence seems to be F23456789A123456 which is 16 > ~ chars, filling the FIFO as shown by the tx_buffer_full flag going > ~ high. After the first char is written to the FIFO I see an enable > ~ pulse on uart_en_16_x_baud and one clock later I see the > rs232_tx_data > ~ go low. This all seems to be working as expected. I assume the > baud > ~ enable is 16x the actual bit rate, so the simulation time is not > long > ~ enough to watch the actual data emerge. Have you checked this to > see > ~ that the simulation is transmitting the data correctly? If the data > ~ is being received by the FPGA UART, are all 16 chars received > ~ correctly? > > That's my big issue, The simulation, seems to show everything is > fine, > but when I download the code to the board, hook up the UART to my > PC terminal session, I get 23456789a123456f, (the F is last rather > than > first) but the simulation as you noticed as well look perfectly fine: > > http://jleslie48.com/fpga_uartjl_01/11jlmod/ccuart01/screencap/screen... > > also through testing I know this to be true: > > I make the message string: > constant project_name_stg : string := "123456789a12345"; > > aka, 15 characters, all is well, I get on my pc terminal: > 123456789a12345 > > I make it 16 characters: > constant project_name_stg : string := "f23456789a123456"; > > my output becomes: > 23456789a123456f I think Jonathan or someone else suggested that if the F that should be the first character is showing up as the last character, this is *clearly not* an issue of startup delay. Something is wrong either with the way you are writing into the FIFO or perhaps the FIFO code itself is not correct. 
What may well be happening is that the first character written into the FIFO is being skipped by the UART. When you write exactly 16 characters, somehow the UART output counter is getting messed up and skips to the second character. Of course this is speculation, but the fact that the 15 char message works ok shows it is not a startup issue. The 17 char message confuses me though: it skips the first char and only outputs 16 chars.

Is there an output from the UART that acknowledges the strobe when you write to the tx fifo (other than the full flag)? If not, then the strobe should be accepted at any time. If there is a handshake flag, it needs to be checked before writing to the FIFO again, and that might explain the mix-up with 16 or more chars.

One last comment on the FIFO: check the docs and see if you can actually write 16 chars to it without pause. To handle 16 chars, it needs an extra bit to account for 17 states (empty and 1 to 16 chars). If this extra state is omitted, it could explain the symptoms. Since you have the UART code, it should be easy to find the internal in and out counters and monitor them as it runs the 16 char case.

> and the 17 character string:
> constant project_name_stg : string := "f23456789a1234567";
> yields the output:
> 23456789a1234567

Yeah, this is the one that sounds like it is overwriting the first char or otherwise not being handled correctly. Is the reset to the UART being handled? Do you have an image of the simulation waveform? I would like to see the same time period as the screencap13_firstislast16.png where it shows the writes to the FIFO. It would also be useful to see the handshakes between the FIFO and the UART. This is an HDL UART, right?

> so it was suggested that I need more settling time in the beginning,
> so I have now added a delay, you will note in the screencap 15 above
> is now starting at the 120us time area vs the older version that
> started at the 1.2us time area.
>
> No change.
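[The "17 states" point above is worth spelling out: an N-deep FIFO whose pointers only count modulo N cannot tell full from empty, which would drop exactly one char of an N-char burst. A quick way to convince yourself is a behavioral model -- this is a Python sketch of the pointer scheme, not the actual HDL FIFO from the UART reference design:]

```python
# Behavioral sketch of a 16-deep FIFO that keeps one extra (MSB)
# bit on each pointer, giving 17 distinguishable fill states
# (empty plus 1..16 chars).  Illustrative model only, not the
# HDL FIFO under discussion in this thread.

class Fifo16:
    DEPTH = 16

    def __init__(self):
        self.mem = [None] * self.DEPTH
        self.wr = 0   # 5-bit pointer, counts 0..31 then wraps
        self.rd = 0   # 5-bit pointer, counts 0..31 then wraps

    def empty(self):
        return self.wr == self.rd

    def full(self):
        # Full when the pointers differ only in the extra wrap bit.
        return (self.wr ^ self.rd) == self.DEPTH

    def write(self, ch):
        assert not self.full()
        self.mem[self.wr % self.DEPTH] = ch
        self.wr = (self.wr + 1) % (2 * self.DEPTH)

    def read(self):
        assert not self.empty()
        ch = self.mem[self.rd % self.DEPTH]
        self.rd = (self.rd + 1) % (2 * self.DEPTH)
        return ch

f = Fifo16()
msg = "f23456789a123456"          # the 16-char case from the thread
for c in msg:
    f.write(c)
assert f.full()                   # all 16 writes accepted, 'f' kept
out = "".join(f.read() for _ in range(16))
assert out == msg                 # comes back out in order
```

[With the extra MSB, wr == rd means empty, while the pointers differing only in the wrap bit means full. Drop that bit and a full FIFO looks empty, which is one way a char can get silently lost.]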
At this point I don't think it is a startup issue, because the 15 char case works. Something is wrong with the FIFO. Does the UART have a flag from the transmitter shift register to say it is empty? If so, you can set up a handshake with that to control the write to the FIFO so that the FIFO never holds more than one char. That should make the problem go away and show that it is in the FIFO interface.

I got your email and will give a reply tomorrow.

Rick
Article: 138066
On Feb 4, 11:09 pm, Matthew Hicks <mdhic...@uiuc.edu> wrote:
> > Enes Erdin <eneser...@gmail.com> wrote:
>
> >> Long time passed since the last post but I wonder your opinions about
> >> this subject for today's technology. I am using a core2duo 1.8 GHz PC
> >> and for my design P&R takes 40+ minutes and want to buy a new PC
> >> powered by an AMD processor because of its price but I can think of some
> >> other configuration if it is worth it, related to your opinions about:
>
> >> - dual core, quad core comparison
> >> - cache for the processor comparison
>
> > I have a triple core Phenom, which I bought because of the low price.
> > They don't only come in powers of two.
>
> Yes they do, AMD's triple cores are four cores with one turned off. This
> was required to increase yields to a profitable point.

Do you know this for a fact? When it comes to PCs I have read a lot of rumor that gets repeated so much that it becomes "fact". Did AMD say this, or did someone actually open the package to see the die, or verify it some other way? I ask because when the tricore part was about ready for release I read that it was a new chip layout. Essentially they were shooting for a sweet spot where they got more performance than a dual core, but a lower price than a quad core. Not that there is anything wrong with a "binned" part.

Rick
Article: 138067
On Feb 4, 8:28 am, Enes Erdin <eneser...@gmail.com> wrote:
> Long time passed since the last post but I wonder your opinions about
> this subject for today's technology. I am using a core2duo 1.8 GHz PC
> and for my design P&R takes 40+ minutes and want to buy a new PC
> powered by an AMD processor because of its price but I can think of some
> other configuration if it is worth it, related to your opinions about:
>
> - dual core, quad core comparison
> - cache for the processor comparison
>
> Your suggestions will guide me.

I have no first hand knowledge of this, but when I discussed this back in the days of dual cores being two CPUs on one motherboard, it was claimed that a larger cache was not of much value because of the huge data sets. Essentially it was claimed that performance depended on CPU clock speed and memory speed, maybe even more on the memory speed than the CPU clock.

The issue of the number of CPUs I am not at all sure of. I can't imagine that, with nearly all new PCs having at least two cores, the tool vendors won't be taking advantage of them sometime soon. If you could improve the performance of your software by even just 50%, wouldn't that be worth some effort? Certainly that would be good for marketing!

Rick
Article: 138068
On 4 Şubat, 23:34, rickman <gnu...@gmail.com> wrote:
> I have no first hand knowledge of this, but when I discussed this back
> in the days of dual cores being two CPUs on one motherboard, it was
> claimed that larger cache was not of much value because of the huge
> data sets.

AMD and Intel people always discuss such topics but it is really confusing. Your explanation makes sense to me for synthesis and P&R. For software guys a larger cache will be good; however, I really wonder about (and don't know) its effect on P&R. One of my colleagues insists on a larger cache, but a larger cache almost doubles the price in my country.

> Essentially it was claimed that performance was based on
> CPU clock speed and memory speed, I want to say more on the memory
> speed than the CPU clock even.

I feel that the amount and the speed of the memory really affect the process.

> If you could improve the performance of your software by even
> just 50%, wouldn't that be worth some effort?
> Certainly that would be good for marketing!

That's right. By the way, I also wish Xilinx would try to improve the performance of ISE :)
Article: 138069
On Feb 4, 8:28 pm, Enes Erdin <eneser...@gmail.com> wrote:
> Long time passed since the last post but I wonder your opinions about
> this subject for today's technology. I am using a core2duo 1.8 GHz PC
> and for my design P&R takes 40+ minutes and want to buy a new PC
> powered by an AMD processor because of its price but I can think of some
> other configuration if it is worth it, related to your opinions about:
>
> - dual core, quad core comparison
> - cache for the processor comparison
>
> Your suggestions will guide me.
>
> Regards,
>
> --enes

As others pointed out MAP and PAR are not maltreated, which means you won't get too many benefits (except playing your favorite video game while P&R is running).

BUT, if you are running Linux and just need to recompile everything without using the GUI, you can typically do that by entering "make bits". If you have multiple cores/cpus, you can replace that with "make -jN bits", where N is the number of cpus/cores you have. This way, make will try to build independent targets in parallel. This usually speeds up the initial synthesis of the various blocks.

In my case it made a lot of sense to have a multi cpu/multi core system, as I spend more time in verification at the RTL level. Here I structure my test bench such that I can run various tests in parallel ....

When considering multi cpu/core systems, you should also evaluate the memory requirements every potential job will have. I decided for my system that I would be better off with two dual core CPUs, each with 8G of memory, over a single CPU with 4 cores. There is not a lot of sense in having 4 cores if they will be memory bandwidth limited. When/if I decide to upgrade, I will probably go for a 4 CPU system, with 2 or 4 cores per CPU. Also, I try to stick with Opteron CPUs, they seem to give the most bang for the money ....

just my 2 c ...

Cheers,
rudi
Article: 138070
On Feb 5, 5:09 pm, luudee <rudolf.usselm...@gmail.com> wrote:
> As others pointed out MAP and PAR are not maltreated, which
                                            ^^^^^ this should have been multi-threaded ...
that's what happens when you blindly rely on spelling checkers .... :P
> means you won't get too many benefits (except playing your ...
Article: 138071
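[The "make -jN bits" tip above only pays off when the build is written as independent make targets, so make can schedule the per-block synthesis runs in parallel. A minimal sketch of such a Makefile -- the block names and the `run_xst` / `run_par_bitgen` wrapper scripts here are made-up placeholders for illustration, not real Xilinx commands:]

```make
# Hypothetical layout: each block synthesizes independently, so
# "make -j4 bits" runs the per-block synthesis steps in parallel.
# Only the final map/par/bitgen step is serialized, since those
# tools are single-threaded anyway.
BLOCKS = cpu_core uart ddr_ctrl

bits: top.bit

# run_par_bitgen is a placeholder wrapper script, not a real tool
top.bit: $(BLOCKS:%=%.ngc)
	run_par_bitgen $^ -o $@

# run_xst is a placeholder wrapper around the synthesizer
%.ngc: %.v
	run_xst $< -o $@
```

[The point is simply that the `%.ngc` targets have no dependencies on each other, so `make -jN` is free to run up to N of them at once.]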
Hello,

I would like to know how the "configuration logic" and other built-in functions of an FPGA are implemented. For example, for a device in a 90 nm technology, does the decryptor use the same technology on the same die, or is it an ASIC that has its own features and is implemented apart? I am sorry for my silly question, but I really need answers...

Thank you

Dajjou
Article: 138072
On 5 Feb., 08:10, Svenn Are Bjerkem <svenn.bjer...@googlemail.com> wrote:
> Hi,
>
> I am in the middle of writing VHDL to read from one block-ram and
> write the read data into another block-ram, both same data width and
> address width. Device is a Spartan-3A. Now, block ram data is
> presented at the output of the RAM to read on the next clock cycle,
> and this leads to a shift in address data in the RAM to write. Address
> 0 becomes 0 (reset value on data bus), address 1 gets data from address
> 0 of source RAM, address 2 gets data from address 1 on source RAM etc.
> From what I have read about this, the data is available a few
> nanoseconds after the edge, so one suggestion was to use the falling
> edge to read data. The process that I wrote to do this doesn't allow
> me to do this so I just inserted an extra register on the write
> address to align the address with data for the write operation. There
> are probably several snags to watch for doing things this way, and I
> thought maybe I should ask how other Xilinx users solve this "problem"

The input data of the second RAM is one clock cycle delayed, so you also need to delay the addresses and control signals to this RAM by one clock cycle. No falling clock edges are required for this.

Kolja Sulimma
Article: 138073
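[The one-cycle delay described above can be checked with a cycle-by-cycle model. This is a Python sketch of the synchronous-read timing, not Spartan-3A primitives or the actual VHDL: the read data lags the read address by one clock, so the write address and write enable each get one extra register stage to line up with it.]

```python
# Cycle-by-cycle sketch of copying one synchronous-read RAM into
# another (behavioral model only).  Read data appears one clock
# *after* the read address, so the write address and write enable
# are delayed one register stage to match the data -- no
# falling-edge tricks needed.

DEPTH = 8
src = [10 * i for i in range(DEPTH)]   # source RAM contents
dst = [None] * DEPTH                   # destination RAM

rd_data = 0        # registered read-data output (0 = reset value)
wr_addr_d = 0      # write address, delayed one clock
wr_en_d = False    # write enable, delayed one clock

for cycle in range(DEPTH + 1):
    # Write port: uses the *delayed* address, which matches the
    # read data registered on the previous clock edge.
    if wr_en_d:
        dst[wr_addr_d] = rd_data
    # "Clock edge": capture the next register values.
    if cycle < DEPTH:
        rd_data = src[cycle]           # data for read address 'cycle'
    wr_addr_d, wr_en_d = cycle, cycle < DEPTH

assert dst == src                      # no off-by-one shift
```

[Delaying the write enable along with the address also means the bogus first write (address 0 getting the reset value off the data bus) never happens.]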
On Feb 5, 12:49 pm, dajjou <swissiyous...@gmail.com> wrote:
> Hello,
>
> I would like to know how "configuration logic" and other functions of
> fpga are implemented ?
> For example, let be a technology of 90 nm, it means that the
> decryptor uses the same technology or it's an asic that has its own
> features and is implemented apart ?
> I am sorry for my silly question but really I need answers...
>
> Thank you
>
> Dajjou

it is definitely the same technology and the same die... if it were separate then there would be NO security at all

but more likely you need to answer some other question, not the one that you asked

Antti
Article: 138074
Recent versions of QuartusII make use of multiple cores for some parts of the synthesis/fitter tasks, so there is an advantage there. Others may know better/newer, but I don't think ModelSim uses more than 1 core yet. However, if you have multiple cores, you can use one for surfing while another simulates, etc.