Antti Lukats wrote:
> WAU, now I have a flip-flop with my name on it!
> Thanks a lot, John, I will definitely try it out!
>
> Antti

You are very welcome, and thank you too. This is the first time since I've been doing FPGA design (XC2064) that I've felt compelled to use a post-route timing sim, and it was a good exercise. The sim showed a 1.03 ns delay from clock rising edge to output at the Slave_LUT (smallest Spartan-3, -5 speed grade), not too bad compared to the Tcko of 0.6 ns for the regular FFs.

Looks like lots of timing margin given a clock input of 100MHz, but still pay attention to the clock timing difference at the LUT inputs. The delta should be kept well below (Tilo + route_delay(master_out to slave_in)) and well below (Tilo + route_delay(slave_out to master_in)).

There may be a small worry about this circuit on start-up; I'm not sure what the chances are that it oscillates a little before settling down, depending on the clock phase when the LUTs and routing go active, if that makes any sense. (It's after midnight here, I'm going to bed, don't really feel like thinking about it.) That's what I'm hoping to hear back from you, if you see any bad behavior.

Here's the LUT3 version, including RLOCs. The reset term was to make the simulation work...

p.s. The Antti_FF is a minor pun on the inversion in the master stage. Not sure if you, Syms, John_H, or some unknown engineer who's done this circuit in the past should get the credit, but I prefer the pun.
;o)

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

library UNISIM;
use UNISIM.VComponents.all;

entity Antti_FF is
    Port ( Clock : in  STD_LOGIC;
           Div_2 : out STD_LOGIC );
end Antti_FF;

architecture Behavioral of Antti_FF is
    signal Master : std_logic := '0';
    signal Slave  : std_logic := '0';
    attribute rloc : string;
    attribute rloc of Master_LUT : label is "X0Y0";
    attribute rloc of Slave_LUT  : label is "X0Y0";
begin
    Master_LUT: LUT3
        -- Master LUT (first in the master/slave FF):
        -- holds Feedback when clock is high,
        -- passes inverted Data when clock is low.
        -- D=Data  C=Clock  F=Feedback  O=Output
        -- C: 1 1 1 1 0 0 0 0
        -- D: 1 1 0 0 1 1 0 0
        -- F: 1 0 1 0 1 0 1 0
        -- O: 1 0 1 0 0 0 1 1
        generic map ( INIT => x"A3" )
        port map ( O => Master, I0 => Master, I1 => Slave, I2 => Clock );

    Slave_LUT: LUT3
        -- Slave LUT inverts the clock sense:
        -- passes Data when clock is high,
        -- passes Feedback when clock is low.
        -- C: 1 1 1 1 0 0 0 0
        -- D: 1 1 0 0 1 1 0 0
        -- F: 1 0 1 0 1 0 1 0
        -- O: 1 1 0 0 1 0 1 0
        generic map ( INIT => x"CA" )
        port map ( O => Slave, I0 => Slave, I1 => Master, I2 => Clock );

    Div_2 <= Slave;
end Behavioral;

Article: 112276
I, too, am interested in how you compute the CRC. Beware though, I'll soon be releasing this in GPL code :)

JB

Article: 112277
"ZHI" <threeinchnail@gmail.com> wrote in message news:1163783381.788279.19070@j44g2000cwa.googlegroups.com...
> I implemented an algorithm on an FPGA board. I wrote the matrix data to
> the FPGA board from Matlab and the result came back. I was using a loop
> to execute the operation many times. It can operate normally for some
> time, but in the end it stopped with an error on the line:
>     fwrite(sx, datafile, 'int8');
> and the error message showed: serial.fwrite: A timeout occurred during
> the write operation. How could it happen? How could it execute some
> trials but hit a timeout error at some point?

Are you using handshaking for flow control?

Article: 112278
Cardbus is a forward progression path from PCMCIA. Have a look at www.pcmcia.org for the specifications; they do cost, unfortunately. The connector for PCMCIA/Cardbus has features to identify which sort of card is plugged in, and that is in the spec. There are host controllers from companies like TI if you want some other sources of info. I think Mindshare might have a book too that is worth having.

Most laptops made in the last few years with a "PCMCIA" slot support the Cardbus extension of the standard. ExpressCard is mechanically incompatible and is basically a replacement for PCMCIA/Cardbus; it is the laptop equivalent of PCI-E. It is very rapidly replacing PCMCIA/Cardbus in new laptops.

John Adair
Enterpoint Ltd.

vasile wrote:
> John Adair wrote:
> > Even if you implement Cardbus (essentially PCI) rather than
> > PCMCIA (essentially ISA) you won't get continuous 33MHz transfer other
> > than for short periods of time. ExpressCard format can go this fast
> > providing the architecture behind it can support that data rate.
>
> Hi John,
> I hadn't heard about this standard before. Is it compatible with most
> PCMCIA interfaces available on laptops? Is there documentation
> available somewhere?
>
> thx,
> Vasile
>
> > John Adair
> > Enterpoint Ltd. - Home of Tarfesock1. The Cardbus FPGA Development
> > Board.
> >
> > vasile wrote:
> > > Hi everybody,
> > >
> > > A part of a project I'm designing is a PCMCIA bus card with a 32-bit
> > > data system bus. The system included in the card has a multicore
> > > DSP, an ARM processor, some NOR FLASH, SDRAM memory and an FPGA.
> > > The FPGA is used for the PCMCIA interface to the system bus and some
> > > high-speed math as a companion for the DSP. The purpose of the whole
> > > PCMCIA interface is to transfer some data from the SDRAM into the
> > > PC, in real time at a 33MHz clock rate. The card's data system bus
> > > runs at 133MHz.
> > >
> > > How would you choose the design for the best CardBus interface,
> > > knowing there are some fast processes on the internal bus:
> > >
> > > a. using the FPGA as a slave memory selected by the DSP and
> > >    implementing a FIFO inside the FPGA. An interrupt request will
> > >    notify the PC to start downloading data and empty the FIFO.
> > > b. using DMA control over the system bus from the FPGA (FPGA as
> > >    master, DSP as slave)
> > > c. other (please detail)
> > >
> > > thank you,
> > > Vasile

Article: 112279
I am using hardware flow control.

Andrew Holme wrote:
> "ZHI" <threeinchnail@gmail.com> wrote in message
> news:1163783381.788279.19070@j44g2000cwa.googlegroups.com...
> > I implemented an algorithm on an FPGA board. I wrote the matrix data
> > to the FPGA board from Matlab and the result came back. I was using a
> > loop to execute the operation many times. It can operate normally for
> > some time, but in the end it stopped with an error on the line:
> >     fwrite(sx, datafile, 'int8');
> > and the error message showed: serial.fwrite: A timeout occurred
> > during the write operation. How could it happen? How could it execute
> > some trials but hit a timeout error at some point?
>
> Are you using handshaking for flow control?

Article: 112280
Hi Dennis, Antti,

I'm currently looking at an EP2C35U484C6 bitstream. On these bitstreams, the CRC16 is a simple CRC-CCITT:

    crc16 = ~crc_ccitt(0xffff, (uint8_t *)start_of_file + 1, file_length - 3);

or, if you want to check the CRC on the whole bitstream:

    assert(crc_ccitt(0xffff, (uint8_t *)start_of_file + 1, file_length - 1) == 0xF0B8);

(This is _not_ endian-clean -- but you get the point, on x86...)

Can you confirm this is what you used on yours?

JB

Article: 112281
hi,

where could i find the designs from Ken Chapman?

thanks,
urban

Article: 112282
<u_stadler@yahoo.de> wrote in message news:1163951169.089681.187730@m73g2000cwd.googlegroups.com...
> hi
>
> where could i find the designs from Ken Chapman?
>
> thanks
> urban

www.xilinx.com, S3E Starter Kit reference designs :)

Or, if that is too complicated, click:
http://www.xilinx.com/products/boards/s3estarter/reference_designs.htm

Note: the counter design doesn't compile on 8.2. You need to add lots of ports to the DCM_SPAR3_TEST primitive to pass it through the 8.2 tools, but once fixed it works as described.

Antti

Article: 112283
I spoke too fast; you need to checksum the whole file:

    crc16 = ~crc_ccitt(0xffff, (uint8_t *)start_of_file, file_length - 2);

    assert(crc_ccitt(0xffff, (uint8_t *)start_of_file, file_length) == 0xF0B8);

JB

Article: 112284
On 2006-11-19 08:58:20 -0700, "jbnote" <jbnote@gmail.com> said:

> I spoke too fast; you need to checksum the whole file:
>
>     crc16 = ~crc_ccitt(0xffff, (uint8_t *)start_of_file, file_length - 2);
>
>     assert(crc_ccitt(0xffff, (uint8_t *)start_of_file, file_length) == 0xF0B8);
>
> JB

I'm sorry if I misled anyone, but no, I was not calculating the checksum, only reporting it. As I said, I need to figure out the data section format before any of this is of any use. I see patterns, but so far, my data eludes me.

DaR

Article: 112285
Hello,

I'm trying to access the DDR SDRAM of a Virtex-4 ML403 evaluation board (XC4VFX12-10). I downloaded the reference implementation from the Xilinx homepage, removed the ICON and ILA parts, and synthesized it. The upload to the FPGA worked without problems, and it accessed the DDR RAM (at least the DDR RAM heated up).

After reading the reference design specification and the DDR RAM user guide, the best point to start my own modifications and implement some primitive tests seemed to be the test bench (mem_interface_top_test_bench_0.vhd). I implemented a simple state machine, controlled via the board's switches, to understand the process step by step (precharge, write, read) [1]. Of course it didn't work; the basic problem is that the state machine does not switch states. (If I synthesize just the state machine without the DDR RAM interface, the state machine works fine.)

Is there a basic error in my approach to accessing the DDR RAM (in the sense of a wrong module as a starting point)? Does anyone have a working example of how to access the DDR RAM of a Virtex-4 board using the reference implementation?

Thanks in advance,
Elmar Weber

[1] The code of the state machine with which I replaced the test bench is rather long; if anyone wants to take a look, I can post it separately.

Article: 112286
"Antti" <Antti.Lukats@xilant.com> wrote in message news:1163888792.295560.43610@k70g2000cwa.googlegroups.com...
>
> two blocks only work in case of 128MB space that is made of 2 times 64kb
> but if i have say 32kb + 8kb then the second one is not working :(

IIRC it only works when the blocks are of the same size... So, if you want 40K, you might need to make it out of 5 8K blocks...

/Mikhail

Article: 112287
They can't back off on the religion - as you note, they think we are dumb. It is very hard to get ASIC companies to provide a guaranteed delay element, let alone FPGA companies. There are many applications that need fixed delay elements. I even remember when a delay element was the standard way for DRAM timing to be generated.

One way to add an unoptimized buffer is to allocate two input pins of the FPGA. For goodness' sake, we have 8 billion pins these days. Externally pull one high and one low. Then just AND or OR your logic as needed with these input pins. Since they are static, their routing delays are minimal (and desired). In the end, if they aren't used, just call them "Spare FPGA Inputs" and everyone will think you thought ahead!

Trevor

> They should back off this religious devotion to fully synchronous
> logic and give us a couple of dozen programmable true delay elements,
> scattered about the chip. But they won't because it's not politically
> correct, and because they figure that we're so dumb that we'd get into
> trouble using them.
>
> John

Article: 112288
"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message news:1ejvl2d3j9tj96emac307rtjfr2l4o40q1@4ax.com...
> On Sun, 19 Nov 2006 00:24:34 GMT, PeteS <peter.smith8380@ntlworld.com>
> wrote:
>
>> John Larkin wrote:
>>> On Sat, 18 Nov 2006 23:23:30 GMT, PeteS <peter.smith8380@ntlworld.com>
>>> wrote:
>>>
>>>> When I do pure hardware I do not have to try and figure out what the
>>>> hell was done to implement my statements.
>>>>
>>>> This was a major issue on a design I did about 4 years ago where I
>>>> interfaced an upstream bus to the busses on 6 devices (with a lot of
>>>> other stuff) and the synthesis / PAR etc. kept optimising away certain
>>>> things that were there to maintain the timing. The response I got was
>>>> 'well, use pure synchronous design' but in this case it was simply not
>>>> possible (an issue I am sure you'll understand).
>>>
>>> Yup, this *is* the real world. We recently had to do a clock-edge
>>> deglitcher, using delay elements. It couldn't be synchronous, because
>>> we were, well, deglitching the clock! Ditto stuff like charge-pump
>>> phase detectors, where you really need exactly what you need, delays
>>> and all.
>>>
>>>> I once deliberately did a DeMorgan transform by hand because I did not
>>>> trust the tool to do it right. (Code available on request ;) )
>>>>
>>>> Cheers
>>>>
>>>> PeteS
>>>
>>> One thing you can do is add a pulldown to a pin (or ground it) and
>>> call that signal ZERO or something. Then just OR it with things to
>>> create new, buffered, delayed things. If you need more, run it through
>>> a shift register and create ZERO1, ZERO2, etc. The compiler can't
>>> optimize them out!
>>>
>>> So the FPGA software people ought to provide us an irreducible ZERO
>>> without wasting a pin, or a buffer that stays a buffer always.
>>>
>>> So, is there a block of logic so complex that the compiler can't
>>> figure out that it indeed will always output a zero? Maybe the MSB of
>>> a thousand-year counter, but that wastes flops. Maybe some small but
>>> clever state machine that always makes zero but is too tricky to be
>>> optimized?
>>>
>>> John
>>
>> As noted, I once did a DeMorgan transform by hand (a simple one too) and
>> it materially changed the compiled output. (For those who understand the
>> transform, don't get upset at the comments; they were there for people
>> who _didn't_.)
>>
>> Here it is:
>>
>> ******************************
>>
>>   else begin
>>     cs0 <= cs_l;          // make a direct copy of the cs signal
>>     cs1 <= (cs0 | cs_l);  // Note - this uses a DeMorgan transform
>>     // the formula really is this: cs1 <= !(!cs_l & !cs0)
>>     // rather than cs1 <= (!cs_l & !cs0)
>>     // Note the extra output inversion, which renders an
>>     // inversion unnecessary.
>>     // DeMorgan's theorem is this:
>>     //   <y = !a & !b>  ===  <!y = a | b>
>>     // i.e. invert all signals and change AND to OR and vice
>>     // versa. Note this works only on basic functions
>>     // ( &, |, ! ) (AND, OR, INVERT).
>>     // For a valid select, we need two consecutive samples of
>>     // cs_l low. Latch a low, then look at the latch and the
>>     // signal on the pin. If both are low, cs1 goes low. If
>>     // either is high (glitch, runt pulse) then cs1 stays high.
>>     // We trigger on internal cs (cs1) going low.
>>     // Why did I use it? Because it requires no inverters and
>>     // therefore saves me a gate delay by simply using a LUT
>>     // without inversion.
>>     //
>>     // Of course, the tools *might* do this, but I can't
>>     // guarantee it, so I'll *****ing do it myself.
>>   end
>>
>> *****************************************
>>
>> Cheers
>>
>> PeteS
>
> They should back off this religious devotion to fully synchronous
> logic and give us a couple of dozen programmable true delay elements,
> scattered about the chip. But they won't because it's not politically
> correct, and because they figure that we're so dumb that we'd get into
> trouble using them.
>
> John

Well, synchronous design is easier and more understandable than asynchronous. So the "specialists" tell the world synchronous design is the only way, rather than confess they are unable to make reliable asynchronous ones. And let's be honest: a lot of old asynchronous designs are real nightmares of critical race conditions, ad hoc inserted RC delays, glitches and tricks. Good asynchronous engineering is rare, and was rare even in the days when synchronous design was not yet invented. (Practised may be the better word.)

I was once asked to "debug" a time-critical circuit. Looking at the asynchronous circuit, which failed only once a week or so, and reading the full page of explanations about its inner workings, I discarded the whole design, doubled the clock frequency and made a rock-solid synchronous circuit. They were almost disappointed it was that "easy". I could have made disciples for "synchronous only", but I'm not a true believer myself.

Times they are a-changing. I heard rumours asynchronous design is having a revival, as it can be used to build faster circuits. There should be special software for it, meant to design processors and the like. It's that software that makes me shiver. When the first micros hit the market, an engineer complained that those programmers used lots of extra space; he would have been fired for using only one percent of that number in extra gates. (Nothing to do with old uncle Billy :) Things did not get better ever since, but the really bad development is the habit of those programmers to think and make decisions for me. I really hate that. Almost all of these programmers have a theoretical background in "software engineering". Ever visited such a college? Lots of computers, tens of masters and hundreds of students, and no one could handle a soldering iron. There wasn't one in the whole place. An assistant brought his own from home when he needed to repair a cable.

None of those "engineers" ever wrote bug-free software, yet they expect hardware designs to be just that. Nevertheless, if something goes wrong, it's inevitably a hardware failure. I once had technicians exchange hardware seven times before they stopped denying it was a bug in the software. Another time one of my colleagues disassembled a floppy driver to prove it did not meet the drive specifications. (It could not format a floppy, and they insisted we used the wrong brand of floppy although we tried every brand we could lay our hands on.) They complained about the disassembly rather than admit their own failure.

Well, maybe we're doomed. Maybe I'm going to write software. What's worse? :)

petrus bitbyter

Article: 112289
On Sun, 19 Nov 2006 10:30:53 -0800, "Trevor Coolidge" <tjc-sda@cox.com> wrote:

> They can't back off on the religion - as you note, they think we are
> dumb. It is very hard to get ASIC companies to provide a guaranteed
> delay element, let alone FPGA companies. There are many applications
> that need fixed delay elements. I even remember when a delay element
> was the standard way for DRAM timing to be generated.
>
> One way to add an unoptimized buffer is to allocate two input pins of
> the FPGA. For goodness' sake, we have 8 billion pins these days.
> Externally pull one high and one low. Then just AND or OR your logic as
> needed with these input pins. Since they are static, their routing
> delays are minimal (and desired). In the end, if they aren't used, just
> call them "Spare FPGA Inputs" and everyone will think you thought ahead!
>
> Trevor
>
>> They should back off this religious devotion to fully synchronous
>> logic and give us a couple of dozen programmable true delay elements,
>> scattered about the chip. But they won't because it's not politically
>> correct, and because they figure that we're so dumb that we'd get into
>> trouble using them.
>>
>> John

What John is asking and what you're asking are two different things. What you're asking is not doable at all, at least not at a cost you'd be willing to pay. There are no "guaranteed delay elements" in ASICs. The delay variation across the whole PVT range is around 4 to 6 times. You may have programmable delay elements, which need to be calibrated (and recalibrated as temperature and voltage change). I don't think the reason you don't get this off the shelf is political correctness but self-preservation. If you change processes as often as X & A are changing and you want the designs to run in the next chip, you have to use fully synchronous implementations. Absolute delays, with or without calibration, don't port to different processes too well.

Article: 112290
On Sun, 19 Nov 2006 20:27:06 +0100, "petrus bitbyter" <pieterkraltlaatditweg@enditookhccnet.nl> wrote:

> "John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in
> message news:1ejvl2d3j9tj96emac307rtjfr2l4o40q1@4ax.com...
[snip]
>> They should back off this religious devotion to fully synchronous
>> logic and give us a couple of dozen programmable true delay elements,
>> scattered about the chip. But they won't because it's not politically
>> correct, and because they figure that we're so dumb that we'd get into
>> trouble using them.
>>
>> John
>
> Well, synchronous design is easier and more understandable than
> asynchronous.
[snip]
> I once had technicians exchange hardware seven times before they
> stopped denying it was a bug in the software. Another time one of my
> colleagues disassembled a floppy driver to prove it did not meet the
> drive specifications. (It could not format a floppy and they insisted
> we used the wrong brand of floppy although we tried every brand we
> could lay our hands on.) They complained about the disassembly rather
> than admit their own failure.
>
> Well, maybe we're doomed. Maybe I'm going to write software. What's
> worse? :)
>
> petrus bitbyter

I'm thinking about it... can't you do all analog functions with a uP now anyway ?:-)

...Jim Thompson
--
| James E. Thompson, P.E.                           |  mens     |
| Analog Innovations, Inc.                          |    et     |
| Analog/Mixed-Signal ASIC's and Discrete Systems   |   manus   |
| Phoenix, Arizona            Voice:(480)460-2350   |           |
| E-mail Address at Website    Fax:(480)460-2142    | Brass Rat |
| http://www.analog-innovations.com                 |   1962    |

I love to cook with wine. Sometimes I even put it in the food.

Article: 112291
"Jim Thompson" <To-Email-Use-The-Envelope-Icon@My-Web-Site.com> wrote in message news:o5c1m25kq02cu2vkomgqells4v1gse1ut8@4ax.com...
> On Sun, 19 Nov 2006 20:27:06 +0100, "petrus bitbyter"
> <pieterkraltlaatditweg@enditookhccnet.nl> wrote:
>
>> "John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in
>> message news:1ejvl2d3j9tj96emac307rtjfr2l4o40q1@4ax.com...
> [snip]
>>> They should back off this religious devotion to fully synchronous
>>> logic and give us a couple of dozen programmable true delay elements,
>>> scattered about the chip. But they won't because it's not politically
>>> correct, and because they figure that we're so dumb that we'd get into
>>> trouble using them.
>>>
>>> John
>>
>> Well, synchronous design is easier and more understandable than
>> asynchronous.
> [snip]
>> I once had technicians exchange hardware seven times before they
>> stopped denying it was a bug in the software. Another time one of my
>> colleagues disassembled a floppy driver to prove it did not meet the
>> drive specifications. (It could not format a floppy and they insisted
>> we used the wrong brand of floppy although we tried every brand we
>> could lay our hands on.) They complained about the disassembly rather
>> than admit their own failure.
>>
>> Well, maybe we're doomed. Maybe I'm going to write software. What's
>> worse? :)
>>
>> petrus bitbyter
>
> I'm thinking about it... can't you do all analog functions with a uP
> now anyway ?:-)
>
> ...Jim Thompson

You're some sort of smug bastard on the quiet. Obviously they can; it's just that your crappy analog electronics isn't fast enough to make it viable. Instead of sitting on the sidelines and sniping you should get your finger out.

DNA

Article: 112292
mk wrote:
> On Sun, 19 Nov 2006 10:30:53 -0800, "Trevor Coolidge"
> <tjc-sda@cox.com> wrote:
>
>> They can't back off on the religion - as you note, they think we are
>> dumb. It is very hard to get ASIC companies to provide a guaranteed
>> delay element, let alone FPGA companies. There are many applications
>> that need fixed delay elements. I even remember when a delay element
>> was the standard way for DRAM timing to be generated.
>>
>> One way to add an unoptimized buffer is to allocate two input pins of
>> the FPGA. For goodness' sake, we have 8 billion pins these days.
>> Externally pull one high and one low. Then just AND or OR your logic
>> as needed with these input pins. Since they are static, their routing
>> delays are minimal (and desired). In the end, if they aren't used,
>> just call them "Spare FPGA Inputs" and everyone will think you thought
>> ahead!
>>
>> Trevor
>>
>>> They should back off this religious devotion to fully synchronous
>>> logic and give us a couple of dozen programmable true delay elements,
>>> scattered about the chip. But they won't because it's not politically
>>> correct, and because they figure that we're so dumb that we'd get
>>> into trouble using them.
>>>
>>> John
>
> What John is asking and what you're asking are two different things.
> What you're asking is not doable at all, at least not at a cost you'd
> be willing to pay. There are no "guaranteed delay elements" in ASICs.
> The delay variation across the whole PVT range is around 4 to 6 times.
> You may have programmable delay elements, which need to be calibrated
> (and recalibrated as temperature and voltage change). I don't think the
> reason you don't get this off the shelf is political correctness but
> self-preservation. If you change processes as often as X & A are
> changing and you want the designs to run in the next chip, you have to
> use fully synchronous implementations. Absolute delays, with or without
> calibration, don't port to different processes too well.

They do have delay elements, but they tend to be encapsulated. There are pin delay elements, and the multi-phase DLLs use calibrated delays (which is why they are granular). As part of their operation, the DLLs have to lock and maintain tracking of temperature/process, and I've thought that one thing the vendors could do is allow user access to that calibration/tap pointer register - but that is a niche market.

-jg

Article: 112293
On Sun, 19 Nov 2006 20:27:06 +0100, "petrus bitbyter" <pieterkraltlaatditweg@enditookhccnet.nl> wrote: > > > >"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> schreef in >bericht news:1ejvl2d3j9tj96emac307rtjfr2l4o40q1@4ax.com... >> On Sun, 19 Nov 2006 00:24:34 GMT, PeteS <peter.smith8380@ntlworld.com> >> wrote: >> >>>John Larkin wrote: >>>> On Sat, 18 Nov 2006 23:23:30 GMT, PeteS <peter.smith8380@ntlworld.com> >>>> wrote: >>>> >>>>> When I do pure hardware I do not have to try and figure out what the >>>>> hell was done to implement my statements. >>>>> >>>>> This was a major issue on a design I did about 4 years ago where I >>>>> interfaced an upstream bus to the busses on 6 devices (with a lot of >>>>> other stuff) and the synthesis / PAR etc kept optimising away certain >>>>> things that were there to maintain the timing. The response I got was >>>>> 'well, use pure synchronous design' but in this case it was simply not >>>>> possible (am issue I am sure you'll understand). >>>> >>>> Yup, this *is* the real world. We recently had to do a clock-edge >>>> deglitcher, using delay elements. It couldn't be synchronous, because >>>> we were, well, deglitching the clock! Ditto stuff like charge-pump >>>> phase detectors, where you really need exactly what you need, delays >>>> and all. >>>> >>>> >>>>> I once deliberately did a DeMorgan transform by hand because I did nto >>>>> trust the tool to do it right. (Code available on request ;) ) >>>>> >>>>> Cheers >>>>> >>>>> PeteS >>>> >>>> >>>> One thing you can do is add a pulldown to a pin (or ground it) and >>>> call that signal ZERO or something. Then just OR it with things to >>>> create new, buffered, delayed things. If you need more, run it through >>>> a shift register and create ZERO1, ZERO2, etc. The compiler can't >>>> optimize them out! >>>> >>>> So the FPGA software people ought to provide us an irreducible ZERO >>>> without wasting a pin, or a buffer that stays a buffer always. 
>>>> >>>> So, is there a block of logic so complex that the compiler can't >>>> figure out that it indeed will always output a zero? Maybe the MSB of >>>> a thousand-year counter, but that wastes flops. Maybe some small but >>>> clever state machine that always makes zero but is too tricky to be >>>> optimized? >>>> >>>> John >>>> >>> >>>As noted, I once did a DeMorgan transform by hand (simple one too) and >>>it materially changed the compiled output. (For those who understand the >>>transform, don't get upset at the comments; they were there for people >>>who _didn't_). >>> >>>Here it is: >>> >>>****************************** >>> >>> else begin >>> cs0 <= cs_l; // make a direct copy of the cs signal >>> cs1 <= (cs0 | cs_l); // Note - this uses a DeMorgan transform >>> // the formula really is this : cs1 <= !(!cs_l & !cs0) >>> // rather than cs1 <= (!cs_l & !cs0) >>> // Note the extra output inversion, which renders an >>> // inversion unnecessary >>> // DeMorgan's theorem is this >>> // <y = !a & !b> === <!y = a | b> >>> // i.e. invert all signals and change and to or and vice >>> // versa. Note this works only on basic functions >>> // ( &, |, ! ) (AND, OR, INVERT) >>> // for a valid select, we need two consecutive samples of >>> // cs_l low. Latch a low, then look at the latch and the >>> // signal on the pin. If both are low, cs1 goes low. If >>> // either are high (glitch, runt pulse) then cs1 stays high >>> // We trigger on internal cs (cs1) going low >>> // Why did I use it? Because it requires no inverters and >>> // therefore saves me a gate delay by simply using a LUT >>> // without inversion >>> // >>> // Of course, the tools *might* do this, but I can't >>> // guarantee it, so I'll *****ing so it myself. 
>>> end >>>***************************************** >>> >>>Cheers >>> >>>PeteS >> >> They should back off this religious devotion to fully synchronous >> logic and give us a couple of dozen programmable true delay elements, >> scattered about the chip. But they won't because it's not politically >> correct, and because they figure that we're so dumb that we'd get into >> trouble using them. >> >> >> John >> > >Well, synchronous design is easier and more understandable then >asynchronous. So the "specialists" tell the world synchronous design is the >only way rather then confess they are unable to make reliable asynchronous >ones. And let's be honest. A lot of old asynchronous designs are real >nightmares of critical race conditions, ad hoc inserted RC delays, glitches >and tricks. Good asynchronous engineering is rare and was rare even in the >days synchronous design was not yet invented. (Practised may be the better >word.) There are some people doing serious (cpu-scale) async logic, but that's pretty thin-air stuff. What I meant was that, even in a properly synchronous design, there are times where a real, analog delay is a useful design element, and it sometimes can save a full clock worth of time... not all data paths need a full clock to settle. Plus, it would be nice to be able to dynamically tune data:clock relationships and such. I have one architecture that we use a lot that simply must have a real, unclocked one-shot (among other things, it stops and resets the clock oscillator!) so we have to go off-chip to a discrete r-c. >I ever was asked to "debug" a time critical circuit. Looking at the >asynchronous circuit which failed only once a week or so and reading the >full page of explanations about the inner workings I discarded the whole >design, doubled the clock frequency and made a rock solid synchronous >circuit. They were almost disappointed it was that "eas˙". I could have made >disciples for "synchronous only" but I'm not a true believer myself. 
>Times they are a changing. I heard rumours asynchronous design has a revival >as it can be used to build faster circuits. There should be special software >for it, meant to design processors and the like. It's those software that >makes me shiver. When the first micros hit the market, an engineer >complained that those programmers used lots of extra space. He would have >been fired by using only one percent of that number in extra gates. (Nothing >to do with old uncle Billy :) Things did not grow better ever since but the >real bad development is the habit of those programmers to think and make >decisions for me. I really hate that. Almost all of these programmers have a >theoretical background in "software engineering". Ever visited such a >college with lots of computers, tens of masters and hundreds of students and >no one could handle a soldering iron. There was none in the whole place. I toured the Cornell EE department recently. 95% of the screens are on PC's, maybe 5% are oscilloscopes. No soldering irons in sight, just those drecky white plastic breadboards. Tulane, my alma mater, used Katrina as an excuse for eliminating the EE department entirely. I guess those labs are too expensive, compared to one TA teaching 400 kids art history in a lecture hall, all at once. > An >assistant brought his own one from home when he needs to repair a cable. >None of those "engineers" ever wrote bugfree software, but expect hardware >designs to be it. Nevertheless, if something goes wrong, it's inevitable a >hardware failure. I ever had technicians to exchange hardware seven times >before they stopped to deny it to be a bug in the software. Another time one >of my collegues disassembled a floppy driver to prove it did not met the >drive specifications. (It could not format a floppy and they insisted we >used the wrong brand of floppy although we tried every brand we could lay >our hands on.) They complained about the disassembly rather then admit their >own failure. 
Yeah, there is a lot of ad-hoc async stuff around, full of 555s and
one-shots and race conditions and such. Disciplined sync design should be
the starting point from which to cheat.

> Well, maybe we're doomed. Maybe I'm going to write software. What's
> worse? :)

I'm stuck, all this month at least, rewriting a disastrous 14,000-line
mess (68K assembly) that took over a man-year of work to make this bad.
Programming is OK, but after a couple of weeks of it I get depressed...
too much like bookkeeping. I can design hardware forever. Gotta do the
ghastly serial DAC driver next. The Xilinx bit-bang serial config thing
worked like a charm, even blinking an LED during the loop.

John

Article: 112294
> Well, synchronous design is easier and more understandable than
> asynchronous. So the "specialists" tell the world synchronous design is
> the only way rather than confess they are unable to make reliable
> asynchronous

There's rarely "the only" way. However, the alternatives may be hard ;)

> real bad development is the habit of those programmers to think and make
> decisions for me. I really hate that.

If you have access to the source, those issues are easier to work around.

> Almost all of these programmers have a theoretical background in
> "software engineering". Ever visited such a college with lots of
> computers, tens of masters and hundreds of students and no one could
> handle a soldering iron. There was none in the whole place. An assistant
> brought his own one from home when he needs to repair a cable.

In many places students get by on pure theory and the occasional lab.
Access to labs is scarce outside the formalised sphere, which will deter
many people from trying on their own. Theoretical tests are rewarded
substantially, and that is what you get. I heard an interesting comment
once: students are not to be trusted with untyped languages because they
can't handle them. STILL the same people are expected to handle advanced
math with pen & paper. Maybe they expect correct math and a spaghetti
implementation? :-)

> None of those "engineers" ever wrote bugfree software, but expect
> hardware designs to be it. Nevertheless, if something goes wrong, it's
> inevitable a

Maybe because failing to pay attention in hardware has such directly dire
consequences? :) You can't undo/recompile the magic smoke.

Article: 112295
> I'm stuck, all this month at least, rewriting a disastrous 14,000-line
> mess (68K assembly) that took over a man-year of work to make this bad.
> Programming is OK, but after a couple of weeks of it I get depressed...
> too much like bookkeeping. I can design hardware forever.

Can't you rewrite it in C or some such?

Article: 112296
On Sat, 18 Nov 2006 12:24:16 -0800, the renowned John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

> But hey, this is a revelation:
>
> Hardware design has always been comforting because it is direct, simple,
> visible, wysiwyg, physical, and generally reliable. The tools,
> oscilloscopes and such, are approachable and dependable. I can use a
> 30-year-old tube-type TEK oscilloscope to debug the most modern analog
> or digital circuits, without downloading and installing service packs.
>
> Software is abstract, indirect, bizarre, and unreliable. The tools are
> buggy, bloated, always changing, unpredictable, pig slow, and seldom
> backwards-compatible. I can't use current-gen tools to edit a
> two-year-old FPGA design, and I'm lucky if I can somehow still find and
> run the older tools.
>
> So FPGAs, VHDL, and the associated software tools are the trojan horse
> that's finally letting the software people get revenge, finally allowing
> them to force us hardware designers to depend on (and endlessly pay for)
> their byzantine and unreliable methodologies, to trap us in the
> gotta-upgrade-but-every-generation-has-more-new-bugs loop.
>
> And the new Windows-based scopes and logic analyzers, of course... same
> idea.
>
> John

How about programs such as SolidWorks that are only one-way upward
compatible? I.e. you can open a part or assembly file created in SW 2005
with SW 2006, but once you save it in 2006 it can never again be opened in
2005 (or 2004, or 2003, etc.). Thus forcing everyone to upgrade...

Best regards,
Spehro Pefhany

-- 
"it's the network..."                    "The Journey is the reward"
speff@interlog.com      Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers: http://www.speff.com

Article: 112297
On 19 Nov 2006 21:04:18 GMT, pbdelete@spamnuke.ludd.luthdelete.se.invalid
wrote:

>> I'm stuck, all this month at least, rewriting a disastrous 14,000-line
>> mess (68K assembly) that took over a man-year of work to make this bad.
>> Programming is OK, but after a couple of weeks of it I get depressed...
>> too much like bookkeeping. I can design hardware forever.
>
> Can't you rewrite it in C or some such?

I'm C-phobic. It looks like monkeys pounding on typewriters to me. The
68K is a beautiful machine to program in absolute assembly, bare-metal,
no libraries or linkers or anything. A typical embedded program will run
4k lines, 4-8 kbytes, and just work, with very little (under 10%)
debugging time.

John

Article: 112298
"ZHI" <threeinchnail@gmail.com> wrote in message news:1163937858.531219.138760@h48g2000cwc.googlegroups.com... > Andrew Holme wrote: >> "ZHI" <threeinchnail@gmail.com> wrote in message >> news:1163783381.788279.19070@j44g2000cwa.googlegroups.com... >> >I implemented an algorithm into fpga board. I wrote the Matrix data to >> > FPGA board from Matlab and the result came back. I was using a loop to >> > execute the operation many times. It can operate normally for some >> > times, but in the end it stopped with the error in the line : >> > fwrite(sx,datafile,'int8'); >> > and the error messege showed : serial.fwrite A timeout occurred during >> > the write operation. How could it happen? How could it execute some >> > trials but happen time out error at some time? >> > >> >> Are you using handshaking for flow control? >I am using hardware for flow control. Maybe it timed-out because the receiver would not accept any more data.Article: 112299
John Larkin wrote:
>
> They should back off this religious devotion to fully synchronous logic
> and give us a couple of dozen programmable true delay elements,
> scattered about the chip. But they won't, because it's not politically
> correct, and because they figure that we're so dumb that we'd get into
> trouble using them.
>
> John

Virtex-4 has them. They are the IDELAY elements, which give you 64 steps
of delay with 75 ps granularity.
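[Editor's note: for readers who want to try the IDELAY elements mentioned above, here is a minimal sketch of a fixed-tap input delay, in the same VHDL style as the Antti_FF example earlier in the thread. This fragment is illustrative only and is not from the thread: the entity name, port names, and the tap count of 24 are made up, and the generic and port names are the Virtex-4 UNISIM ones as best I recall them, so check the current Xilinx libraries guide before relying on it. Note that IDELAY requires a companion IDELAYCTRL calibrated from a reference clock (nominally 200 MHz).]

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
library UNISIM;
use UNISIM.VComponents.all;

-- Hypothetical wrapper: a fixed input delay of roughly 24 * 75 ps.
entity Delayed_Input is
    Port ( Refclk_200 : in  STD_LOGIC;   -- reference clock for calibration
           Reset      : in  STD_LOGIC;
           Din        : in  STD_LOGIC;   -- signal to be delayed
           Dout       : out STD_LOGIC ); -- delayed copy of Din
end Delayed_Input;

architecture Behavioral of Delayed_Input is
begin
    -- One IDELAYCTRL per clock region keeps the tap delay calibrated
    -- against process, voltage, and temperature drift.
    Ctrl: IDELAYCTRL
        port map ( REFCLK => Refclk_200,
                   RST    => Reset,
                   RDY    => open );

    -- Fixed delay: 24 taps at ~75 ps each is about 1.8 ns. Using
    -- IOBDELAY_TYPE => "VARIABLE" with the C/CE/INC ports would allow
    -- tuning the tap count at run time instead.
    Dly: IDELAY
        generic map ( IOBDELAY_TYPE  => "FIXED",
                      IOBDELAY_VALUE => 24 )
        port map ( I   => Din,
                   O   => Dout,
                   C   => '0',
                   CE  => '0',
                   INC => '0',
                   RST => '0' );
end Behavioral;
```

This only shifts the data:clock relationship at an input; it is not the free-running on-fabric delay element John is asking for, but it covers the common case of de-skewing a pin.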