eric@telebit.com (Eric Smith) writes:
>I've been thinking about trying to use a Xilinx part with a parallel EPROM, and having it hold my CPU in reset until it is configured. The configuration loaded into the Xilinx would be designed to let the CPU access the EPROM for its code. The code (and reset vector) obviously have to be in a part of the EPROM not used by the Xilinx configuration. This would let me make my board completely soft-configurable, particularly if I use EEPROM or Flash instead of EPROM.
>
>Has anyone tried this sort of thing?
>
>Eric

Here at the Department of Integrated Circuits at the Technical University in Braunschweig, a complete 32-bit RISC processor has been developed, with a 5-stage pipeline, 32 registers, interrupts, a branch-target cache, a multi-purpose cache, etc. The design was afterwards fabricated as a gate array by LSI Logic. My part was to design a small and simple test board for the RISC, and the result was very similar to your idea: the circuit was built with only one Xilinx 3042-70 FPGA, four 32Kx8 SRAMs, one 64Kx8 EPROM, some glue logic realized as a state machine in one GAL 16V8 plus two decoders (16V8 and 20V8), and a well-known TL7705 reset generator.

After reset, the state machine holds the RISC processor in the reset state; its outputs are tri-stated. The FPGA is configured in master parallel high mode, with the data coming from the EPROM. These 8 data bits are the configuration data bits d[7..0] of the FPGA and the data bits d[31..24] of the RISC. After configuration, the 32-bit RISC processor can access the 8-bit EPROM 32 bits wide: in the first three cycles, the data is read from the EPROM and copied to the FPGA outputs connected to d[23..0]; in the fourth cycle the data goes directly to the RISC's d[31..24]. So the RISC can copy the complete data from the EPROM to the SRAM. After copying, the application is executed from the SRAM, which is much faster and can be accessed 32 bits wide.

The FPGA is then no longer needed for booting and can be reconfigured by the RISC in slave serial mode as a universal peripheral controller, e.g. a parallel-to-serial converter or an LCD interface. I also use the ability to configure the FPGA through a download cable, so with a special configuration file I can monitor the data and address buses and transfer them serially to a PC. If there is any need, a hardware breakpoint comparator can be implemented in the FPGA as well.

For microprocessor prototyping, I have had my best experiences using an FPGA: I can change and improve the hardware without changing the layout.

Gerrit. ----------------------- telkamp@eis.cs.tu-bs.de

Article: 426
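For illustration, the byte assembly Gerrit describes can be modelled in C roughly as below. This is only a behavioural sketch: the byte ordering, the function names, and the assumption of a simple byte-addressable EPROM image are mine, not taken from the actual board.

    #include <stdint.h>
    #include <stddef.h>

    /* Behavioural sketch (assumed byte order): the FPGA latches the first
     * three EPROM bytes onto d[23..0], and the fourth byte is driven
     * straight through to the RISC's d[31..24], so every fourth read
     * delivers one complete 32-bit word. */
    static uint32_t assemble_word(const uint8_t *eprom, size_t byte_addr)
    {
        uint32_t w;
        w  = (uint32_t)eprom[byte_addr + 0];        /* latched -> d[7..0]   */
        w |= (uint32_t)eprom[byte_addr + 1] << 8;   /* latched -> d[15..8]  */
        w |= (uint32_t)eprom[byte_addr + 2] << 16;  /* latched -> d[23..16] */
        w |= (uint32_t)eprom[byte_addr + 3] << 24;  /* direct  -> d[31..24] */
        return w;
    }

    /* Boot loop: copy the application image from the 8-bit EPROM into the
     * 32-bit-wide SRAM, after which execution continues from SRAM. */
    void copy_eprom_to_sram(const uint8_t *eprom, uint32_t *sram, size_t n_words)
    {
        for (size_t i = 0; i < n_words; i++)
            sram[i] = assemble_word(eprom, i * 4);
    }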
In article <3a0jpj$8fa$1@mhade.production.compuserve.com> kerry@altera.com (Kerry Veenstra) <72712.1243@CompuServe.COM> writes: >MAX+PLUS II supports most functions in the OrCAD Library Files >pldgates.lib and ttl.lib. The file orcad.lmf in the MAX+PLUS II >installation directory lists the 342 OrCAD symbols supported >automatically. > >If you wish to use a function that is not mapped in orcad.lmf, >you must add a mapping. You can map OrCAD functions >to Altera-provided functions or to any design file created with >MAX+PLUS II. > >For more information, search for "OrCAD" in MAX+PLUS II on-line help, >and go to the topic "OrCAD Interface Information". I was thinking more in terms of the overall Altera part, not macrofunctions and such. In other words, I use the MAX+PLUS II schematic capture when I'm designing the part, but when I'm finished, I want to use the full 7000 or 8000 series chip in a board schematic that is created in Orcad. Dave ----------------------------------------------------- Dave Kingma 450 Canterbury St. Sr. Research Engineer Christiansburg, VA 24073 Fiber Optic Products Litton Poly-Scientific 1213 N. Main St. Blacksburg, VA 24060 USA e-mail: dkingma@vt.edu (703)-552-3011 x326 Amateur Radio: WA4RDIArticle: 427
Has anybody used an FPGA as a real-time encryption device? The short propagation delay (8 ns or less) and reprogrammability sound like what makes a good hardware encryption device. Any comments?

Ben -- Benjamin Gene Cheung Computer Engineer Georgia Institute of Technology Internet: gt0361b@prism.gatech.edu

Article: 428
!!! "It's not a BUG, jcooley@world.std.com /o o\ / it's a FEATURE!" (508) 429-4357 ( > ) \ - / _] [_ PLEASE HELP: I NEED YOUR OPINION! As many of you know, I have a part time professional hobby of trying to promote the EDA *user's* point of view on the net and in some of the industry trade publications. Just recently, I wrote the cover article to the Oct. 27th issue of EDN magazine titled "Shooting Down The True Lies in FPGA Benchmarking". (It came in a plastic bag and has a picture of Arnold Schwarzenegger on the cover. (I had to get special permission from 20th Century Fox to get & publish that photo!)) In it, I opened by explaining all the technical & political difficulties in benchmarking FPGA synthesis tools. I then backed it up by showing five different benchmarks that gave real benchmarking numbers & ranked products by name -- all with conflicting results. This article was a hot potato to work on because of the politics from both the synthesis & FPGA vendors. Now I'd like to ask: "Do you like articles that try to give actual numbers and raw user experiences concerning specific EDA products?" (Vendors love articles that publically praise their products but get very ugly at anything in print that even *hints* at where the warts are in what they sell -- reguardless of whether the warts are real or not!) In short, I'm asking those who've read the article: "Was this worth the hassle to write?" - John Cooley Part Time EDA Consumer Advocate Full Time Sheep & Goat Farmer P.S. In replying please don't just say "it was great" or "it sucked" -- but the reasons *why* you thought "it sucked"/"it was great". P.P.S. If you want to make an additional impact, pluck out the EDN service card and circle the interest numbers indicated on page 49. This hits home at EDN and their advertisers. =========================================================================== Trapped trying to figure out a Synopsys bug? Want to hear how 3114 other users dealt with it ? Then join the E-Mail Synopsys Users Group (ESNUG)! !!! "It's not a BUG, jcooley@world.std.com /o o\ / it's a FEATURE!" (508) 429-4357 ( > ) \ - / - John Cooley, EDA & ASIC Design Consultant in Synopsys, _] [_ Verilog, VHDL and numerous Design Methodologies. Holliston Poor Farm, P.O. Box 6222, Holliston, MA 01746-6222 Legal Disclaimer: "As always, anything said here is only opinion."Article: 429
Bob Elkind (bobe@soul.tv.tek.com) wrote: : We've recently designed two products which have solved the same problem : in two slightly different ways. : The first does exactly what you are suggesting: Use the LDC or HDC pin : to hold the uP in stasis until the Xilinx (4K) "boots up" from the common : ROM in Master Parallel mode. Then the uP awakens and runs normally. : This design works fine. : The second design allows the uP to be the "boss" of the ROM. It boots up : normally and, as part of the booting process, it programs two Xilinx 4K : devices. The Xilinx FPGAs are configured in Slave Peripheral Parallel : mode, they sit on the uP address and data bus much the same as the first : design example, above. However, by allowing the uP to configure the : FPGAs (they appear to be memory-mapped ports/devices), the uP can configure : and reconfigure the FPGAs at will. We've found this approach to be more : flexible. : Neither one of these variants is an outstanding technical accomplishment, : mind you, as they are more or less straight out of the data book or : app note texts. I mention them to confirm that both approaches work well. I'd like to add that in the first approach you can have the XILINX control essential stuff for the processor and in the second case the processor needs to boot all alone..... :-) Roger. -- * The Dutch taxdepartment announces: We will pay people back much more * * quickly. Starting in 1997 we will start transferring the funds back * * to the people around July instead of the current date in November. * EMail: R.E.Wolff@et.tudelft.nl ** Tel +31-15-783643 or +31-15-137459 ** <a href="http://einstein.et.tudelft.nl/~wolff/"> my own homepage </a> **Article: 430
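To make the second (Slave Peripheral, processor-as-boss) approach concrete, the configuration loop the uP runs at boot might look something like the sketch below. The register addresses and status-bit layout here are invented for illustration only; the real handshake follows the Xilinx data book, not this code.

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch only: memory-mapped configuration port for an FPGA loaded in
     * slave peripheral mode.  Addresses and bit positions are assumptions,
     * not taken from either design described above. */
    #define FPGA_CFG_DATA   (*(volatile uint8_t *)0x00F00000u)
    #define FPGA_CFG_STATUS (*(volatile uint8_t *)0x00F00001u)
    #define FPGA_RDY_BIT    0x01u   /* assumed: ready to accept next byte */
    #define FPGA_DONE_BIT   0x02u   /* assumed: configuration complete    */

    int configure_fpga(const uint8_t *bitstream, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            while (!(FPGA_CFG_STATUS & FPGA_RDY_BIT))
                ;                       /* wait for the FPGA to take a byte */
            FPGA_CFG_DATA = bitstream[i];
        }
        return (FPGA_CFG_STATUS & FPGA_DONE_BIT) ? 0 : -1;
    }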
In article <3a95m1$phr@acmez.gatech.edu>, gt0361b@prism.gatech.edu (Benjamin Gene Cheung) writes: |> Has anybody used FPGA as real time encryption device? |> |> The short propagation delay(8ns or less) and reprogrammability |> sounds like what makes a good hardware encryption device. |> |> Any comments? |> Encryption is an application area that is very well-suited for FPGAs. There are two reasons for this. First (and probably most important), encryption standards are still emerging and the flexibility of *SRAM*-based FPGAs gives them a real advantage over fixed-function devices such as ASICs. As encryption standards come and go, the FPGA can *cost-effectively* support a wide variety of encryption algorithms because of their reconfigurability. Second, encryption algorithms are a good match for the intrinsic operations of FPGAs (lookup tables, and routing). The permutation and logic operations of many encryption algorithms can be implemented fairly efficiently with FPGAs. -- Brad L. Hutchings (801) 378-2667 Assistant Professor Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602 Reconfigurable Logic LaboratoryArticle: 431
In article <3a95m1$phr@acmez.gatech.edu>, gt0361b@prism.gatech.edu (Benjamin Gene Cheung) writes: |> Has anybody used FPGA as real time encryption device? |> |> The short propagation delay(8ns or less) and reprogrammability |> sounds like what makes a good hardware encryption device. |> |> Any comments? |> I should also note that people have been using FPGAs to implement encryption/decryption ever since they (FPGAs) were introduced. They just don't talk about it very much :-). -- Brad L. Hutchings (801) 378-2667 Assistant Professor Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602 Reconfigurable Logic LaboratoryArticle: 432
In article <3a95m1$phr@acmez.gatech.edu> gt0361b@prism.gatech.edu (Benjamin Gene Cheung) writes:
>Has anybody used FPGA as real time encryption device? > > The short propagation delay(8ns or less) and reprogrammability >sounds like what makes a good hardware encryption device. > > Any comments? > >Ben >-- >Benjamin Gene Cheung >Computer Engineer >Georgia Institute of Technology >Internet: gt0361b@prism.gatech.edu

Wow, a question on the topic for which this newsgroup was created :-) One of the first FPGA-based compute accelerator boards designed was the Perle-0, demonstrated in 1988; it was a 5x5 array of 3020 chips, the smallest XC3K chips that Xilinx makes. One of the applications they implemented on the Perle-0 was RSA encryption, which depends on very long multiplies. At the time that their results were published, Feb-1990, the fastest RSA encryptor described was a custom ASIC designed by AT&T, which could encrypt at 19K bits per second for 512-bit RSA key encryption. The Perle-0 (built by DEC Paris Research Labs) doing the same 512-bit key encryption clocked in at 200K bits per second, thus claiming the land speed record. More recently, the Perle-1 (a bigger and fancier version of their accelerator) achieved 600K bits per second on 480-bit key encryption.

All the best, Philip Freidin

Article: 433
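For context on why RSA stresses the hardware: encryption is modular exponentiation over very large integers, and the standard square-and-multiply method turns that into one long modular multiply (or square) per key bit. A rough sketch using machine-word operands is below; a real 512-bit key needs multi-precision arithmetic, which is exactly the part the Perle boards put into hardware. The function name and the use of a compiler-provided 128-bit type (a gcc/clang extension) are my assumptions, not anything from the Perle work.

    #include <stdint.h>

    /* Square-and-multiply modular exponentiation: returns base^exp mod m
     * (m must be nonzero).  Sketch only -- the 128-bit intermediate keeps
     * the 64x64-bit products from overflowing; 512-bit RSA replaces these
     * word-sized operations with multi-precision ones. */
    uint64_t mod_exp(uint64_t base, uint64_t exp, uint64_t m)
    {
        unsigned __int128 result = 1;
        unsigned __int128 b = base % m;

        while (exp > 0) {
            if (exp & 1)
                result = (result * b) % m;  /* multiply step for each 1 bit */
            b = (b * b) % m;                /* squaring step for every bit  */
            exp >>= 1;
        }
        return (uint64_t)result;
    }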
In article <fliptronCzBLEG.GEv@netcom.com>, fliptron@netcom.com (Philip Freidin) writes:
|> Wow, a question on the topic for which this news group was created :-)

Agreed. :^)

|> One of the first FPGA based compute accelerator boards designed was the |> Perle-0, demonstrated in 1988, and was a 5x5 array of 3020 chips, the |> smallest XC3K chips that Xilinx makes. One of the applications they |> implemented on the Perle-0 was RSA encription, which depends on very long |> multiplies. At the time that their results were published, Feb-1990, the |> fastest RSA encryptor described was a device that was a custom ASIC |> designed by ATT, which could encrypt at 19K bits per second, for 512 bit |> RSA key encryption. The Perle-0 doing the same 512 bit key encryption |> (built by DEC Paris Research Labs) clocked in at 200K bits per second, thus |> claiming the land speed record. More recently, the Perle-1 (a bigger and |> fancier version of their accelerator) achieved 600K bits per second, on |> 480 bit key encryption.

It seems that RSA encryption must fit FPGAs fairly well. I am also aware of some DES applications for FPGAs. What other types of encryption schemes may make sense on FPGAs?

-- Paul ------------------------------------------------------------------------------- Paul Graham Reconfigurable Logic Laboratory Brigham Young University - Electrical Engineering Department 424 Clyde Building Provo, Utah 84602 e-mail: grahamp@fpga.ee.byu.edu phone: (801) 378-7206

Article: 434
In article <bTQJv*o30@wolf359.exile.org> Eric@wolf359.exile.org (Eric Edwards) writes:
>Any leads? All kinds of places have 22V10's but what if I want >a GAL26V12, a FlexLOGIC, or maybe an Altera part?

JDR, sales 800-538-5000. Catalog #41, most recent, lists Altera EP320, 600, 610, 900. Also ICT PEELs, lotsa Xilinx stuff, the usual small PALs and GALs. I'm not even a recent customer, just happy to see the place start carrying parts like this. If enough people order/ask for more, maybe they'll get some of the bigger stuff...

-- -jeffB (Jeff Brandenburg, Va. Tech CS)

Article: 435
: ---- : Eric Edwards: Bang= cg57.esnet.com!wolf359!eric Domain= eric@exile.org : Remember the home hobbyist computer: Born 1975, died April 29, 1994 ^ Which one was that? ------------' -- /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ Troy Fuqua Time Domain Systems Designing the finest hi-tech comm troy@iquest.com Huntsville, AL systems this side of the galaxy! \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/Article: 436
In article <3ac8s5$obk@polo.iquest.com>, Troy Fuqua <troy@vespucci.iquest.com> wrote:
>: ---- >: Eric Edwards: Bang= cg57.esnet.com!wolf359!eric Domain= eric@exile.org >: Remember the home hobbyist computer: Born 1975, died April 29, 1994 > ^ >Which one was that? ------------'

He must be referring to the Apple II series. The newest member of the family, the Apple IIgs, was orphaned along with the other kids by their parent (Apple Inc.) in 1994. But they're all still very productive in their new homes. Mine, upgraded from 2.8 MHz to 15 MHz, is still running strong. The IIgs has the best of both the GUI of the Mac and the command line of the PC, and it has built-in sound (an Ensoniq sound processor) that rivals most of today's PC sound cards; at 15 MHz it's about as snappy as my 486DX-50 PC.

Article: 437
hutch@timp.ee.byu.edu (Brad Hutchings) writes:
>Encryption is an application area that is very well-suited for FPGAs. >There are two reasons for this. First (and probably most >important), encryption standards are still emerging and the flexibility >of *SRAM*-based FPGAs gives them a real advantage over fixed-function >devices such as ASICs. As encryption standards come and go, the FPGA >can *cost-effectively* support a wide variety of encryption algorithms >because of their reconfigurability. Second, encryption algorithms >are a good match for the intrinsic operations of FPGAs (lookup >tables, and routing). The permutation and logic operations of >many encryption algorithms can be implemented fairly efficiently >with FPGAs.

I think this may have been true 10 years ago, but most recent algorithms are designed to be efficient in software rather than hardware, and would be quite difficult to implement on an FPGA. Feistel/product ciphers (of which DES is one example) typically employ a great many rounds of repeated, almost-similar operations. DES, being designed for hardware implementation, isn't too hard to do. More recent examples such as MD5 (used in PGP and PEM) or SHA (FIPS 180, the Secure Hash Algorithm) use 64 or 80 rounds which have perhaps 10 ops (+, and, or, xor, rotate) on groups of 32-bit words per round, so you're looking at maybe 1000 32-bit ops for one pass through the algorithm. More complex ciphers like IDEA involve messier (from a hardware point of view) ops like multiplications. Other ciphers (the extreme side of the optimised-for-software approach) tend to use quite simple algorithms and ops (mainly xor) but large lookup tables.

It'd be interesting to see if there's anything capable of implementing, say, MD5 or SHA. The basic building blocks are very simple to do in hardware or software; the problem is that there are so many of them that you tend to run out of room for the hardware approach.

Peter.

Article: 438
I'm making a copy of release 3.0 of Transmogrifier C available for anonymous ftp, on the host ftp.eecg.toronto.edu, in the directory /pub/software/tmcc. Transmogrifier C (or tmcc) is a compiler for a simple hardware description language. It takes a program written in a restricted subset of the C programming language, and produces a sequential circuit that implements the program in a Xilinx XC4000 series FPGA. The compiler compiles and runs under: SunOS 4.1.X Solaris 2.3 using gcc You will need your own copy of the Xilinx ppr and makebits programs. The versions we use came with the Xilinx XACT 4.31 package. The distribution includes the source for the tmcc compiler, a 9 page programming manual and a few sample circuits written in tmcc. WHO DID THIS ? The Transmogrifier 1 (TM-1) is a field-programmable system consisting of FPGAs, RAMs and programmable interconnect chips. It was designed and built at the Department of Electrical and Computer Engineering, University of Toronto, by Dave Karchmer under the supervision of Jonathan Rose, Paul Chow, David Lewis, and Dave Galloway. Transmogrifier C has been used to produce circuits that run on the TM-1. It was written by Dave Galloway. IS THIS USEFUL ? The compiler has only been used by a handful of people. It was used in the summer of 1994 to produce three similar LCD driving circuits as a test of the TM-1. Each circuit was about 800 lines of tmcc code, and fit into a single XC4010. The circuits work. Tmcc is not a replacement for VHDL or Verilog for people who are serious about producing a circuit design on time, and under budget. It will have bugs. It may be interesting to someone who does not know VHDL or Verilog, does know C, and wants to throw together a circuit in a couple of days to investigate its properties, or get an upper limit on the size or cycle time of a proposed design. BUG FIXES We are interested in getting feedback from people who try tmcc. If you find a problem, fix a bug, or improve tmcc, please send me mail at the address below. If you try it and you like it, please tell us that too. Unfortunately, we don't have a lot of time to work on tmcc so we can't promise anything in the way of support. Dave Galloway, University of Toronto, drg@eecg.toronto.eduArticle: 439
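For readers who have not seen a C-to-hardware flow before, the input looks like ordinary (but restricted) C. The fragment below is purely illustrative: it is not taken from the tmcc distribution, and the real language restrictions and I/O conventions are the ones described in the 9-page manual included with the release.

    /* Hypothetical illustration of the restricted-C style such compilers
     * accept: fixed-width integer state, no pointers or recursion, so the
     * function can be turned into a small sequential circuit.  This is NOT
     * actual tmcc syntax. */
    int checksum_step(int data, int reset)
    {
        static int acc;                  /* becomes a register            */

        if (reset)
            acc = 0;
        else
            acc = (acc + data) & 0xff;   /* 8-bit adder with wrap-around  */

        return acc;
    }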
In article <3acpe6$3e1@net.auckland.ac.nz>, pgut1@cs.aukuni.ac.nz (Peter Gutmann) writes:
|> hutch@timp.ee.byu.edu (Brad Hutchings) writes: |> |> >Encryption is an application area that is very well-suited for FPGAs. |> >There are two reasons for this. First (and probably most |> >important), encryption standards are still emerging and the flexibility |> >of *SRAM*-based FPGAs gives them a real advantage over fixed-function |> >devices such as ASICs. As encryption standards come and go, the FPGA |> >can *cost-effectively* support a wide variety of encryption algorithms |> >because of their reconfigurability. Second, encryption algorithms |> >are a good match for the intrinsic operations of FPGAs (lookup |> >tables, and routing). The permutation and logic operations of |> >many encryption algorithms can be implemented fairly efficiently |> >with FPGAs. |> |> I think this may have been true 10 years ago, but most recent algorithms are |> designed to be efficient in software rather than hardware, and would be |> quite difficult to implement on an FPGA.

There will still be situations where special-purpose hardware may be necessary to meet some performance constraint. In those cases, reconfigurable FPGAs can provide advantages when compared to custom ASICs (which was my original point). I suppose that "efficient in software" means that the basic set of operations used in the algorithm closely matches those usually provided in most instruction sets. Operations like arbitrary bit-level permutations would be avoided in favor of more standard operations such as rotations, etc. Right?

|> Feistel/product ciphers (of which |> DES is one example) typically employ a great many rounds of repeated, |> almost-similar operations. DES, being designed for hardware implementation, |> isn't too hard to do. A more recent example such as MD5 (used in PGP and |> PEM) or SHA (FIPS 180, the Secure Hash Algorithm) use 64 or 80 rounds which |> have perhaps 10 ops (+, and, or, xor, rotate) on groups of 32-bit words per |> round, so you're looking at maybe 1000 32-bit ops for one pass through the |> algorithm.

Any of the logic operations (and, or, xor, rotate) can be efficiently implemented on most FPGAs. Large adds won't win any speed awards, but speeds are still reasonable. I don't know the details of these algorithms, but would it be possible to combine a series of logical operations into one complex logical operation, or does the algorithm preclude this? Combining operations could lead to a more efficient implementation on lookup-table-based FPGAs.

|> More complex ciphers like IDEA involve messier (from a hardware |> point of view) ops like multiplications. Other ciphers (the extreme side of |> the optimised-for-software approach) tend to use quite simple algorithms and |> ops (mainly xor) but large lookup tables.

Multiplication is a challenge, but not a show stopper. The DEC Paris RSA implementation was the fastest known. If you really need a fast multiplier, you could always use a fixed-function multiplier chip. FPGAs are never used in a vacuum anyway; at the very least, they are usually augmented with memory and other fixed-function chips. How big of a lookup table do you need? Could it be implemented externally?

|> |> It'd be interesting to see if there's anything capable of implementing, say, |> MD5 or SHA. The basic building blocks are very simple to do in hardware or |> software, the problem is that there's so many of them that you tend to run |> out of room for the hardware approach.

I was assuming that performance was more of an issue than space (or cost). You can only get so much performance out of software. If you are already planning on implementing a hardware solution, you would use as many chips as you needed to achieve some performance target. In addition, the FPGAs provide the potential of accelerating a variety of algorithms due to their reconfigurability. As far as actual implementations go, the RSA work that Philip mentioned in his post is the largest implementation that I have read about. RSA uses very large multiplies. Was it optimized for hardware? I am unaware of other algorithms being implemented, and it does not look like we will be hearing any more from the DEC Paris people because DEC shut them down :-( . Has anybody heard anything about what happened to the lab and its personnel?

-- Brad L. Hutchings (801) 378-2667 Assistant Professor Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602 Reconfigurable Logic Laboratory

Article: 440
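As a concrete illustration of the "combine a series of logical operations" idea (my example, not Brad's): the bit-selection function used in MD5/SHA, f(x,y,z) = (x AND y) OR (NOT x AND z), is a single 3-input truth table, so on a lookup-table-based FPGA it costs one LUT rather than three separate gates' worth of logic. A quick C check of that truth table:

    #include <stdio.h>

    /* The MD5/SHA selection function: each bit of x chooses between the
     * corresponding bits of y and z.  Enumerating its 3-input truth table
     * shows it fits directly in one 3-input lookup table. */
    static unsigned f_select(unsigned x, unsigned y, unsigned z)
    {
        return (x & y) | (~x & z);
    }

    int main(void)
    {
        printf("x y z | f\n");
        for (unsigned i = 0; i < 8; i++) {
            unsigned x = (i >> 2) & 1, y = (i >> 1) & 1, z = i & 1;
            printf("%u %u %u | %u\n", x, y, z, f_select(x, y, z) & 1);
        }
        return 0;
    }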
I am looking for FPGA/PLD/ASIC development software for a fairly small application, on the order of 2000-3000 gates. Can anybody give me an idea of which companies and devices are good or bad? Currently I am looking at ViewLogic and Altera. Both sound good, but I still do not know about a long-range solution. The requirements are:
Quick to get started.
Friendly operation.
Device vendor independence (VHDL or Verilog support).
Good quality (few bugs).
Reasonable cost (initial cost < $1000).
etc. I would appreciate any comments, suggestions, or opinions.

Regards, Takashi

******************************************************************************* * * Hewlett Packard * * Takashi Hidai * Optical Communications Div. * Phone: (408) 435-5829 * * * M/S 91UA * Telnet: 1-435-5829 * * Software Design * 370 W. Trimble Road * Fax: (1/408)-435-6286 * * Engineer * San Jose, CA 95131 * * * INTERNET: takashi_hidai@sj.hp.com or * * * takashi@dcssc.sj.hp.com * *******************************************************************************

Article: 441
hutch@timp.ee.byu.edu (Brad Hutchings) writes:
>In article <3acpe6$3e1@net.auckland.ac.nz>, pgut1@cs.aukuni.ac.nz (Peter Gutmann) writes: >|> I think this may have been true 10 years ago, but most recent algorithms are >|> designed to be efficient in software rather than hardware, and would be >|> quite difficult to implement on an FPGA. >I suppose that "efficient in software" means that the basic set >of operations used in the algorithm closely match those >usually provided in most instruction sets. Operations like arbitrary bit-level >permutations would be avoided in favor of more standard operations such >rotations, etc. Right?

Yes - bit-level permutations a la DES, and things like lookups on small S-boxes are dropped in favour of operations on data blocks which match the basic machine word size, usually 32 bits.

>Any of the logic operations (and, or, xor, rotate) can be efficiently >implemented on most FPGAs. Large adds won't win any speed awards but >speeds are still reasonable. I don't know the details of these algorithms >but would it be possible to combine a series of logical operations into >one complex logical operation, or does the algorithm preclude this? >Combining operations could lead to a more efficient implementation on >lookup-table-based FPGAs.

Here are the core operations from SHA (with MD5 being pretty much identical). There are five 32-bit words which contain the data being processed; these are pushed through a long sequence of bit-scrambling operations. There is a central transformation (f-function) with 4 subtypes, each of which is used 1/4 of the time:

f1( x, y, z ) -> ( ( x & y ) | ( ~x & z ) )
f2( x, y, z ) -> ( x ^ y ^ z )
f3( x, y, z ) -> ( ( x & y ) | ( x & z ) | ( y & z ) )
f4( x, y, z ) -> ( x ^ y ^ z )

The f-function is used in each round as follows (with a-e being the 5 32-bit words):

a' = e + ROTL( 5, a ) + f( b, c, d ) + k + data
b' = a
c' = ROTL( 30, b )
d' = c
e' = d

The assignments can be accomplished through renaming, and the rotates are fixed and can be hardwired, so the only real problem is the number of times all this needs to be repeated (either 64 or 80 times), taking up a fair bit of real estate.

>|> More complex ciphers like IDEA involve messier (from a hardware >|> point of view) ops like multiplications. Other ciphers (the extreme side of >|> the optimised-for-software approach) tend to use quite simple algorithms and >|> ops (mainly xor) but large lookup tables. >Multiplication is a challenge, but not a show stopper.

Here's the IDEA core: You have 4 16-bit subblocks A-D. Then you do the following:

[1] = A * key1
[2] = B + key2
[3] = C + key3
[4] = D * key4
[5] = [1] ^ [3]
[6] = [2] ^ [4]
[7] = [5] * key5
[8] = [6] + [7]
[9] = [8] * key6
[10] = [7] + [9]
[11] = [1] ^ [9]
[12] = [3] ^ [9]
[13] = [2] ^ [10]
[14] = [4] ^ [10]

(There's a much easier-to-follow way to show that diagrammatically, but I'm not going to try it in ASCII :-).) The above is repeated 8 times. SHA is currently implemented in the much-maligned Clipper (actually Capstone) chip, but using an ARM60 core to implement the algorithm in software, rather than a hardwired version. Hardware versions of IDEA with fairly good performance also exist. I don't know of any hardware versions of MD5/SHA (I won't count Capstone, since it's really a software implementation in disguise).

>I was assuming that performance was more of an issue than space (or cost). You >can only get so much performance out of software. If you are already >planning on implementing a hardware solution, you would use as many >chips as you needed to achieve some performance target. In addition, >the FPGAs provide the potential of accelerating a variety of algorithms >due to their reconfigurability.

But would you end up with something which the average user could drop into their machine to use? If you're going to pay a small fortune for a board packed with FPGAs, most people will simply buy a faster machine and do it in software (particularly if you can use it as a router or whatever at the same time). Of course, for very high speeds nothing but custom hardware will do, but even software implementations are capable of impressive performance: IBM's SEAL algorithm, for example (design goal: run as fast as possible on a 32-bit RISC CPU), runs at > 7 Mb/sec on a 50 MHz 486 and 4.5 Mb/sec on an RS6000/530. You could therefore use a cheapie '486 box as, say, an encrypting router for a 10 Mbit/sec Ethernet link without going to custom hardware. That's getting a bit off topic, I guess; I was just wondering whether the cost of custom hardware is justifiable for use by the masses.

Peter.

Article: 442
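For anyone wanting to try this on an FPGA (or just count the operations), one round of the SHA round function quoted above looks like this in C. It is only a sketch: the constant k and the per-round data word come from the message schedule, which is omitted, and only the first of the four f-subfunctions is shown.

    #include <stdint.h>

    #define ROTL(n, x) (((x) << (n)) | ((x) >> (32 - (n))))

    /* One SHA round as described above.  f here is f1 (used in the first
     * quarter of the rounds); the other rounds substitute f2/f3/f4.  The
     * message schedule producing 'data' and the round constant 'k' are
     * omitted -- this shows only the per-round datapath. */
    static void sha_round(uint32_t state[5], uint32_t k, uint32_t data)
    {
        uint32_t a = state[0], b = state[1], c = state[2],
                 d = state[3], e = state[4];

        uint32_t f   = (b & c) | (~b & d);                /* f1(b, c, d) */
        uint32_t tmp = e + ROTL(5, a) + f + k + data;     /* a'          */

        state[0] = tmp;          /* a' */
        state[1] = a;            /* b' = a           */
        state[2] = ROTL(30, b);  /* c' = ROTL(30, b) */
        state[3] = c;            /* d' = c           */
        state[4] = d;            /* e' = d           */
    }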
Looking for experts/consultants for VHDL & VIEWLOGIC FPGA

Article: 443
gt0361b@prism.gatech.edu (Benjamin Gene Cheung) writes:
>Has anybody used FPGA as real time encryption device? > The short propagation delay(8ns or less) and reprogrammability >sounds like what makes a good hardware encryption device. > Any comments?

Someone at our institute implemented FEAL and DES on a Xilinx FPGA. The speed was somewhere in the middle between the fastest software implementations and the fastest hardware implementations. The cost of an FPGA is considerably higher than that of commercially available encryption ICs. His conclusion: an FPGA is a good choice only if you need a nonstandard encryption method.

Holger. -- Holger Hellmuth at Uni Karlsruhe <hellmuth@ira.uka.de>

Article: 444
I think one of the most important features that an FPGA offers as an encryption device is reconfigurability. Most ASIC implementations can eventually be cracked because the algorithm is fixed (Clipper). If the encryption is done in software using a very high-speed specialized processor, that can be very costly for, say, a video conferencing system. If there is no dedicated processor for the encryption, then the encryption can take up too much of the main processor's time and therefore slow down the whole computer. The FPGA is the best fit because the algorithm can be updated as frequently as needed, so that it would be virtually impossible to crack, yet it's cheap enough to offer the hardware performance that is demanded by many applications. Any comments?

Ben -- Benjamin Gene Cheung Computer Engineer Georgia Institute of Technology Internet: gt0361b@prism.gatech.edu

Article: 445
Which FPGA boards are available for PCs, preferably PCI bus? Any experiences? Availability in Europe/Germany? Thanks --hermann haertig.

Article: 446
Hi, I'm evaluating some FPGAs for use in a project, and I'm looking for "horror stories" having to do with the Xilinx XC4000 series parts. I've heard some nasty rumors of difficulties with ground bounce, timing, etc., and I was looking for some firsthand accounts of problems with these beasties. Thanks in advance, rkg (Richard George)

Article: 447
Atmel Corp. has a range of FPGAs that are architecturally optimised for compute-intensive applications. The fine-grained architecture is based on half adders and registers, and the part is also reconfigurable (SRAM) down to the individual core cell level without disturbing the operation of the rest of the logic. There is software support for automatic generation of DSP building blocks such as adders, accumulators, multipliers, rotators, comparators, RAMs and ROMs, etc., using our parameterized component generators. Most of the DSP functions can also (optionally) be automatically pipelined for very high performance. For more information, e-mail martin@atmel.com with the subject: FPGA Lit. Req.

Martin Mason FPGA Apps Eng. Atmel Corp. San Jose

Article: 448
In article <3aimv2$845@mailman.etecnw.com> rkg@etecnw.com writes:
>Hi, > >I'm evaluating some FPGAs for use in a project, and I'm looking for "horror stories" >having to do with the xilinx XC4000 series parts. I've heard some nasty rumors of >difficulties with ground bounce, timing, etc., and I was looking for some firsthand >accounts of problems with these beasties. >(Richard George)

Boy oh boy do I have a horror story! I was working in the lab and happened to not have my shoes on (it's more comfortable). I dropped an XC4005PG156 on the floor and didn't notice. Later I stood on it and it created a lovely array of holes in my foot on precise 0.1-inch centers.

These devices do not suffer from ground bounce any more than any other high-speed logic family, and in the case of the XC4005H family, when the devices are in their CAP-TTL mode they have exceptionally low ground bounce even with 16 adjacent pins switching into capacitive loads like 150 pF. This is well documented in the data book on pages 8-6 through 8-9. Looks like someone went to a LOT of effort to do this characterization. All the Xilinx chips have output slew-rate control modes that let you trade off edge rate for ground bounce, a feature you don't get on 74xx240-style devices. There are some great scope "photos" on pages 2-90 and 2-91 showing the comparison of the different output modes. You can use the overshoot and undershoot as an indicator of the ground bounce for that operating mode.

From a timing point of view, as good design practice, I usually add a few nanoseconds of margin to my time specifications for the place and route software (the timespecs are a really neat feature). If my system is going to run at 20 MHz, I set the time spec to 45 to 47 ns rather than 50 ns. If I know the system will be in a hostile environment with elevated temperatures, I will probably add another nanosecond or two of safety margin (i.e., set the time spec to 42 or 43 ns).

All the best (My foot is healing nicely, thanks for your concern) Philip Freidin.

Article: 449
Washington, D.C., Nov. 10, 1994. Virtual Computer Corporation was presented the first "Small Business Innovative Research Technology of the Year" Award for the revolutionary transformable computer platform called the Virtual Computer. The award was presented at the 1994 Technology Transfer Awards Dinner in Washington, D.C., in conjunction with Technology 2004, the Fifth National Technology Transfer Conference, sponsored by NASA and the Technology Utilization Foundation. Speakers at the dinner included John R. Dailey, Deputy Administrator of NASA, and Dr. John H. Gibbons, Assistant to the President for Science & Technology and Director of the White House Office of Science & Technology Policy.

VCC received a Phase I and Phase II award from the Department of the Navy, Naval Surface Warfare Center, in 1987 to develop a completely reconfigurable hardware platform. "We're proud to have been a part of this SBIR program grant," said Vincent Schaper, SBIR Program Manager for the Department of the Navy. John Schewel, VP of Marketing & Sales, told the attendees at the conference that any implementations on this transformable platform should qualify for the government's SBIR program. SBIR grants are given to small businesses for innovative research ($50,000 to $850,000). We at VCC hope this award brings further attention to all work being done on reconfigurable computing.