Hello all! I'm new here and I don't know if this is a good place for asking questions. May I count on your help? ;) I have some experience with FPGAs but totally zero with high-speed memory. I would like to design an arbitrary waveform generator, and my project requires the use of DDR3 memory. I have picked the Cyclone V FPGA 5CEFA2F23C7N, which can handle DDR3 memory up to 400MHz, and I'm considering using the chip AS4C64M16D3 (link: http://www.digikey.com/product-detail/en/AS4C64M16D3-12BCNTR/AS4C64M16D3-12BCNTR-ND/4965298 ) as a LUT in my AWG, but the datasheet says this memory is specified for 800MHz operation. My question is whether they match. Will this specific chip (and maybe any other) work with a slower clock, and provide a slower data rate?

Regards
PB
Article: 157076
I have found something... May anyone tell me if I interpret it correctly? There is a field in Table 17, "Minimum Clock Cycle Time (DLL off mode)", equal to 8 ns. That gives a 125MHz frequency, so I'm able to take a longer clock cycle time - 10ns - and have a 100MHz clock, which gives a 400MHz data rate in DDR3, doesn't it?
Article: 157077
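A quick back-of-the-envelope check of the figures above may be useful here (this is an illustrative sketch, not taken from any datasheet): DDR transfers data on both clock edges, so the transfer rate per data pin is twice the clock frequency. The small C program below just evaluates that relationship for the periods mentioned in the thread; note that by this arithmetic a 100MHz clock yields 200MT/s, while a 400MT/s data rate would need a 200MHz clock.

/* Back-of-the-envelope DDR numbers: two data transfers per clock cycle.
 * The 8 ns and 10 ns periods are the values discussed in the thread;
 * 1.25 ns is roughly the rated clock period of an 800MHz-clock DDR3 part. */
#include <stdio.h>

int main(void)
{
    const double periods_ns[] = { 8.0, 10.0, 1.25 };
    for (unsigned i = 0; i < sizeof periods_ns / sizeof periods_ns[0]; i++) {
        double f_mhz = 1000.0 / periods_ns[i];   /* clock frequency in MHz       */
        double mt_s  = 2.0 * f_mhz;              /* DDR: two transfers per clock */
        printf("tCK = %5.2f ns -> clock %6.1f MHz -> %6.1f MT/s per data pin\n",
               periods_ns[i], f_mhz, mt_s);
    }
    return 0;
}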
On 9/23/2014 11:36 AM, Piotr Błachnio wrote:
> I have found something... May anyone tell me if I interpret it correctly?
> There is a field in Table 17, "Minimum Clock Cycle Time (DLL off mode)", equal to 8 ns. That gives a 125MHz frequency, so I'm able to take a longer clock cycle time - 10ns - and have a 100MHz clock, which gives a 400MHz data rate in DDR3, doesn't it?

I don't know for sure, but I have not seen a digital part that wouldn't work with a slower clock since I used the 8008 CPU chip. Even those had a minimum clock speed that was some 100 times slower than the max. If the part had a minimum frequency (maximum period) they would list that.

Dynamic RAMs do have a max timing value for the refresh cycle. So even if you run the part slower, you have to maintain the refresh interval.

--
Rick
Article: 157078
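Rick's point about the refresh interval can be put in numbers. The sketch below assumes the common DDR3 figures of a 64 ms retention period spread over 8192 refresh commands (tREFI of roughly 7.8 us); these are typical values and should be checked against the actual datasheet of the chosen part. The interval in nanoseconds stays the same when the clock slows down; only the number of clock cycles between refreshes changes.

/* Average refresh interval for a typical DDR3 device, assuming the usual
 * 64 ms / 8192 refresh commands.  These are generic figures, not values
 * taken from the AS4C64M16D3 datasheet. */
#include <stdio.h>

int main(void)
{
    const double t_refi_ns    = 64e6 / 8192.0;       /* ~7812.5 ns between refreshes */
    const double clocks_mhz[] = { 100.0, 125.0, 400.0 };

    for (unsigned i = 0; i < sizeof clocks_mhz / sizeof clocks_mhz[0]; i++) {
        double cycles = t_refi_ns * clocks_mhz[i] / 1000.0;  /* ns * MHz / 1000 = cycles */
        printf("clock %5.1f MHz: refresh every %.1f ns (about %.0f cycles)\n",
               clocks_mhz[i], t_refi_ns, cycles);
    }
    return 0;
}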
rickman <gnuarm@gmail.com> wrote:
> On 9/23/2014 11:36 AM, Piotr Błachnio wrote:
>> I have found something... May anyone tell me if I interpret it correctly?
>> There is a field in Table 17, "Minimum Clock Cycle Time (DLL off mode)", equal to 8 ns. That gives a 125MHz frequency, so I'm able to take a longer clock cycle time - 10ns - and have a 100MHz clock, which gives a 400MHz data rate in DDR3, doesn't it?

> I don't know for sure, but I have not seen a digital part that wouldn't work with a slower clock since I used the 8008 CPU chip. Even those had a minimum clock speed that was some 100 times slower than the max.

I haven't actually done DDR, but it is synchronous, and I believe that you do need to get the timing pretty close.

Minimum clock speeds lasted longer than the 8008. At least to the 8080 (one advantage of the Z80 was that it didn't do that), and the 8086/8088. I believe the minimum for the original 8086 was about 2MHz or so.

Newer processors might not have dynamic logic, but they have internal PLLs on the clock that won't lock much lower than the specified frequency.

DDRs might have a PLL, too.

> If the part had a minimum frequency (maximum period) they would list that.

> Dynamic RAMs do have a max timing value for the refresh cycle. So even if you run the part slower, you have to maintain the refresh interval.

That, too.

-- glen
Article: 157079
glen herrmannsfeldt wrote:
> rickman <gnuarm@gmail.com> wrote:
>> On 9/23/2014 11:36 AM, Piotr Błachnio wrote:
>>> I have found something... May anyone tell me if I interpret it correctly?
>>> There is a field in Table 17, "Minimum Clock Cycle Time (DLL off mode)", equal to 8 ns. That gives a 125MHz frequency, so I'm able to take a longer clock cycle time - 10ns - and have a 100MHz clock, which gives a 400MHz data rate in DDR3, doesn't it?
>
>> I don't know for sure, but I have not seen a digital part that wouldn't work with a slower clock since I used the 8008 CPU chip. Even those had a minimum clock speed that was some 100 times slower than the max.
>
> I haven't actually done DDR, but it is synchronous, and I believe that you do need to get the timing pretty close.

Yes.

> Minimum clock speeds lasted longer than the 8008. At least to the 8080 (one advantage of the Z80 was that it didn't do that),

Very much so.

> and the 8086/8088. I believe the minimum for the original 8086 was about 2MHz or so.

I expect they'd actually go lower than that. Never tried it, tho.

> Newer processors might not have dynamic logic, but they have internal PLLs on the clock that won't lock much lower than the specified frequency.
>
> DDRs might have a PLL, too.

Generally more than one.

>> If the part had a minimum frequency (maximum period) they would list that.
>
>> Dynamic RAMs do have a max timing value for the refresh cycle. So even if you run the part slower, you have to maintain the refresh interval.
>
> That, too.
>
> -- glen

--
Les Cargill
Article: 157080
glen herrmannsfeldt wrote:
> DDRs might have a PLL, too.

They do. They use it to maintain the phase relationship between input clock, output data strobes and data outputs. For DDR2, the internal DLL is usually specified to a minimum frequency of 125MHz. So, it might work below that, but it's not guaranteed to, meaning that it could just stop working when a new die revision comes out or might not work at all when you use another manufacturer's drop-in replacement part; so it's not a good idea to rely on that.

DDR2 SDRAM memory (I assume it's similar for DDR3) has a config option enabling you to turn that DLL off. Then you won't have any problems with the DLL not working properly at lower frequencies, but you have no known, fixed phase relationship between input clock, data strobe and data, so that makes the interface into your FPGA more complicated (you'd have to constantly re-calibrate somehow). Plus, some manufacturers say turning off the DLL is not a normal use case, so there's no guarantee it will work at all, and if it doesn't, you won't get any help from them.

A few years back I did a design where memory bandwidth was not critical, so I thought I'd lower the memory interface clock rate to make meeting timing easier, save on power dissipation and such. I did some tests back then, and running the DRAM chips at < 125MHz worked for some Micron parts, but not for some from Samsung. Later, it stopped working for some Micron parts as well after a new die revision came out. So, if the spec for the DLL in the DRAM chip states 125MHz as a minimum frequency, don't expect anything else to work.

As it turns out, lowering the frequency doesn't help a lot with regard to power dissipation, either. You still have the same data to write, meaning the number and rate of IOs toggling stays the same (you just have longer idle phases when you use a faster clock); refreshes and such are identical, so on average the power dissipation doesn't change significantly when you lower the frequency. Power dissipation depends mostly on the number of read/write accesses and the data patterns that occur. If you're not doing a lot of reads/writes, the clock frequency alone doesn't make much of a difference.

Eventually, I ended up running the chips at 200MHz, since back then the Xilinx DDR2 SDRAM controller wouldn't work with anything less (their calibration didn't work; they were doing phase shifts, and for 125MHz one clock period was more than they could shift, so calibration always failed).

Since modern FPGAs shouldn't have any problems with running DDR SDRAM at 125MHz (timing-wise), I'd recommend not going below that.

HTH,
Sean
Article: 157081
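For the AWG use case it may also help to see how much raw bandwidth is left even at the 125MHz DLL floor mentioned above. The sketch below is an illustration, not part of the thread: the 16-bit width matches a x16 device such as the AS4C64M16D3, while the 70% controller-efficiency figure is purely an assumption.

/* Raw and roughly-usable bandwidth of one x16 DDR3 device at a few clock
 * rates.  The 0.70 efficiency factor is an assumed, not measured, value. */
#include <stdio.h>

int main(void)
{
    const double width_bits   = 16.0;
    const double efficiency   = 0.70;
    const double clocks_mhz[] = { 125.0, 200.0, 400.0 };

    for (unsigned i = 0; i < sizeof clocks_mhz / sizeof clocks_mhz[0]; i++) {
        double raw_mb_s = 2.0 * clocks_mhz[i] * width_bits / 8.0; /* 2 transfers/clock */
        printf("clock %5.1f MHz: raw %6.1f MB/s, ~%6.1f MB/s usable (%.0f%% eff.)\n",
               clocks_mhz[i], raw_mb_s, raw_mb_s * efficiency, efficiency * 100.0);
    }
    return 0;
}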
Hi,

What I'd do is pick an eval board with a working reference design and double-check that all required licenses are available (design tools, memory controller). Depending on the situation, this can be a non-issue or a total showstopper (i.e. not able to modify after 365 days without a new budget).

I've got an example project for Numato Saturn and LPDRAM here: http://www.fpgarelated.com/comp.arch.fpga/thread/119590/zpu-based-soc-for-numato-saturn-board-with-dram.php

This is not exactly what you asked for, but I'd use this myself if/when I run into a similar task. The data rate quoted in the post comes from the very slow CPU, not the RAM.

---------------------------------------
Posted through http://www.FPGARelated.com
Article: 157082
TÜV Rheinland has a workshop on functional safety for FPGAs: http://www.tuvasi.com/de/trainings-und-workshops/workshops/programmierbare-elektronik-asics-fpgas-cplds-in-der-sicherheitstechnik

Regards,
Guy Eschemann
guy@noasic.com
Article: 157083
Hi Amir,

Please, could you post a complete tutorial of some kind?

Thank you
Article: 157084
Thanks all for the responses.

PB
Article: 157085
These last few months, I have been slowly moving back to my main interests: EDA tools (as a developer and as a user), FPGA application engineering, and last but not least processor design. After a 5-year hiatus I have started revamping (and modernizing) my own environment, developed as an outcome of my PhD work on application-specific instruction-set processors (ASIPs). The flow was based on SUIF/Machine-SUIF (compiler), SALTO (assembly-level transformations) and ArchC (architecture description language for producing binary tools and simulators). It was a highly successful flow that allowed me (along with my custom instruction generator YARDstick) to explore configurations and extensions of processors within seconds or minutes.

I have been thinking about what's next. We have tools to assist the designer (the processor design engineer per se) to speed up his/her development. Still, the processor must be designed explicitly. What would go beyond the state-of-the-art is not to have to design the golden model of the processor at all.

What I am proposing is an application-specific processor synthesis tool that goes beyond the state-of-the-art. A model generator for producing the high-level description of the processor, based only on application analysis and user-defined constraints. And for the fun of it, let's codename it METATOR, because I tend to watch too much Supernatural these days, and METATOR (messenger) is a possible meaning for METATRON, an angelic being from the Apocrypha with a human past. So think of METATOR as an upgrade (spiritual or not) to the current status of both academic and commercial ASIP design tools.

1. The Context, the Problem and its Solution

ASIPs are tuned for cost-effective execution of targeted application sets. An ASIP design flow involves profiling, architecture exploration, generation and selection of functionalities and synthesis of the corresponding hardware, while enabling the user to take certain decisions.

The state-of-the-art in ASIP synthesis includes commercial efforts from Synopsys, which has accumulated three relevant portfolios: the ARC configurable processor cores, Processor Designer (previously LISATek) and the IP Designer nML-based tools (previously Target Compiler Technologies); ASIPmeister by ASIP Solutions (site down?), Lissom/CodAL by Codasip, and the academic TCE and NISC toolsets. Apologies if I have missed any other ASIP technology provider!

The key differentiation point of METATOR against existing approaches is that ASIP synthesis should not require the explicit definition of a processor model by a human developer. The solution implies the development of a novel scheme for the extraction of a common-denominator architectural model from a given set of user applications (accounting for high-level constraints and requirements) that are intended to be executed on the generated processor, by means of graph similarity extraction. From this automatically generated model, an RTL description, verification IP and a programming toolchain would be produced as part of an automated targeting process, in a "meta-" fashion: a generated model generating models!

2. Conceptual ASIP Synthesis Flow

METATOR would accept as input the so-called algorithmic soup (a narrow set of applications) and generate the ADL (Architecture Description Language) description of the processor. My first aim would be for ArchC, but this could also expand to the dominant ADLs, LISA 2.0 and nML.
METATOR would rely upon HercuLeS high-level synthesis technology and the YARDstick profiling and custom instruction generation environment. In the past, YARDstick has been used for generating custom instructions (CIs) for ByoRISC (Build Your Own RISC) soft-core processors. ByoRISC is a configurable in-order RISC design, allowing the execution of multiple-input, multiple-output custom instructions and achieving higher performance than typical VLIW architectures. CIs for ByoRISC were generated by YARDstick, whose purpose is to perform application analysis on targeted codes, identify application hotspots, extract custom instructions and evaluate their potential impact on code performance for ByoRISC.

3. Conclusion

To sum this up, METATOR is a thought experiment in ASIP synthesis technology. It automatically generates a full-fledged processor and toolchain merely from its usage intent, expressed as indicative targeted application sets.

Best regards
Nikolaos Kavvadias
http://www.nkavvadias.com
Article: 157086
elemapprox is a collection of ANSI C code, Verilog HDL modules and VHDL packages that provides the capability of evaluating and plotting transcendental functions by evaluating them in single precision. The original work supports ASCII plotting of a subset of the functions; this version provides a more complete list of functions in addition to bitmap plotting of the transcendental functions as PBM (monochrome) image files.

elemapprox has been conceived as an extension to Prof. Mark G. Arnold's work as published in HDLCON 2001. Most functions have been prefixed with the letter k in order to avoid function name clashes in both the ANSI C and Verilog HDL implementations. Currently, the VHDL version uses unprefixed names (e.g. acos instead of kacos).

All code is licensed under the Modified BSD license and can be found here: http://github.com/nkkav/elemapprox
Article: 157087
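As a rough idea of how the k-prefixed C functions could be exercised, here is a hypothetical harness that compares one of them against the libm reference. The prototype float kacos(float) is an assumption made for this example (the post only states that the C functions carry a k prefix); the real declarations and build instructions should be taken from the repository linked above.

/* Hypothetical harness comparing an elemapprox-style kacos() against the
 * libm reference.  The prototype below is assumed for illustration; use the
 * actual declaration from the elemapprox sources and link against them. */
#include <math.h>
#include <stdio.h>

float kacos(float x);   /* assumed single-precision prototype */

int main(void)
{
    for (float x = -1.0f; x <= 1.0f; x += 0.25f) {
        float ref = acosf(x);   /* libm reference value   */
        float app = kacos(x);   /* approximated value     */
        printf("x = %+5.2f  acosf = %.6f  kacos = %.6f  err = %+.3e\n",
               x, ref, app, app - ref);
    }
    return 0;
}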
I have started using the TI TUSB1210, which is a USB PHY with a ULPI interface.

However, I can virtually guarantee that during enumeration the device will lock up with DIR permanently high in High Speed mode, and seemingly with the terminating resistor enabled such that it keeps both D+ and D- low. I can make it happen quite reliably.

I have sent a few messages on the relevant TI forum and despite promises the TI guys there haven't got back to me, even when chased.

Unless people here suggest I persist with this device, can anyone recommend an alternative USB PHY with a ULPI interface that has fewer unintended features?

--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk
Article: 157088
Hi everyone,

I apologize if this is maybe not the best audience for this kind of enquiry, but I'll try anyhow.

I'm looking for a good SystemC/TLM 2.0 training course which is not too basic and can give me a head start on a real-life project.

I'm not a black belt in C++ but I'm familiar with most of its concepts on top of C (which I use quite often instead). Since we have a budget for training in our company I'd like to make something useful out of it, and given the current issues we are facing in architecting systems with increasingly complex feature sets, I believe that modeling would add value to our products and avoid many issues due to a wrong architecture.

Any ideas/suggestions?

p.s.: I have no problem starting some reading/testing by myself in order to fill the gap before attending the course.

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Article: 157089
alb <al.basili@gmail.com> wrote:
> I apologize if this is maybe not the best audience for this kind of enquiry, but I'll try anyhow.
> I'm looking for a good SystemC/TLM 2.0 training course which is not too basic and can give me a head start on a real-life project.

I know it isn't what you asked for, and it is my personal opinion, but I recommend that you learn and use Verilog or VHDL instead.

> I'm not a black belt in C++ but I'm familiar with most of its concepts on top of C (which I use quite often instead).

This is the reason why. Hardware design is different from software.

OK, when I started with Verilog some years ago now, it was suggested that for someone used to C, Verilog was a better choice than VHDL. Verilog has enough similarity to C to make it easy to learn, but not so much that you forget that you are designing hardware.

You should be able to think in terms of wires and gates, even when writing continuous assignment statements and logic expressions. You should be able to look at a logic schematic diagram and write HDL statements from it. You should not think in terms of sequential logic and C programs.

> Since we have a budget for training in our company I'd like to make something useful out of it, and given the current issues we are facing in architecting systems with increasingly complex feature sets, I believe that modeling would add value to our products and avoid many issues due to a wrong architecture.

I don't have any actual recommendations for training courses in any HDL, but I am sure that they exist for Verilog and VHDL.

-- glen
Article: 157090
Hi Glen,

glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:
[]
> I know it isn't what you asked for, and it is my personal opinion, but I recommend that you learn and use Verilog or VHDL instead.

I have been using VHDL for more than 10 years, and even though I agree that learning to use it more efficiently is good, unfortunately that is not a top priority for the time being.

>> I'm not a black belt in C++ but I'm familiar with most of its concepts on top of C (which I use quite often instead).
>
> This is the reason why. Hardware design is different from software.

I do not want to do hardware design. I want to explore the type of architecture in order to give the go-ahead to the designers. On top of that, an architecture may have several parameters (bus bandwidth, latencies, I/O throughput, etc.), therefore it is important to see 'easily' what the impact on the architecture is if we need to increase a value from A to B: will the overall design fall apart? Will the bandwidth be insufficient, will it require too much memory?

I've learned to simulate my VHDL designs thinking in terms of transactions, and I believe I can 'model' my system in behavioral VHDL, but I'm not sure it is the most efficient way. We have embedded processors for which a behavioral model would be hard to replicate, while several of them are available in SystemC.

IMHO a modeling phase should not give the impression that we can generate the VHDL from the said model. The main idea is to verify that we are not going against a wall during the implementation phase because we didn't have enough margins. A functional model should not have the low-level details, but the transactions should be representative of the amount of data flow foreseen. A simple example would be the amount of traffic on the on-board bus: every transfer should account for the amount of time it takes for the handshake on the bus, with timing values that match the intended platform.

Ideally then, the model may cover more than just the FPGA functions and go up to board level and system level (more boards together). I couldn't care less if the board has all the necessary buffers to handle the speed we want; on the contrary I need to see, given a specified throughput, what the optimal bus size is (8/16/32/64...). Power considerations may need to be added as well.

> OK, when I started with Verilog some years ago now, it was suggested that for someone used to C, Verilog was a better choice than VHDL. Verilog has enough similarity to C to make it easy to learn, but not so much that you forget that you are designing hardware.

I believe learning Verilog is too low in my priority list... (and will soon fall off the list!).

> You should be able to think in terms of wires and gates, even when writing continuous assignment statements and logic expressions.

I want to get out of the wires-and-gates level, at least for the phase I'm interested in. /Polluting/ the architecting phase with such a level of detail is risky and will inevitably blur the big picture.

[]
> I don't have any actual recommendations for training courses in any HDL, but I am sure that they exist for Verilog and VHDL.

I know already a couple of courses for VHDL that I'm interested in, but unfortunately they are not in my current priority list (but they are still in the list!).

Al
Article: 157091
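The throughput-versus-bus-width question raised just above can be sized with a few lines of arithmetic before any SystemC model exists. The sketch below is an editorial illustration with made-up numbers (bus clock, required payload rate and handshake overhead are assumptions, not values from the thread): it reports which common bus widths still meet a required throughput once per-transfer handshake cycles are charged.

/* Find which bus widths sustain a required payload rate when each beat
 * costs extra handshake cycles.  All figures are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double f_bus_mhz        = 50.0;    /* assumed bus clock                 */
    const double required_mb_s    = 120.0;   /* assumed payload requirement       */
    const double handshake_cycles = 2.0;     /* assumed extra cycles per transfer */
    const int    widths_bits[]    = { 8, 16, 32, 64 };

    for (unsigned i = 0; i < sizeof widths_bits / sizeof widths_bits[0]; i++) {
        double bytes_per_beat  = widths_bits[i] / 8.0;
        double cycles_per_beat = 1.0 + handshake_cycles;
        double mb_s = f_bus_mhz * bytes_per_beat / cycles_per_beat;  /* MB/s */
        printf("%2d-bit bus: ~%6.1f MB/s %s\n", widths_bits[i], mb_s,
               mb_s >= required_mb_s ? "<- meets the requirement" : "");
    }
    return 0;
}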
Hi Al,

On 09/10/2014 07:58, alb wrote:
> Hi everyone,
>
> I apologize if this is maybe not the best audience for this kind of enquiry, but I'll try anyhow.
>
> I'm looking for a good SystemC/TLM 2.0 training course which is not too basic and can give me a head start on a real-life project.
>
> I'm not a black belt in C++ but I'm familiar with most of its concepts on top of C (which I use quite often instead). Since we have a budget for training in our company I'd like to make something useful out of it, and given the current issues we are facing in architecting systems with increasingly complex feature sets, I believe that modeling would add value to our products and avoid many issues due to a wrong architecture.
>
> Any ideas/suggestions?

I would definitely recommend the Doulos course; I have done their Comprehensive SystemC one and it was very good. The only disappointment was that we ran out of time, so SCV, TLM and Synthesis were not properly looked at.

Good luck,
Hans
www.ht-lab.com

> p.s.: I have no problem starting some reading/testing by myself in order to fill the gap before attending the course.
Article: 157092
Hi Hans,

HT-Lab <hans64@htminuslab.com> wrote:
[]
>> I'm looking for a good SystemC/TLM 2.0 training course which is not too basic and can give me a head start on a real-life project.
[]
> I would definitely recommend the Doulos course; I have done their Comprehensive SystemC one and it was very good. The only disappointment was that we ran out of time, so SCV, TLM and Synthesis were not properly looked at.

I was hoping you would chime in, and knowing you recommend that course is a big reassurance. We are not so much interested in SCV or Synthesis, but we are definitely interested in TLM.

There's a possibility that more people at our site are interested, and then we would suddenly get past the right number to have the course held on site. I believe if that happens we can have a much more tailored course based on our needs and background.

Al
Article: 157093
On Wed, 08 Oct 2014 22:30:54 +0100, Mike Perkins wrote:
> I have started using the TI TUSB1210, which is a USB PHY with a ULPI interface.
>
> However, I can virtually guarantee that during enumeration the device will lock up with DIR permanently high in High Speed mode, and seemingly with the terminating resistor enabled such that it keeps both D+ and D- low. I can make it happen quite reliably.
>
> I have sent a few messages on the relevant TI forum and despite promises the TI guys there haven't got back to me, even when chased.
>
> Unless people here suggest I persist with this device, can anyone recommend an alternative USB PHY with a ULPI interface that has fewer unintended features?

I've used the SMSC / Microchip USB3320 in a few designs.

I have seen problems with the ULPI itself if the bus is allowed to float by the master, which can actually inject spurious register writes into the PHY. This can happen if the ULPI is on multipurpose pins on a microcontroller or FPGA that need to be programmed to be in ULPI mode.

BTW, I found I couldn't use the TI TUSB1210 part in my designs due to power supply rail sequencing requirements between the 3.3V and 1.8V rails. There's a diode between VBAT and VDDIO. This wasn't mentioned on the TI datasheet, despite being a feature of the silicon!

This may or may not be a problem for your design, depending on your power supply.

Regards,
Allan
Article: 157094
Hi alb,

If I may ask: where (which country) are you located? Have you ever attended another Doulos seminar? What is your experience, especially with respect to UVM?

> Hi Hans,

Hi Hans! Hope you are doing well.

Best regards
Nikolaos Kavvadias
http://www.nkavvadias.com
http://github.com/nkkav/elemapprox
Article: 157095
On 09/10/2014 11:29, Allan Herriman wrote:
> On Wed, 08 Oct 2014 22:30:54 +0100, Mike Perkins wrote:
>> I have started using the TI TUSB1210, which is a USB PHY with a ULPI interface.
>>
>> However, I can virtually guarantee that during enumeration the device will lock up with DIR permanently high in High Speed mode, and seemingly with the terminating resistor enabled such that it keeps both D+ and D- low. I can make it happen quite reliably.
>>
>> I have sent a few messages on the relevant TI forum and despite promises the TI guys there haven't got back to me, even when chased.
>>
>> Unless people here suggest I persist with this device, can anyone recommend an alternative USB PHY with a ULPI interface that has fewer unintended features?
>
> I've used the SMSC / Microchip USB3320 in a few designs.
>
> I have seen problems with the ULPI itself if the bus is allowed to float by the master, which can actually inject spurious register writes into the PHY. This can happen if the ULPI is on multipurpose pins on a microcontroller or FPGA that need to be programmed to be in ULPI mode.
>
> BTW, I found I couldn't use the TI TUSB1210 part in my designs due to power supply rail sequencing requirements between the 3.3V and 1.8V rails.
>
> There's a diode between VBAT and VDDIO. This wasn't mentioned on the TI datasheet, despite being a feature of the silicon!
>
> This may or may not be a problem for your design, depending on your power supply.

Many thanks for your insight.

This is connected to an FPGA, where I have access to and control of all the relevant pins. The power sequencing hasn't been a problem; perhaps I'm lucky?

Just applying reset to the PHY makes it come back to life!

--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk
Article: 157096
Hi Nikolaos,

Nikolaos Kavvadias <nikolaos.kavvadias@gmail.com> wrote:
> If I may ask: where (which country) are you located? Have you ever attended another Doulos seminar? What is your experience, especially with respect to UVM?

I've followed several webinars but never a course. If I ever manage to convince my managers to send me to their courses I'll post a review, no worries!

I live in Switzerland.

Al
Article: 157097
On 09/10/2014 13:04, Nikolaos Kavvadias wrote:
> Hi alb,
>
> If I may ask: where (which country) are you located? Have you ever attended another Doulos seminar? What is your experience, especially with respect to UVM?

Like Al, I have also watched some seminars/webexes on the UVM and they gave me a headache, so this is clearly something a 5-day training course would be ideal for.

>> Hi Hans,
>
> Hi Hans! Hope you are doing well.

Hi Nikolaos,

Yes thanks, I spent another great 2 weeks on one of your islands (Poros). To anybody who is fed up fighting FPGA, PCB and EDA tools, I would highly recommend a quiet Greek island for a few weeks ;-)

Regards,
Hans.
www.ht-lab.com
Article: 157098
Dear Hans,

> Hi Nikolaos,
>
> Yes thanks, I spent another great 2 weeks on one of your islands (Poros).

Great! My brothers visited near-by sites this year: Hydra, Spetses (islands) and Nafplion (mainland). Hydra and Spetses are quite close to Poros (the closest islands, anyway). Historically speaking, these tiny islands played a major part in the Greek War of Independence (1821-1828).

I don't know many details; this band of brothers rarely meets in its entirety these days: maybe once a year or every two years or so.

> To anybody who is fed up fighting FPGA, PCB and EDA tools, I would highly recommend a quiet Greek island for a few weeks ;-)

Nothing to add here ^_--

Best regards
Nikolaos Kavvadias
http://www.nkavvadias.com

> Regards,
> Hans.
> www.ht-lab.com
>
>> Best regards
>> Nikolaos Kavvadias
>> http://www.nkavvadias.com
>> http://github.com/nkkav/elemapprox
Article: 157099
On 09/10/14 11:16, alb wrote:
> Hi Hans,
>
> HT-Lab <hans64@htminuslab.com> wrote:
> []
>>> I'm looking for a good SystemC/TLM 2.0 training course which is not too basic and can give me a head start on a real-life project.
> []
>
>> I would definitely recommend the Doulos course; I have done their Comprehensive SystemC one and it was very good. The only disappointment was that we ran out of time, so SCV, TLM and Synthesis were not properly looked at.
>
> I was hoping you would chime in, and knowing you recommend that course is a big reassurance. We are not so much interested in SCV or Synthesis, but we are definitely interested in TLM.
>
> There's a possibility that more people at our site are interested, and then we would suddenly get past the right number to have the course held on site. I believe if that happens we can have a much more tailored course based on our needs and background.
>
> Al

I used to work for Doulos, and taught SystemC and TLM2 - we regularly ran courses at CERN, but normally VHDL and some Expert VHDL. Something makes me think you're based at CERN.

Since Hans did the course, the SCV content was relegated to an appendix. The main SystemC course included an introduction to TLM2; then there was a separate TLM2 course. Knowing C++ is a definite advantage!

Have a look at the Doulos website, or email info@doulos.com.

regards
Alan

--
Alan Fitch