On Jul 4, 11:02 pm, "MM" <mb...@yahoo.com> wrote:
> "rickman" <gnu...@gmail.com> wrote
> > For example, I can't find a way to reach a human at Twitter
>
> Why on earth a grown man would want to use Twitter? :)
>
> /Mikhail

I'm not trying to *use* it, I want them to stop sending me junk email! Didn't you read the post?

Rick

Article: 141726
Hi,

I have read the Device DNA in VHDL, but I have some problems. I have written code that certainly is not working well. If I put this code in the same FPGA but in different projects, I read two different device DNA values ... :-| ...

Has someone already written something like this?

Thanks very much.

Kappa.

Article: 141727
steve <steve@aol.com> wrote:
> On Mon, 29 Jun 2009 23:08:16 +0800, rickman wrote
> (in article
> <68320efd-477b-4818-95dd-d4639d7e2cd1@n19g2000vba.googlegroups.com>):
>
>> On Jun 28, 10:52 am, "Antti.Luk...@googlemail.com"
>> <Antti.Luk...@googlemail.com> wrote:
>>> On Jun 28, 5:09 pm, cpld-fpga-asic <cpld.fpga.a...@gmail.com> wrote:
>>>
>>>> Group for People Involved In the Design and Verification of FPGA's,
>>>> other Programmable Logic, and CPLD's to Exchange Idea's and
>>>> Techniques. You should have FPGA / CPLD Design / Verification on your
>>>> Profile. (The focus is more on FPGA/CPLD in the product as opposed to
>>>> FPGA's solely as a path to an ASIC) VHDL / Verilog / ABLE / SystemC
>>>> and other HDL's as well. Vendors included: Xilinx, Altera, Actel,
>>>> Lattice, Atmel, QuickLogic, Tabula, Silicon Blue, Mentor, Cadence,
>>>> Synopsys, Aldec, NI, Altium, and Many Others.
>>>
>>> could you describe the last technical FPGA related question
>>> that your linkedin networking group solved?
>>>
>>> unless you are able todo that, i see you repeated postings
>>> to c.a.f. as complete spam
>>>
>>> Antti
>>
>> Hi, I am one of the moderators at this group and I must be honest
>> about it. It is not a very technically oriented group. I have tried
>> to make some technically oriented posts there with few responses.
>> out would be a mistake.
>>
>> So I have given up on this group as well as other FPGA related groups
>> at LinkedIn. I have not removed myself from membership, but I can't
>> say I recommend them unless you wish to use it for employment or self
>> promotion.
>>
>> Rick
>
> I'm completely confused as to how you can have a FPGA group that is not
> "technically orientated", it would be like having a flower arranging class
> without the flowers.

LinkedIn is not about solving problems. It is about hiring the right people to solve problems. To get back to your example: you could find someone thru LinkedIn that can arrange the flowers for you.
--
Failure does not prove something is impossible, failure simply indicates you are not using the right tools...
"If it doesn't fit, use a bigger hammer!"
--------------------------------------------------------------

Article: 141728
On Jul 5, 10:33 am, "Kappasm" <qag...@tin.it> wrote:
> Hi,
>
> I have read the Device DNA in VHDL, but I have some problems. I have written
> code that certainly is not working well. If I put this code in the same fpga
> but in different projects, I read two different dna device ... :-| ...
>
> If someone has already written something ?
>
> Thanks very much.
>
> Kappa.

just find the problem it has, it DEFINITELY works and reads the same DNA on the same chip no matter the way you use the readout

I have written my own code that reads the same code as Ken Chapman's reader, so I know it all just works

Antti

Article: 141729
Hi Antti,

> I have written own code that reads same code as ken chapmans reader
> so i know it all just works

Could you post the code?

Thanks very much.

Kappa.

Article: 141730
On Jul 5, 3:29 pm, Kappa <secure...@gmail.com> wrote:
> Hi Antti,
>
>> I have written own code that reads same code as ken chapmans reader
>> so i know it all just works
>
> You could post code ?
>
> Thanks very much.
>
> Kappa.

you dont ask the right questions ;)

my system is a customized MicroBlaze based SoC, and the code is

unsigned int dna_high, dna_low;  /* not declared in the original post; added so the snippet compiles */

void DNA_Read()
{
    int i;
    SetPin(DNA_CLK, 0);
    SetPin(DNA_SHIFT, 0);
    /* Load DNA value */
    SetPin(DNA_READ, 1);
    PulsePin(DNA_CLK, 1);
    /* Go to shift mode */
    SetPin(DNA_READ, 0);
    SetPin(DNA_SHIFT, 1);
    /* 57 bits of DNA shift register to read out */
    dna_high = 0;
    dna_low = 0;  /* also needs clearing before the shift loop */
    for (i = 0; i < (57 - 32); i++) {
        dna_high <<= 1;
        dna_high |= GetPin(DNA_DOUT);
        PulsePin(DNA_CLK, 1);
    }
    for (i = 0; i < 32; i++) {
        dna_low <<= 1;
        dna_low |= GetPin(DNA_DOUT);
        PulsePin(DNA_CLK, 1);
    }
}

int main (void)
{
    LCD_Init();
    DNA_Read();
    xil_printf("%08X%08X", dna_high, dna_low);
    return 0;
}

it doesnt help you i guess

Antti

Article: 141731
Hi Antti,

> you dont ask the right questions ;)

Sorry for my not-right questions ... :-) ...

> my system is customized microblazed based soc, and code is

That is what I tried to do ....

In what way have you connected "dna_port" with the pcore?

I had created a state machine in VHDL; with this I place the "device dna" in a 64-bit register. Can I connect my 64 MHz clock to the "dna_port" clock?

> it doesnt help you i guess

What is important is to help

Kappa.

Article: 141732
As I was surfing I noticed a reference to the ARM6 Booth algorithm.

http://209.85.229.132/search?q=cache:M97u-qr5IGYJ:www.cl.cam.ac.uk/~acjf3/papers/mul.pdf+ARM7+MUL+signed+unsigned+multiply&cd=7&hl=en&ct=clnk&gl=uk

In the appendix you get a C-style algorithm. Whether you can copy this I don't know. There is a reference to getting it from ARM at the bottom. If this reduces the number of adders I will have a go at it. Not right now, sorry.

Andy

Article: 141733
On Jul 5, 10:14 am, Andy Botterill <a...@plymouth2.demon.co.uk> wrote:
> As I was surfing I noticed a reference to the ARM6 booth algorithm.
> http://209.85.229.132/search?q=cache:M97u-qr5IGYJ:www.cl.cam.ac.uk/~a...
> In the appendix you get a C style algorithm. Whether you can copy this I
> don't know. There is a reference to getting it from ARM at the bottom.
> If this reduces the number of adders I will have a go at it. Not right
> now sorry. Andy

Maybe it's just me, but why would someone write in pseudo code and not make the details very clear? Here is the line I am ranting about.

while (not (rs = 0) or borrow) and n < wl do ...

I suppose if you *assume* this pseudo code is like C, you can *assume* that the C rules for precedence apply. But I don't see enough similarity to C to make that assumption (e.g. "rd := if accumulate then rn else 0" is clearly *not* C). So I am left wondering just how to evaluate the conditional expression. On the other hand, if this expression is what I think it is, the addition of two more sets of parentheses would make it unambiguous.

Heck, I gave up long ago trying to remember rules of precedence for the many languages I write in and *always* make my code unambiguous by adding as many parentheses as needed. Sometimes this can look unwieldy and verbose, but my intent is perfectly clear and no one will be looking at the code asking themselves if I meant what I wrote or if I made a mistake and meant something else.

Maybe this hits a nerve with me because of being bitten by poorly written pseudo code in a research paper I had to read. The paper was about a tree search algorithm and if you couldn't understand the pseudo code *exactly*, then there was no point to the paper. Their pseudo code wasn't even this good and we had a devil of a time figuring it out. :^( On the other hand the other student I was working with on this was a very attractive coed and I enjoyed every minute of working on it. :^)

rick

Article: 141734
On Jul 5, 10:14 am, Andy Botterill <a...@plymouth2.demon.co.uk> wrote:
> As I was surfing I noticed a reference to the ARM6 booth algorithm.
> http://209.85.229.132/search?q=cache:M97u-qr5IGYJ:www.cl.cam.ac.uk/~a...
> In the appendix you get a C style algorithm. Whether you can copy this I
> don't know. There is a reference to getting it from ARM at the bottom.
> If this reduces the number of adders I will have a go at it. Not right
> now sorry. Andy

I've tried to weed my way through the pseudo code in the appendix and it seems to have some problems. First, I don't see where this is any different from the "modified" Booth's algorithm where the multiplier is shifted two bits at a time rather than just one. The result is that it requires more complex hardware for the calculation of the partial product, but calculates half as many.

That brings us to two mistakes in the pseudo code. The first is that the loop control variable is n which is incremented by 1 on each iteration of the loop and ends the loop when n = wl (the number of input bits in the multiplier). So it would appear that the loop is being iterated for ***EACH*** bit in the multiplier while all the calculations are for iterating over each *PAIR* of bits in the multiplier.

The other mistake is where they state that the output register can have just wl bits (the same as the number of input bits). The way the pseudo code is written the most significant bits of the product would be lost. I suppose this could be acceptable if you know this suits your data. But since they are designing a CPU for general purpose work, this seems like it would create problems.

They do end the loop early if rs becomes zero, indicating there are no more additions. But in the worst case it will require wl/2 iterations of the loop.

Rick

Article: 141735
rickman wrote:
> On Jul 5, 10:14 am, Andy Botterill <a...@plymouth2.demon.co.uk> wrote:
>> As I was surfing I noticed a reference to the ARM6 booth algorithm.
>> http://209.85.229.132/search?q=cache:M97u-qr5IGYJ:www.cl.cam.ac.uk/~a...
>> In the appendix you get a C style algorithm. Whether you can copy this I
>> don't know. There is a reference to getting it from ARM at the bottom.
>> If this reduces the number of adders I will have a go at it. Not right
>> now sorry. Andy
>
> Maybe it's just me, but why would someone write in pseudo code and not
> make the details very clear? Here is the line I am ranting about.

I would assume that the ARM6 was designed at gate level, e.g. RTL was not done.

> while (not (rs = 0) or borrow) and n < wl do ...

Rs is one of the numbers to be multiplied. The algorithm terminates when Rs == 0.

The code does look reasonable. However I have spent more time with C compared to Verilog.

The key bits are the two case statements.

> I suppose if you *assume* this pseudo code is like C, you can *assume*
> that the C rules for precedence apply. But I don't see enough
> similarity to C to make that assumption (e.g. "rd := if accumulate then
> rn else 0" is clearly *not* C). So I am left wondering just how to
> evaluate the conditional expression. On the other hand, if this
> expression is what I think it is, the addition of two more sets of
> parentheses would make it unambiguous.
>
> Heck, I gave up long ago trying to remember rules of precedence for
> the many languages I write in and *always* make my code unambiguous by
> adding as many parentheses as needed. Sometimes this can look
> unwieldy and verbose, but my intent is perfectly clear and no one will
> be looking at the code asking themselves if I meant what I wrote or if
> I made a mistake and meant something else.
>
> Maybe this hits a nerve with me because of being bitten by poorly
> written pseudo code in a research paper I had to read. The paper was
> about a tree search algorithm and if you couldn't understand the
> pseudo code *exactly*, then there was no point to the paper. Their
> pseudo code wasn't even this good and we had a devil of a time
> figuring it out. :^( On the other hand the other student I was
> working with on this was a very attractive coed and I enjoyed every
> minutes of working on it. :^)

Struggles to remember that long ago. When I were at university I don't think there were any girls doing electronics.

> rick

Article: 141736
On Jul 5, 4:31 pm, Kappa <secure...@gmail.com> wrote:
> Hi Antti,
>
>> you dont ask the right questions ;)
>
> Corry for my not right questions ... :-) ...
>
>> my system is customized microblazed based soc, and code is
>
> Is what I tried to do ....
>
> In that way you have connected "dna_port" with pcore ?
>
> I had created a state machine in VHDL, with this I place a "device
> dna" in 64-bit register. I can connect my clock "64MHz" to "dna_port"
> clock ?
>
>> it doesnt help you i guess
>
> Important is to help
>
> Kappa.

i think 64 MHz is too high, check the datasheet

Antti

Article: 141737
On Jul 5, 11:28 am, Andy Botterill <a...@plymouth2.demon.co.uk> wrote:
> rickman wrote:
>> On Jul 5, 10:14 am, Andy Botterill <a...@plymouth2.demon.co.uk> wrote:
>>> As I was surfing I noticed a reference to the ARM6 booth algorithm.
>>> http://209.85.229.132/search?q=cache:M97u-qr5IGYJ:www.cl.cam.ac.uk/~a...
>>> In the appendix you get a C style algorithm. Whether you can copy this I
>>> don't know. There is a reference to getting it from ARM at the bottom.
>>> If this reduces the number of adders I will have a go at it. Not right
>>> now sorry. Andy
>>
>> Maybe it's just me, but why would someone write in pseudo code and not
>> make the details very clear? Here is the line I am ranting about.
>
> I would assume that the ARM6 was designed at gate level e.g RTL was not
> done.

I am sure this *is* done in RTL. There may be hand tweaking of some details, but for the most part ARM does not sell hard IP, or maybe I should say, few *buy* their hard IP. Atmel bought that once and ARM would not give them anything they could use to make any mods to the design. From then on Atmel has bought HDL and done their own implementations.

>> while (not (rs = 0) or borrow) and n < wl do ...
>
> Rs is one of the numbers to be multiplied. The algorithm terminates when
> Rs == 0.
>
> The code does look reasonable. However I have spent more time with C
> compared to Verilog.
>
> The key bits are the two case statements.

I understand what it is saying, I was just ranting that for a paper to be written this way was very sloppy, IMHO. I think many people write papers that are not intended to be read.

>> Maybe this hits a nerve with me because of being bitten by poorly
>> written pseudo code in a research paper I had to read. The paper was
>> about a tree search algorithm and if you couldn't understand the
>> pseudo code *exactly*, then there was no point to the paper. Their
>> pseudo code wasn't even this good and we had a devil of a time
>> figuring it out. :^( On the other hand the other student I was
>> working with on this was a very attractive coed and I enjoyed every
>> minutes of working on it. :^)
>
> Struggles to remember that long ago. When I were at university I don't
> think there were any girls doing electronics.

Yes, I lucked out in that regard. Girls were just starting to show up in engineering and my classes had two girls (one engaged and one very not engaged) and a bunch of geeks. I was the least geeky of the bunch...

Rick

Article: 141738
rickman <gnuarm@gmail.com> wrote:
(snip, someone wrote)

<> http://sunnyeves.blogspot.com/

< That calculation looks complex, but isn't it really just

< int(n) = int(n-1) + 0.5 * (f(n) - f(n-1))

< The 0.5 times the difference assumes that the function is a straight
< line between the two end points and when added to the min value is
< just the average of the two points. This is an approximation, but
< depending on your needs will be adequate.

< I would argue that for an arbitrary function, there is no advantage to
< using the average of each two points over just summing the points.
< Consider points 0 to N where N is a large number.

<   N
<  ---
<  \   f(n) = f(1) + f(2) + ... + f(N)
<  /
<  ---
<  n=1

<   N
<  ---
<  \   avg(f(n), f(n-1)) = 0.5 * f(0) + f(1) + f(2) + ... + 0.5 * f(N)
<  /
<  ---
<  n=1

< Notice that the only difference is that the average needs an extra
< input point to calculate the first average and that the two end points
< of the summation are halved. Numerically the difference between the
< two calculations is 0.5 * (f(N) - f(0)). It appears to me to be a
< very minuscule error to just add all the points without the complexity
< of averaging. I would bet that for any value of N, 256 or over, this
< error in the integral is much less than the error you get by the
< original straight line average approximation.

Sure, but for small N it might be important.

< In fact, whether the average is correct or not depends on how you
< picture the error formation. This is too complex to draw here, but if
< you picture the sample as being centered in the region being
< integrated by adding that value, then the error is only a function of
< the second order components of f(x). To require an average
< calculation you are assuming that the area being calculated for a
< given point is the area *between* two points.

There is a similar explanation of the uselessness of Simpson's rule in Numerical Recipes. Simpson's rule has 1/3's and 2/3's that look important. Using a similar argument, NR shows that it is fancy decoration for the effect at the ends.

< I could explain this more fully, but it is very hard to do without a
< drawing.

-- glen

Article: 141739
rickman <gnuarm@gmail.com> wrote:

< I understand perfectly your original calculations. They are not
< complex to understand. However, they are much more complex than
< required. That is why I made my post explaining how in the infinite
< series, your approach approximates the simple sum of the input
< values. The only difference is that your approach can only produce
< (N-1) output values for N input values and as a consequence, the final
< output will not include the area of one column because you are always
< subtracting out the first one.

< Your approach errs in the thinking that the area is defined by the
< points "surrounding" the area. What it is ignoring is that the points
< are *included* in the area. I suppose that initial int(0) could be
< accounting for this somehow, without specifying how that
< initialization is to be done.

This is confusing. The points have zero width and integrate to zero. Left out is the actual limit for the integral. If it includes one half a sample period before the first point, and one half after the last point, then just add. If it doesn't, then you need to divide the first and last point by two.

< I made a typo in my original equation which is equivalent to your more
< complex calculation.

< int(n) = int(n-1) + 0.5 * (f(n) - f(n-1))

< should have been

< int(n) = int(n-1) + 0.5 * (f(n) + f(n-1))

< If you need me to, I can show you in a step-wise manner how this is
< equivalent to your equation,

< int(n) = int(n-1) + 0.5 * diff(f(n), f(n-1)) + min(f(n), f(n-1))

< Other than the end points of a series, your equation adds in half of
< each data point at two separate times. So each point is summed to
< produce the integral. Your calculation simply omits half of each end
< point.

< The mistake that is often made when dealing with discrete time samples
< is thinking that they are the same as the instantaneous values of a
< continuous function. In reality they are already integrals of the
< amplitude and the sample period (1/f). That is why you only need to
< sum them to obtain the integral over a series.

For the sampling theorem to work they must be samples of the instantaneous value. Otherwise the result is filtered and the results will be wrong when used as filter input (or almost any other use.)

< A simple sum is the correct way to calculate the integral of a
< discrete time data series.

If you can manage the sample points in the center of the bins, yes. Certainly that is preferred, but not always possible.

-- glen

Article: 141740
On Jul 2, 10:07 am, Kim Enkovaara <kim.enkova...@iki.fi> wrote:
> Sharanbr wrote:
>> How do I ensure that I have got a fairly accurate pinout (assuming
>> meeting timing is not an issue) when the design is still under
>> development. Do people create a dummy FPGA top and use special tools
>> that only check the validity of the pinouts wrt the selected device?
>
> The most simplistic method is just to create a dummy toplevel design,
> and do pinmapping to that file with the help of the tools and manual,
> and run the basic DRC checks.
>
> In bigger designs that is not usually enough. If complex clocking or a big
> amount of special blocks (serdes for example) are used I at least
> recommend a toplevel that has the clocking structures and the special blocks
> instantiated. That file can then be run with the pinmapping through the
> normal P&R flow to make sure that it is implementable.
>
> And for good PCB layout the pinmapping has to be loaded into PCB level
> tools (Mentor I/O Designer etc.) and the new pinmapping has to be
> verified again in the P&R flow after each modification.
>
> --Kim

Thanks everyone.

I would like to know how often it happens that the pinouts (defined early in the cycle) have to change after the P&R. I would assume this would depend on:

1) how aggressive the timings are
2) how accurate the flow used to define the early pinout was (as Kim suggested, having top level reset/clocking structures)

Do you agree with the assessment?

Article: 141741
Hi everybody!

Is it possible to use ChipScope as a USB protocol analyzer? Perhaps through some API of ChipScope?

By the way, is there any accessible API for ChipScope in order to do data analysis, a custom GUI, etc.?

Thanks in advance!

Article: 141742
On Jul 6, 1:33 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> rickman <gnu...@gmail.com> wrote:
>
> < I understand perfectly your original calculations. They are not
> < complex to understand. However, they are much more complex than
> < required. That is why I made my post explaining how in the infinite
> < series, your approach approximates the simple sum of the input
> < values. The only difference is that your approach can only produce
> < (N-1) output values for N input values and as a consequence, the final
> < output will not include the area of one column because you are always
> < subtracting out the first one.
>
> < Your approach errs in the thinking that the area is defined by the
> < points "surrounding" the area. What it is ignoring that the points
> < are *included* in the area. I suppose that initial int(0) could be
> < accounting for this somehow, without specifying how that
> < initialization is to be done.
>
> This is confusing. The points have zero width and integrate
> to zero. Left out is the actual limit for the integral. If it
> includes one half a sample period before the first point, and
> one half after the last point, then just add. If it doesn't,
> then you need to divide the first and last point by two.
>
> < I made a typo in my original equation which is equivalent to your more
> < complex calculation.
>
> < int(n) = int(n-1) + 0.5 * (f(n) - f(n-1))
>
> < should have been
>
> < int(n) = int(n-1) + 0.5 * (f(n) + f(n-1))
>
> < If you need me to, I can show you in a step wise manner how this is
> < equivalent to your equation,
>
> < int(n) = int(n-1) + 0.5 * diff(f(n), f(n-1)) + min(f(n), f(n-1))
>
> < Other than the end points of a series, your equation adds in half of
> < each data point at two separate times. So each point is summed to
> < produce the integral. Your calculation simply omits half of each end
> < point.
>
> < The mistake that is often made when dealing with discrete time samples
> < is thinking that they are the same as the instantaneous values of a
> < continuous function. In reality they are already integrals of the
> < amplitude and the sample period (1/f). That is why you only need to
> < sum them to obtain the integral over a series.
>
> For the sampling theorem to work they must be samples of the
> instantaneous value. Otherwise the result is filtered and
> the results will be wrong when used as filter input (or almost
> any other use.)
>
> < A simple sum is the correct way to calculate the integral of a
> < discrete time data series.
>
> If you can manage the sample points in the center of the bins, yes.
> Certainly that is preferred, but not always possible.
>
> -- glen

Glen,

What you have said is a self-contradiction. If there is no averaging at all (an impossibility - all measurements are made within a finite time window) and the point measures a value at an exact point in time, then there is no bin to be in the center of or at the edge of. They are just measurements of a point in time. What would define the bin? In fact, what *IS* a bin in this context?

It is a matter of perspective which I believe is shown clearly in the calculations. As I showed above (or I tried to show) the two calculations performed on an infinite time series are exactly equivalent, and when performed on a finite time series the only difference is that one method includes all the samples with a multiplier of 1 and the other uses a multiplier of 0.5 only on the end points. The distinction of looking at the "bins" as being around the sample or between the samples is a red herring and has no relation to the real math of integrating a time sampled signal.

|     |     |     |--+--|
|     |     |--+--|  *  |
|     |--+--|  *  |  *  |
|--+--|  *  |  *  |  *  |
|  *  |  *  |  *  |  *  |
|  *  |  *  |  *  |  *  |
  t0    t1    t2    t3
   3     4     5     6

In this example, the + indicates the value of the signal f(t) at that point in time. The integral calculation assumes that the value of f(t) between the time samples is a straight line. By treating the period as "1" and summing the values, you are in effect calculating the areas shown above with the point centered in the rectangles. The areas on the left and right of the point, between the top of the rectangle and the straight line approximation, have opposite signs since one makes the area larger than it should be and the other makes it smaller. But they are equal in value and cancel.

The other method draws rectangles between the points with a height equal to the average of each two points. This is a much more complex calculation and gives the same result. There is no reason to complicate the calculations using this method. Neither calculation has a basis in the physics of the problem. They are just equivalent numerical approximations.

Rick

Article: 141743
On Sun, 5 Jul 2009 16:23:01 +0800, Nico Coesel wrote (in article <4a506294.2904187875@news.planet.nl>):

> steve <steve@aol.com> wrote:
>
>> On Mon, 29 Jun 2009 23:08:16 +0800, rickman wrote
>> (in article
>> <68320efd-477b-4818-95dd-d4639d7e2cd1@n19g2000vba.googlegroups.com>):
>>
>>> On Jun 28, 10:52 am, "Antti.Luk...@googlemail.com"
>>> <Antti.Luk...@googlemail.com> wrote:
>>>> On Jun 28, 5:09 pm, cpld-fpga-asic <cpld.fpga.a...@gmail.com> wrote:
>>>>
>>>>> Group for People Involved In the Design and Verification of FPGA's,
>>>>> other Programmable Logic, and CPLD's to Exchange Idea's and
>>>>> Techniques. You should have FPGA / CPLD Design / Verification on your
>>>>> Profile. (The focus is more on FPGA/CPLD in the product as opposed to
>>>>> FPGA's solely as a path to an ASIC) VHDL / Verilog / ABLE / SystemC
>>>>> and other HDL's as well. Vendors included: Xilinx, Altera, Actel,
>>>>> Lattice, Atmel, QuickLogic, Tabula, Silicon Blue, Mentor, Cadence,
>>>>> Synopsys, Aldec, NI, Altium, and Many Others.
>>>>
>>>> could you describe the last technical FPGA related question
>>>> that your linkedin networking group solved?
>>>>
>>>> unless you are able todo that, i see you repeated postings
>>>> to c.a.f. as complete spam
>>>>
>>>> Antti
>>>
>>> Hi, I am one of the moderators at this group and I must be honest
>>> about it. It is not a very technically oriented group. I have tried
>>> to make some technically oriented posts there with few responses.
>>> out would be a mistake.
>>>
>>> So I have given up on this group as well as other FPGA related groups
>>> at LinkedIn. I have not removed myself from membership, but I can't
>>> say I recommend them unless you wish to use it for employment or self
>>> promotion.
>>>
>>> Rick
>>
>> I'm completely confused as to how you can have a FPGA group that is not
>> "technically orientated", it would be like having a flower arranging class
>> without the flowers.
>
> LinkedIn is not about solving problems. It is about hiring the right
> people to solve problems. To get back to your example: you could find
> someone thru LinkedIn that can arrange the flowers for you.

We are not talking about "LinkedIn", we are talking about a technical group within the "LI" network. If you cannot keep up, there is Lego and crayons over in the play area.

Article: 141744
On Mon, 6 Jul 2009 21:02:42 +0800, steve <steve@aol.com> wrote:
> we are not talking about "linkedin", we are talking about a technical group
> within the "LI" network.
> If you cannot keep up, there is Lego and crayons over in the play area.

Ah, the New Media Priesthood. Their mantra: If you Don't Get It, we will patronize you to death.

Me, I definitely Don't Get It, and I really don't care all that much, and I've been patronized in the past by people I respect a lot more than the Noo Meeja Vangelists, so I don't feel too worried.

Is there a LinkedIn group for elderly curmudgeons? I'm in there...
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.

Article: 141745
On Jul 6, 1:25 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> rickman <gnu...@gmail.com> wrote:
>
> (snip, someone wrote)
>
> <> http://sunnyeves.blogspot.com/
>
> < That calculation looks complex, but isn't it really just
>
> < int(n) = int(n-1) + 0.5 * (f(n) - f(n-1))
>
> < The 0.5 times the difference assumes that the function is a straight
> < line between the two end points and when added to the min value is
> < just the average of the two points. This is an approximation, but
> < depending on your needs will be adequate.
>
> < I would argue that for an arbitrary function, there is no advantage to
> < using the average of each two points over just summing the points.
> < Consider points 0 to N where N is a large number.
>
> <   N
> <  ---
> <  \   f(n) = f(1) + f(2) + ... + f(N)
> <  /
> <  ---
> <  n=1
>
> <   N
> <  ---
> <  \   avg(f(n), f(n-1)) = 0.5 * f(0) + f(1) + f(2) + ... + 0.5 * f(N)
> <  /
> <  ---
> <  n=1
>
> < Notice that the only difference is that the average needs an extra
> < input point to calculate the first average and that the two end points
> < of the summation are halved. Numerically the difference between the
> < two calculations is 0.5 * (f(N) - f(0)). It appears to me to be a
> < very minuscule error to just add all the points without the complexity
> < of averaging. I would bet that for any value of N, 256 or over, this
> < error in the integral is much less than the error you get by the
> < original straight line average approximation.
>
> Sure, but for small N it might be important.

Actually, the difference is that for a finite series, one approximates the integral of the curve at N points and the other approximates the integral of N-1 points. This is easy to see if you consider the width of the area being approximated. For a constant value using two points, the simple sum is twice the value of a single point and three points gives a value half again as large as two points. Using the average method you can't calculate a value for one point and the value for three points is *twice* the value for two points. That's because the area being approximated using three points is only two periods wide, not three.

Time sampled series are not continuous functions and cannot be analyzed the same way.

Rick

Article: 141746
Does anyone know of anything similar to the Suzaku SZ030/SZ130? It's just about a perfect fit for a short production run product I'm helping a friend with. Perfect, that is, except the price.

What we need is an FPGA as good as or better than a XC3S1000, 1 MB or more of RAM (SRAM or SDRAM) and 100 Mb Ethernet. It does not absolutely have to be a Spartan. An Altera Cyclone of some flavor would do if the price were right. We already have a lot of development done using Digilent Spartan boards. We don't need the MicroBlaze as we have a CPU from OpenCores that is adequate.

Article: 141747
On Jul 6, 9:12 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com> wrote: > On Mon, 6 Jul 2009 21:02:42 +0800, steve <st...@aol.com> wrote: > >we are not talking about "linkedin", we are talking about a technical group > >within the "LI" network. > >If you cannot keep up, there is Lego and crayons over in the play area. > > Ah, the New Media Priesthood. Their mantra: If you > Don't Get It, we will patronize you to death. > > Me, I definitely Don't Get It, and I really don't > care all that much, and I've been patronized in > the past by people I respect a lot more than the > Noo Meeja Vangelists, so I don't feel too worried. > > Is there a LinkedIn group for elderly curmudgeons? > I'm in there... > -- > Jonathan Bromley, Consultant I haven't found one, but you can start it! I'll probably join as well. What is it about getting old anyway??? Rick

Article: 141748
On Jun 30, 4:24 pm, Svenn Are Bjerkem <svenn.bjer...@googlemail.com> wrote: > I'm wondering how other VHDL programmers solve their CSR bookkeeping, > or maybe there is a tool out there that I haven't found. It doesn't > need to be open source; if it is any good we will buy it. From the > efforts by Wilson Snyder on Vregs I take that this is not a tool that > I can write overnight myself. Maybe it is portable since it is written > in perl, but Verilog and VHDL do think differently, and I am not > perl savvy. (Can't even read my own code after six weeks)

I wrote TCL code compiled into an executable so you don't need TCL installed to run it. There are two programs, reg2vhdl.exe and reg2groff.exe; the first generates VHDL and the second groff code. A snippet of the input text file:

#-------------------------------------------------------------------------------
# Processor Interface Register File
# pint_reg.reg
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Revision Register
#-------------------------------------------------------------------------------
NAME: REV ADDR: 0x0 DIR: R
BITS: 7:0 Revision
# This register stores the revision of the SPI module.

#-------------------------------------------------------------------------------
# Control Register
#-------------------------------------------------------------------------------
NAME: CONTROL_WR ADDR: 0x1 DIR: W
BIT: 2 UartMode
# 0 = Normal Operation (default). 1 = Configuration Mode.
BITS: 1:0 LoopbackSel
# Selects SPI loop back source. \
# SPI_0 : LoopSel<1:0> = 00 (default) \
# SPI_1 : LoopSel<1:0> = 01 \
# SPI_2 : LoopSel<1:0> = 10 \
# TEST  : LoopSel<1:0> = 11 \
# Note: TEST is hardcoded to 0xA5.

NAME: CONTROL_RD ADDR: 0x1 DIR: R
BIT: 2 UartMode
# 0 = Normal Operation (default). 1 = Configuration Mode.
BITS: 1:0 LoopbackSel
# SPI loop back source. \
# SPI_0 : LoopSel<1:0> = 00 (default) \
# SPI_1 : LoopSel<1:0> = 01 \
# SPI_2 : LoopSel<1:0> = 10 \
# TEST  : LoopSel<1:0> = 11 \
# Note: TEST is hardcoded to 0xA5.

The portion after the # character is a comment and ends up in a groff table (later converted to PDF). Documentation then becomes part of the makefile script. The program wasn't that difficult to write and it's getting lots of use in our design group. It's not a huge time saver, but it does keep things in sync because the makefile generates both VHDL and documentation from the same source file. I went with groff as a documentation language because it's pretty easy to learn and is open source. Pete

Article: 141749
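Pete's input format is simple enough that a toy parser fits in a few lines. The sketch below is my own Python illustration, not his TCL implementation; the function name `parse_regs` and the dict layout are assumptions, and it handles only the NAME/ADDR/DIR and BIT/BITS lines shown above, treating everything after `#` as documentation:

```python
def parse_regs(text):
    """Parse the minimal NAME/ADDR/DIR + BIT/BITS register format into dicts.

    Hypothetical sketch -- field names and structure are my assumptions,
    not Pete's actual reg2vhdl/reg2groff tools.
    """
    regs = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # text after '#' is documentation
        if not line or line.startswith("\\"):  # skip blanks and continuations
            continue
        tokens = line.split()
        if tokens[0] == "NAME:":
            # e.g. "NAME: REV ADDR: 0x0 DIR: R" -> key/value token pairs
            kv = {tokens[i].rstrip(":"): tokens[i + 1]
                  for i in range(0, len(tokens), 2)}
            regs.append({"name": kv["NAME"], "addr": int(kv["ADDR"], 16),
                         "dir": kv["DIR"], "fields": []})
        elif tokens[0] in ("BIT:", "BITS:"):
            # e.g. "BITS: 7:0 Revision" or "BIT: 2 UartMode"
            regs[-1]["fields"].append({"range": tokens[1], "name": tokens[2]})
    return regs

sample = """NAME: REV   ADDR: 0x0  DIR: R
BITS: 7:0  Revision    # revision of the SPI module
NAME: CONTROL_WR  ADDR: 0x1  DIR: W
BIT: 2    UartMode     # 0 = Normal, 1 = Configuration
BITS: 1:0  LoopbackSel # SPI loop back source
"""
regmap = parse_regs(sample)
```

From a structure like `regmap` it is then straightforward to emit both the VHDL register decode and the documentation table from one source, which is the whole point of Pete's setup.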
Hello everyone, I am trying to design a low-pass decimator filter in MATLAB. I am supposed to decimate a signal sampled at 200 MHz down to 10 MHz. The signal bandwidth is 8 MHz and the signal spectrum is centred at the sampling frequency. I began with the following code:

---------------------------------------------------------------
Fs_adc = 200e6;         % ADC Sampling Frequency
Fpass1 = 4e6;           % Passband Frequency
Fstop1 = 5e6;           % Stopband Frequency
Astop1 = 80;            % Aliasing Attenuation (dB)
M1 = 20;                % Decimation Factor
TW1 = Fstop1 - Fpass1;  % Transition Width (Hz)

dlow1 = fdesign.decimator(M1, 'nyquist', M1, TW1, Astop1, Fs_adc);
hlow1 = design(dlow1);  % Lowpass prototype
Hlpf1 = mfilt.firdecim(M1, hlow1.Numerator);
fs_ds = dlow1.Fs_out;
decimated_signal = filter(Hlpf1, sampled_signal);
---------------------------------------------------------------

This gave me a Direct-Form FIR Polyphase Decimator. Now when I look at the Hlpf1 variable, it shows me 1005 coefficients (the Hlpf1.Numerator variable). Considering I have to implement this filter on an FPGA, this appears to be a very large number of taps. A little reading on polyphase FIR filters indicated to me that the number of taps *per sub-filter* in a polyphase structure should be ceil(1005/M1) = 51, and so I would have M1 = 20 such sub-filters. Would this mean that implementing the above filter in polyphase form has reduced the number of taps? I still have 1005 different coefficients in the polyphase structure. Or is it that since I would only be using 51 taps at a time, I only need 51 multipliers in the hardware, time-multiplexed with 20 sets of 51 coefficients, hence saving hardware on a Xilinx Virtex-5 FPGA? Further, the rate of time-multiplexing the filter coefficients is 10 MHz. Or if this is not a good way to decimate the signal while also achieving good output signal quality, kindly let me know other ideas.
I have also tried a CIC-Compensator-Halfband combination, but I really liked the response of the polyphase FIR decimator. Or should I try more than one stage of polyphase FIR decimation (say, decimate by 5 followed by decimate by 4) to further reduce the number of taps? Regards, vizziee.
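The key point behind vizziee's question is that the polyphase structure does not reduce the 1005 coefficients; it only rearranges the arithmetic so that no multiply is wasted on a sample that will be discarded. The following pure-Python sketch (my own illustration, not the MATLAB mfilt object, with small stand-in sizes instead of 1005 taps) shows that splitting the filter into M phases and feeding each phase a downsampled stream produces exactly the same output as filtering at the full rate and then keeping every M-th sample:

```python
import random

def fir(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def decimate_naive(x, h, M):
    """Filter at the full input rate, then keep every M-th output sample."""
    return fir(x, h)[::M]

def decimate_polyphase(x, h, M):
    """Split h into M phases h[p::M]; each phase filters its own
    downsampled input stream, so every multiply contributes to a
    retained output sample."""
    phases = [h[p::M] for p in range(M)]       # h[p], h[p+M], h[p+2M], ...
    n_out = (len(x) + M - 1) // M
    y = [0.0] * n_out
    for p, hp in enumerate(phases):
        for m in range(n_out):
            for j, c in enumerate(hp):
                idx = m * M - p - j * M        # x index seen by tap j of phase p
                if 0 <= idx < len(x):
                    y[m] += c * x[idx]
    return y

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(200)]
h = [random.uniform(-1, 1) for _ in range(21)]  # stand-in for the 1005-tap design
M = 20
a = decimate_naive(x, h, M)
b = decimate_polyphase(x, h, M)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
```

So the saving is in multiply *rate*, not coefficient count: the naive form computes len(h) MACs for every 200 MHz input sample and throws 19 of every 20 results away, while the polyphase form spends the same coefficients only on the 10 MHz outputs, which is what makes time-multiplexing a small bank of hardware multipliers attractive on an FPGA.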