21 posts
• Page **1** of **1**

Seems like neural networks would work a lot better in an FPGA than in a computer program.

- outer_space2
**Posts:** 51 **Joined:** Sun Oct 09, 2005 1:05 pm

Neural networks by nature fit well with a parallel architecture, so FPGAs should be an efficient platform.

For large networks it does not look very good: it is much harder to develop for FPGAs than for PCs, and a modern PC is seriously fast. My PC can do 20 GFLOPS; to get that in an FPGA you need 100 single-cycle multipliers running at 200 MHz.

My graphics card does 300 GFLOPS, so there you need 1500 single-cycle multipliers. That would be one expensive FPGA setup, far more expensive than a PC. It would probably use less power than the PC, though.

So: much harder to make and more expensive, but more efficient (if mass-produced).
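The multiplier counts above follow from dividing the required multiply throughput by the clock rate; a quick sanity check of that arithmetic (figures taken from the post):

```python
def multipliers_needed(gflops, clock_mhz):
    """Single-cycle multipliers needed to sustain a given
    operation throughput at a given clock rate."""
    ops_per_second = gflops * 1e9
    clock_hz = clock_mhz * 1e6
    return ops_per_second / clock_hz

print(multipliers_needed(20, 200))   # PC-class throughput: 100 multipliers
print(multipliers_needed(300, 200))  # GPU-class throughput: 1500 multipliers
```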

For small networks that fit completely on a cheap FPGA it makes a lot more sense. Very small networks would probably be better suited to an ARM microcontroller.

- Kristallo
**Posts:** 203 **Joined:** Mon Sep 20, 2004 3:25 am

I don't quite agree with thinking in GFLOPS, because I believe they relate more to processors than to FPGA custom circuits. That's just my own opinion, of course. First, you can use really fast multipliers at relatively high clock speeds (especially in a Virtex, for example - even though it is very, very expensive to me and unfortunately I have never touched one). Second, you can use look-up tables for some floating-point operations. And third, you have plenty of logic that can work truly in parallel at those same high speeds.

In my master's thesis (a few years ago) I developed a single artificial backpropagation neuron that fits in 50% of a Spartan-II with 15,000 gates, and I have been interested in artificial neural networks ever since - especially those built for FPGAs. Now I know that many optimizations could be made to that neuron. Unfortunately I drew it as a schematic in WebPack 4 or 5, and it is not very comfortable to work with that schematic. Maybe some day I will rewrite (and optimize) the design in Verilog...

If you have any additional information (solutions, comparisons, anything), I will be glad to hear it.

Cheers,

Yassen

- Yassen
**Posts:** 70 **Joined:** Thu Jun 08, 2006 6:46 pm

I am very interested to see your project. I am doing a 5000-level paper benchmarking a particular neural network on an FPGA, on a serial computer, and on what may be the best approach, mixed-signal Verilog-AMS. I have to do more research to find out what the most feasible neural network design is for an FPGA; maybe I will analyze several. The serial computer and Verilog-AMS already have these benchmarks and much previous work to look at.

So far this source is the most in-depth on neural networks in FPGAs; still looking, though.

[2] J. Zhu and P. Sutton, "FPGA Implementations of Neural Networks - a Survey of a Decade of Progress," in Field-Programmable Logic and Applications, P. Y. K. Cheung, G. A. Constantinides and J. T. de Sousa, Eds., Berlin: Springer-Verlag, 2003, pp. 1062-1066.

- outer_space2
**Posts:** 51 **Joined:** Sun Oct 09, 2005 1:05 pm

Hi,

Sorry for my very late reply. If you are still interested...

I cannot post the source of that neuron, for 3 main reasons:

1. It is made using schematics on WebPack 5.1 that is pretty old.

2. I implemented the design with another guy who does not like to share it - I'm sorry for that.

3. I don't like the design very much because it uses some 'not very smart' techniques (speaking in terms of contemporary FPGAs) - for example, it implements the activation function using logic gates instead of using the Block RAM of the Spartan-II.

I can post a block diagram of the design, which implements a backpropagation artificial neuron.

(click on 'Request Download Link' at top left and then on 'Download File' at top left, or I can send it to you via e-mail)

In the picture:

The neuron uses 8-bit numbers with the MSB as the sign bit.

- in1, in2 - the two inputs of the neuron;

- w11, w12 - the two weights of the first layer;

- b - the bias;

For clarity, the '2'sC' blocks are 'two's complement' blocks that negate the numbers. They are shown separately, but are in fact folded into the adder blocks.

The activation function here is a tan-sigmoid, but other functions can be implemented as well.
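As a software model of that diagram, a sketch of the forward pass (the 5-fraction-bit scaling below is an assumption - the post only says 8-bit values with a sign bit):

```python
import math

def sat8(x):
    """Saturate to the signed 8-bit range."""
    return max(-128, min(127, x))

def neuron(in1, in2, w11, w12, b, frac_bits=5):
    """Two-input neuron: weighted sum plus bias, then a tan-sigmoid
    activation, all in signed fixed point with `frac_bits` fraction bits."""
    scale = 1 << frac_bits
    # multiply-accumulate: products carry 2*frac_bits fraction bits,
    # so the bias is shifted left to align with them
    acc = in1 * w11 + in2 * w12 + b * scale
    # tanh evaluated on the real value the accumulator represents
    y = math.tanh(acc / (scale * scale))
    return sat8(int(round(y * scale)))

# inputs 1.0, weights 1.0, bias 0 -> tanh(2.0) ~ 0.964 -> 31 in Q2.5
print(neuron(32, 32, 32, 32, 0))
```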

I would like to re-write the entire design to use the Block RAM and the integrated multipliers of a Spartan (probably 3 or 3E) chip, but at present I have absolutely no time to do it.

Another thing I truly wish to try is implementing a real fuzzy controller on a CPLD or FPGA - I think it will fit perfectly. I'm looking for additional information, and need to find some free time to experiment...

Cheers,

Yassen

- Yassen
**Posts:** 70 **Joined:** Thu Jun 08, 2006 6:46 pm

Hello, Yassen!

I'm just starting to work on my Bachelor's thesis, which will implement either cellular automata or a neuron in an FPGA (Spartan-3E).

I was wondering if you could point me towards some good literature on neurons in FPGAs, so I can shorten my research time?

Take care,

Matej

(matej.gutman@gmail.com)

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

Hi, Matej,

Since this is your Bachelor's thesis, you do not need to dig very deep into the theory of neural networks - there are so many kinds, and so many tasks can be solved with their aid...

There is a book I can recommend: "FPGA Implementations of Neural Networks", Amos R. Omondi and Jagath C. Rajapakse (Eds.), Springer, 2006.

Other sources of information are:

- some general information on neural networks, like the "Neural Network Toolbox User's Guide" for Matlab by MathWorks.

- the site "gigapedia.org", where you can find many, many books (note: you have to register there (free) to be able to access the books, and also to perform an "Item search" rather than a "Google search").

But the most important thing I would recommend is not to think about some complex network, but to try to implement something simple, like a single neuron that can solve some real problem.

To implement an artificial neuron in an FPGA you will need:

- some multipliers - use them to multiply the neuron inputs by the weight coefficients (you know you can tune your neuron by varying those weights).

- an adder - use it as the "cell body" - this adder sums all the inputs (already multiplied by the corresponding weights) and, finally, the bias.

- a memory - use the embedded Block RAM in ROM mode to implement the activation function (a sigmoid, for example). The Block RAM acts as a look-up table: the input to the function is the Block RAM address, and the RAM contents are the result.

FPGAs like the Spartan-3E have embedded 18x18 multipliers and Block RAM.
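The Block-RAM-as-LUT idea above can be prototyped in software before filling the ROM: precompute the activation at every address, and the activation becomes a single table read. (The address width, fraction bits and input range below are illustrative assumptions, not values from the post.)

```python
import math

ADDR_BITS = 8               # a 256-entry table fits easily in one Block RAM
FRAC_BITS = 6               # output fixed-point scaling
IN_MIN, IN_MAX = -4.0, 4.0  # sigmoid is nearly saturated outside this range

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# ROM contents: address i maps to sigmoid(x_i), quantized to FRAC_BITS
ROM = []
for i in range(1 << ADDR_BITS):
    x = IN_MIN + (IN_MAX - IN_MIN) * i / ((1 << ADDR_BITS) - 1)
    ROM.append(int(round(sigmoid(x) * (1 << FRAC_BITS))))

def activation(addr):
    """In hardware this is just a Block RAM read in ROM mode."""
    return ROM[addr]

print(activation(0), activation(255))  # near 0 and near full scale
```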

I would suggest implementing 8-bit signed arithmetic (don't forget to use two's complement) to make, for example, a perceptron. Single-layer perceptrons are only capable of learning linearly separable patterns. For more information on perceptrons, start here: http://en.wikipedia.org/wiki/Perceptron.
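A minimal software sketch of that suggestion (integer weights, the perceptron learning rule, and AND as a linearly separable target - all illustrative choices):

```python
# Two-input perceptron with integer weights learning AND.
# Hardware analogue: two multipliers, one adder, one threshold compare.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0, 0, 0

def predict(x1, x2):
    return 1 if x1 * w1 + x2 * w2 + b > 0 else 0

for _ in range(20):                 # a few epochs are plenty for AND
    for (x1, x2), t in samples:
        err = t - predict(x1, x2)   # perceptron update rule, step size 1
        w1 += err * x1
        w2 += err * x2
        b += err

assert all(predict(x1, x2) == t for (x1, x2), t in samples)
```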

Hope this helps.

Good luck

Cheers,

Yassen

- Yassen
**Posts:** 70 **Joined:** Thu Jun 08, 2006 6:46 pm

Hi Yassen!

Thank you for the information. I will look deeper at the literature you've recommended.

At the moment I'm deciding whether to go in the direction of cellular automata or a perceptron. Either way, I will post questions if something becomes unclear to me.

Take care,

Matej

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

Hej Yassen,

I'm starting a project on neural networks using an FPGA, which involves listening to and recording signals in a buffer; if the amplitude reaches a threshold value, it will record the signal with time stamps.

Could you please offer some guidance, as I know almost nothing about FPGAs? And can I get your e-mail address?

cheers

/emre

(emrekush@gmail.com)

- emre.kus
**Posts:** 1 **Joined:** Mon Oct 13, 2008 12:40 pm

Hi, Emre,

1. The best starting points for FPGAs are:

- this site (FPGA4FUN) - study it carefully!

- the book "Digital Systems Design with FPGAs and CPLDs" by Ian Grout (buy it from Amazon or download it from Gigapedia) - a very good introduction to digital design for programmable logic.

- you can also try browsing for books on the topic "Digital System Design with CPLDs" in Gigapedia - a great resource for books.

2. About the Neural networks:

- study the Matlab tutorials about neural networks carefully. I highly recommend you first model your system in Matlab.

- try reading the book "FPGA Implementations of Neural Networks" by Amos R. Omondi and Jagath C. Rajapakse (see my earlier post here!).

3. If you are not so familiar with HDLs, I would recommend trying the Xilinx System Generator, which is used with Matlab. You can build your model in Simulink using solely the blocks from the Xilinx Blockset.

The good: you can compile your model directly into HDL code, and even into a bitstream for the FPGA, and download it to the FPGA while knowing almost nothing about FPGAs.

The bad: some experience with Matlab, digital logic and number systems is needed. And... unfortunately the Xilinx System Generator is a paid product. You can try it for free for 60 days, or use it at your university if they own a license.

Next... you know: start building separate modules step by step, test them thoroughly, try downloading to an FPGA...

Good luck!

Yassen

- Yassen
**Posts:** 70 **Joined:** Thu Jun 08, 2006 6:46 pm

Hi Yassen!

I've decided to make a neural network that will learn a simple logical function provided by the user (AND, OR, ...).

I've managed to simulate it in Matlab using fixed-point arithmetic. Here are the facts regarding the VHDL:

- I've managed to implement a LUT for the tan-sigmoid function, and since it's a simple ROM it does not use many resources.

- I've managed to make some kind of fast 'multiplier-adder' to multiply the inputs by the weights and sum them.

- My implementation (I'm using Xilinx ISE; the target is a Spartan-3E) uses the embedded 18x18 multipliers in the Spartan.

Now, the problem is that my Spartan only has 20 multipliers. I know that is more than enough for a small network that is only learning some simple logical function, but I'll try to expand my problem to 8x8-matrix number recognition. In that case 20 multipliers are not enough.

As far as I can see, I have two possibilities:

1) Use the maximum number of multipliers (in my case 20) to make some kind of component that just multiplies and sums all the numbers (let's call it the 'mulsum' component). If the problem requires a larger neural network, I just feed my 'mulsum' component with different parts of the network.

The good side of this kind of implementation would be simplicity. The bad side would be sequential calculation, which just does not fit the neural network philosophy. Also, there is a clock problem: so far I've instantiated 16 embedded multipliers and my clock went down to 48 MHz! I'm obviously losing some points here.
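The 'mulsum' scheduling in option 1 can be sketched in software: a fixed bank of N multipliers is reused over successive chunks of a dot product wider than the bank (N = 20 and the layer width below are illustrative, not a model of Matej's actual design):

```python
N_MULT = 20   # hardware multipliers available in one pass

def mulsum(inputs, weights):
    """One pass of the shared multiplier bank: at most N_MULT products."""
    assert len(inputs) <= N_MULT
    return sum(i * w for i, w in zip(inputs, weights))

def big_dot(inputs, weights):
    """Time-multiplex the bank over a wide dot product.
    Each chunk costs one pass (several clock cycles in hardware)."""
    acc = 0
    for k in range(0, len(inputs), N_MULT):
        acc += mulsum(inputs[k:k + N_MULT], weights[k:k + N_MULT])
    return acc

# 64 inputs (e.g. an 8x8 pixel matrix) need ceil(64/20) = 4 passes
x = list(range(64))
w = [1] * 64
assert big_dot(x, w) == sum(x)
```

The throughput cost is exactly the number of passes: latency grows linearly with layer width, while multiplier usage stays fixed.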

2) Use a self-made multiplier - a Booth multiplier, for example. Every input/weight pair would have its own.

The downside of this kind of implementation would be my lack of knowledge of multipliers. I don't know how complex they are, how many gates they consume, what the timings of these kinds of multipliers are, ...

Do you have any thoughts on that? Do you have any experience / literature on making multipliers from scratch?
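On Booth multipliers: the algorithm replaces each run of 1-bits in the multiplier with one subtraction (at the start of the run) and one addition (just past its end), so only shifts and adds are needed. A behavioural sketch (radix-2, signed operands):

```python
def booth_multiply(a, b, bits=8):
    """Signed multiply of two `bits`-wide two's-complement numbers
    using radix-2 Booth recoding: only shifts, adds and subtracts."""
    acc = 0
    prev = 0                  # implicit bit to the right of b[0]
    for i in range(bits):
        cur = (b >> i) & 1    # Python ints expose two's-complement bits
        if cur == 0 and prev == 1:
            acc += a << i     # a run of 1s just ended: add a * 2^i
        elif cur == 1 and prev == 0:
            acc -= a << i     # a run of 1s starts: subtract a * 2^i
        prev = cur
    return acc

assert booth_multiply(-13, 7) == -91
assert booth_multiply(127, -128) == -16256
```

In hardware each add/subtract is one step of a shift-and-add datapath, so an 8-bit Booth multiplier takes up to 8 cycles but very little logic - the opposite trade-off from the single-cycle embedded 18x18 multipliers.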

Best regards,

Matej Gutman

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

Hi fpga4fun.com

Since an FPGA's reconfigurability is ideal for implementing an artificial neural network, I would like to know if it's possible to implement a simple ANN using the Xilinx Spartan-3E dev board.

http://www.digilentinc.com/Products/Det ... d=S3EBOARD

So,

Can a Spartan-3E FPGA kit be used to implement a single-neuron ANN?

In general, how many gates are required to implement a simple neural net?

For example, a two-input perceptron?

Thanks

- haastheone
**Posts:** 4 **Joined:** Tue Jan 27, 2009 4:30 pm

Hi!

Yes, it is possible to do a simple NN with that board. I have a similar board (500k gates) and I've managed to implement an NN that recognizes symbols on a 6x5 matrix. Currently it only executes the network, but I'm planning to put learning (the back-propagation rule) into the chip as well (currently I load weights that I've calculated in a Matlab simulation).

Logic cell counts vary A LOT, depending on what you do. Here's what I did:

- implemented a BIG ALU that multiplies and sums 32 numbers (a major speed-up!). Since your chip has only 20 dedicated multipliers, you will burn about 50% of the logic cells if you use that speed-up (for the 12 multipliers that are missing, plus adders).

- all weights are stored in BRAM, so no flip-flops are used! Now I can even scale my NN up a lot!

- everything is written with state machines. They don't use many resources (just a few flip-flops).

Take care,

Matej

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

Thanks Matej for your kind response.

I'm kinda new to implementing ANNs on FPGAs.

So could you kindly direct me to some resources to help me get started, please?

Thank you once again.

- haastheone
**Posts:** 4 **Joined:** Tue Jan 27, 2009 4:30 pm

Hi!

I used a mix of books from the fields of neural networks and VHDL. There is a book that has a few pages dedicated to VHDL and NNs:

Volnei A. Pedroni: Circuit Design with VHDL (MIT Press, 2004)

I didn't really like their circular-buffer implementation, so I made up my own. For a more general approach to neural networks I would recommend:

Raúl Rojas: Neural Networks (A Systematic Introduction)

You can actually download (!) the whole book from his page. This is the link:

http://www.inf.fu-berlin.de/inst/ag-ki/ ... tworksBook

Hope this helps. Also, you can search Google for various short papers on VHDL and NNs. Example:

http://www.ece.neu.edu/students/mfayyaz ... /spain.pdf

You should be aware that this field is quite young and there aren't many people trying to implement NNs (especially in VHDL). I (for example) made several simulations of my NN in Matlab, and once the design was verified, I started coding in VHDL.

Hope this helps,

Matej

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

GURU Matej!

Thanks a million! You're a beacon of light in a dark alley!

They will definitely be of help to me.

From the point of view of someone who has implemented an NN design on an FPGA: does using MATLAB make the NN design faster or easier?

Please, I would like to know more about how you design NNs in MATLAB.

Also, what is the average time frame from design to implementation on the FPGA?

Thank you

- haastheone
**Posts:** 4 **Joined:** Tue Jan 27, 2009 4:30 pm

Hello!

I think that any serious NN design should include Matlab (or Octave, or whatever you like) as well, since debugging an NN on an FPGA is nearly impossible. After you've verified the NN in your favorite tool, you can safely implement it in VHDL - I don't really see any other fast and efficient way.

There are some NN toolboxes available for Matlab, but I use only matrices.

The timeline depends a lot on the type of NN you are implementing, the problem you are solving and, of course, your previous experience (it took me about half a year for simple pattern recognition to start working, for example).

Take care,

Matej

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

Thanks for your kind response.

Please, do you have a 'sample project' that would serve as an introduction/guide to designing in MATLAB and implementing on an FPGA?

Thanks again.

- haastheone
**Posts:** 4 **Joined:** Tue Jan 27, 2009 4:30 pm

Hi neuro,

You said that you calculated the weights in Matlab and then put them in the FPGA.

Were the calculated weights in floating-point format?

In what format did you put them into your Verilog code?

Thank You.

- hani sayuri
**Posts:** 1 **Joined:** Wed Feb 25, 2009 10:11 pm

Hello again, and apologies for the slow response - it's been a hectic month for me.

- I can provide some kind of 'guidance', if you want, but you will have to PM me. Currently I do not have any 'generic' project, since I've created everything from scratch.

- The format I used was integer. I used Octave (same as Matlab, but free). After the NN learned to solve the problem, I extracted the knowledge by printing out the weights and putting them in the proper format in VHDL. Example:

W_HIDD_6_i <= std_logic_vector(to_signed(-13067, 18));

Octave: http://www.gnu.org/software/octave/

Regards,

Matej

- neuro
**Posts:** 7 **Joined:** Fri Jul 04, 2008 9:22 am

I'm working with System Generator from Xilinx, and in my simulations the values are fine, but when I test the VHDL on a Spartan-3E, the negative outputs are incorrect. I'm testing a neuron with tanh.
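One classic cause of "negatives wrong on hardware but fine in simulation" is a two's-complement bit pattern being read back as unsigned somewhere on the output path (this is only a guess at the bug, but it is cheap to check). In software terms:

```python
def as_signed(u, width):
    """Reinterpret a raw `width`-bit pattern as signed two's complement."""
    return u - (1 << width) if u & (1 << (width - 1)) else u

# A tanh output of -1.0 in an 18-bit word with 16 fraction bits:
raw = 0x30000               # the 18-bit pattern for -65536
print(raw)                  # read as unsigned: 196608 (a huge bogus positive)
print(as_signed(raw, 18))   # read as signed: -65536, i.e. -1.0 after scaling
```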

I hope you guys can help me; thanks in advance, and sorry for my English.

My email is zorritozorron@hotmail.com, if any of you wants it

- Gulah
**Posts:** 1 **Joined:** Tue Jul 13, 2010 7:38 pm
