This toolbox contains implementations of what I consider to be the fundamental algorithms
for non-smooth convex optimization of structured functions. These algorithms may not be the fastest
(although they are certainly quite efficient), but they all have a simple implementation in terms
of black boxes (gradients and proximal mappings, given as callbacks). You should, however, have
some knowledge of what a gradient operator and a proximal mapping are in order to
use this toolbox on your own problems. I suggest you have a look at the
"suggested readings" for more information about all this.
It would be better if you published your references.
Hi, Gabriel. Have you published any papers related to this toolbox? Could you please give me some ideas on how to choose the lambda in your toolbox? Thank you.
Thanks very much
Some files are missing, such as Homotopie2.m.
You should update your archive.
It would be better if your zip contained a description file.
Is there any GUI associated with this?
Hi, Gabriel your work is really interesting.
Actually, I am working with unmixing techniques for images and I want to apply sparse positive matrix factorization. I find that your code could be very useful to me, but it is not clear to me how to use it. Could you help me, perhaps by sending a readme file with more details? I am really excited to use these functions.
Some files are missing from your code. Please provide the complete code.
Hello Gabriel, thanks very much for your code. But it would be better to provide a readme file with a more detailed description.
Gabriel Peyré, you are very good!
Just one year, and you have done a lot of work on it!
Completely restructured the toolbox to contain only optimization code.
Updated the licence.
Fixed a few bugs.
Added dictionary learning with missing data and learning of orthogonal dictionaries.
Enhanced dictionary and signature learning.
Improved the dictionary learning method.