As Oleg has already mentioned, measuring code by size leads to bad programming practice. The Cody users therefore learn to use Matlab inefficiently.
The "code-size" system for "measuring" the submitted solutions does not reward:
While some hard-working contributors in Answers try to demonstrate as much good programming practice as possible to improve the overall quality of user-land code, Cody encourages users to become accustomed to a compact programming style that is horrible for larger projects. In consequence, I think that Answers and Cody are antagonists.
Imagine I want to employ some programmers for a serious Matlab project. Would I ask the top ten of Answers, the File Exchange, or Cody?
It is not really clear to me in which order I should solve the problems. Sometimes solving one problem lets me look at the solutions of other problems I solved some days before. This has only limited pedagogical value: I can see that there is a very compact solution for a certain problem, but to learn how it was implemented, I have to submit a valid solution to some other problem. This encourages submitting even bad solutions just to get the wanted information.
Currently the method of measuring code size can easily be confused. What happens if the actual calculations are embedded as a string in the comment section and EVALed? Can we create M-files dynamically which shadow the functions required for testing success? How long will it take until somebody breaks out of the sandbox and injects some SQL code?
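To illustrate the kind of confusion I mean, here is a hypothetical sketch (not an actual submission): the real computation lives inside a string, so a parse-tree-based size metric sees little more than a single EVAL call.

```matlab
% Hypothetical sketch of gaming a code-size metric with EVAL.
% The whole computation is hidden in a string, so the visible
% parse tree stays tiny no matter how much logic the string holds.
function y = solution(n)
  y = eval('sum(1:n)');  % the real work is invisible to the size measurement
end
```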
I already see a growing amount of droll cheating. But I also see an invitation to fraud.
[EDITED] See also: Blogs: Scoring in Cody
I think it's:
EDIT: Now it's getting too tempting to brute-force the solution.
The voting system is bugged. Once you pass with any high score, you can then submit a failing attempt with a low score and see the solutions of those who scored lower than you.
It's good for people who want to improve themselves in a casual manner (e.g. me), but might be bad for people who are obsessed with collecting points.
I found that a lot of the (official) Cody problems after a while turned into 'regexp' exercises and became a bit too esoteric for me.
At first, looking at other people's solutions was good. Seeing other ways to solve problems was interesting, but after a while, the 'best' solution just gets saturated with the same thing. An explanation of what is going on in people's code would be helpful.
Have the 'best' solution be voted on by players. Perhaps only allow a player's first successful submission to be shown (or branches that optimize it further), so that people don't just copy the 'best' and resubmit.
I like Cody as a set of brain teasers that may force you to think about a fairly unusual problem that you may not have encountered before. I would echo what has been said about the rating system. On the surface, it seems very good to be able to write just a few lines of code to do a task, but as Jan has said, it produces code that may be less efficient and that is usually not very readable.
Cody also lets users see what tests their code will encounter if they fail the first time. I feel this does not push the programmer to write sufficiently general functions, and instead allows them to write a function that solves only those SPECIFIC tests. My solution would be to have many different tests and use only 2 or 3 of them for any given check.
OK, so you know that I'm biased, but let me be clear that I'm stating my personal opinion here, not that of my corporate overlords.
I don't see Cody as antithetical or antagonistic to Answers. I see them as complementary, serving different purposes. I think it's important to see Cody for what it is: a game. A fun way to flex your MATLAB muscles.
For me in my professional life, I see Cody as a great resource for those learning MATLAB. A few minutes a day on Cody would, I think, help a new user develop and solidify their skills. Once they are writing serious code for a specific purpose, I'd expect them to use Answers as a resource for getting help with specific difficulties.
I understand, and share, the concern about teaching bad practices. However, I'm not worried about things like checking input types and dimensions, dealing with NaNs, and so on. Again, I don't see Cody as a way to develop serious MATLAB programming or application development skills. I see it as a way to develop MATLAB language skills. It's a game, and part of the rules of the game is defining what the input to this function might be. What does concern me with the scoring is that inefficient or inelegant approaches may score the best. My canonical example for that is the "pyramid number" problem, where sum(1:n) is "better" than n*(n+1)/2. FWIW, the Cody team are aware of these issues.
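To make the pyramid-number example concrete, here is a minimal comparison (assuming the scoring favors whichever expression parses to fewer nodes):

```matlab
% Two ways to compute the n-th pyramid (triangular) number.
n = 1000;
a = sum(1:n);    % scores better in Cody, but builds an n-element vector: O(n)
b = n*(n+1)/2;   % closed form, O(1) time and memory, yet scores worse
isequal(a, b)    % both yield 500500
```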
One specific quibble: as others have noted, there's a lot of regexp, which, to me, is not really why MATLAB is so awesome.
Occasionally I've seen some neat tricks that I may or may not tuck away in some spare neurons for future reference.
My first experience with CODY:
Tried the "sum integers from 1 to 2^n" problem. Example used only a scalar input, but I coded up a vectorized version anyway and submitted it. It wound up in 139th place according to size. Hmmm...
So I looked at the solutions ... WHOOPS! It wouldn't let me. I had to solve another problem just to gain the right to look at the other solutions to the problem I had already solved. OK, a bit of wasted time, but I finally got permission to look at the solutions. Here is what I found among the 184 correct solutions, all ranked by size:
Solutions #1-#93 all use slight variations of sum(1:2^n). Nice, compact, short, ... and horrible programming. Forming the explicit integer array up to 2^n is a bad use of time and resources to solve this problem.
Solution #94 on the list is the first one that goes directly for a closed-form expression instead of forming the integer array explicitly. It is vectorized, but it suffers from calling the exponentiation twice. Well, at least we are getting significantly better than the 1:2^n approach, but we had to wade through several pages of the 1:2^n stuff to get here ... not good.
Solution #106 on the list is the first one to use the pow2 function, but it still calls it twice.
Solution #121 on the list is the first one that calls the exponentiation ^ only once, but it is not vectorized.
My solution, #139 on the list, is the first one that gets the answer via the closed form expression, does the exponentiation only once, and is vectorized. In fact, after scanning all of the solutions mine was the only one that had all three of these features. (One would have gotten to it quicker by scanning the list of 184 correct solutions in reverse order.) I'm pretty sure that it would have wound up in dead last place had I included any argument checking.
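A solution with all three of those features might look like this (my hypothetical reconstruction; the thread does not show the actual submission):

```matlab
% Hypothetical reconstruction: closed form, a single exponentiation,
% and vectorized over n (works elementwise if n is a vector).
function y = sum_to_pow2(n)
  p = pow2(n);           % 2^n, computed exactly once
  y = p .* (p + 1) / 2;  % closed-form sum 1 + 2 + ... + 2^n
end
```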
Well, so what? CODY is being touted as just a game by TMW, not a serious programming aid. On that level, fine, I suppose. And maybe as a teaching aid in MATLAB syntax or as a general introduction to functions you didn't know about, it is OK as well. But as an aid to good programming practice I think it fails, unless there are comments to go along with the solutions.
Q: Does TMW want comments on bad programming practice etc to appear on the leading solutions? I.e., is this really just a game or does TMW want CODY to be used as a legitimate programming aid as well?
I tried Cody when it first came out. At that time it appeared to be about two things: solving problems and writing "good" code. After I submit, I can see whether I solved a problem and, if not, where I failed. I like that, so the problem-solving part is fine. For the "good" code part, Cody gives me a seemingly random number. I don't like playing games in which I do not know what I am being scored on.
Then I saw http://blogs.mathworks.com/desktop/2012/02/06/scoring-in-cody/#comment-8624 and read in the about-Cody section, http://www.mathworks.co.uk/matlabcentral/about/cody/, that I could see the scoring code. Okay, this sounds fun; I may disagree with their definition of "good", but at least I can read the rules.
The rules are basically:
x = length(mtree(answer))
which sounds good, except that most of the work in mtree appears to depend on the compiled, closed-source built-in mtreemex. So the rules are apparently written down, but not allowed to be read. That, to me, is the definition of a stupid game.
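For reference, exercising the published rule looks roughly like this (assumption: the score is the number of nodes in the parse tree, which is what length reports for an mtree object; the actual node counting happens inside mtreemex):

```matlab
% Sketch of the documented scoring rule: size = parse-tree node count.
t = mtree('y = n*(n+1)/2;');  % parse a candidate solution as text
x = length(t)                 % the Cody "size" of that solution
```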
I have found myself in the past few days exploring Cody rather than providing answers on Answers. Probably not a great loss to Answers, but I do feel a little guilty for not providing help where I can.
I like seeing Cody solutions to problems that are elegant, that I would not have thought of. Kind of a mini contest primer.
Now if I can just hunker down and improve my regexp() skills :(
Once upon a time there were occasional golfing-contest problems on CSSM. I found those fun, inspirational, and occasionally educational. Nothing there was hidden behind fancy locks and organized scoring, yet it worked just as well, or better, as far as learning is concerned.
Cody seems a total waste.
I just wrote a longish answer but lost it when the site went down briefly. The gist was that I think Cody is entertaining, a bit educational, but not to be taken too seriously.
However, I've noticed that solutions are now appearing that use evalin to hack the answer. This is so boring! I think it's vital that eval, evalin and assignin are banned. 'Solutions' that use these spoil the fun.
It would be helpful to know which toolboxes are available to solution code. I'm sure I saw a solution that used a function from the Statistics Toolbox (which I don't have), yet functions from the Image Processing Toolbox aren't found. It should be all or none, or perhaps a definite list with some rationale behind it.