What is missing from MATLAB #2 - the next decade edition
Rik
on 31 Jul 2020
Meta threads have a tendency to grow large. This has happened several times before (the wishlist threads #1 #2 #3 #4 #5, and 'What frustrates you about MATLAB?' #1 and #2).
No wonder that a thread from early 2011 has also kept growing. After just under a decade there are (at time of writing) 119 answers, making the page slow to load and navigate (especially on mobile). So after a friendly nudge, here is a new thread for the things that are missing from Matlab.
Same question: are there things you think should be possible in Matlab, but aren't? What things are possible with software packages similar to Matlab that Matlab would benefit from? (note that you can also submit an enhancement request through support, although I suspect they will be monitoring activity on this thread as well)
What should you post where?
Next Gen threads (#1): features that would break compatibility with previous versions, but would be nice to have
@anyone posting a new thread when the last one gets too large (about 50 answers seems a reasonable limit per thread), please update this list in all last threads. (if you don't have editing privileges, just post a comment asking someone to do the edit)
Might have been requested before: do-while control structures.
There are situations where it makes sense to perform the loop's action before testing its condition, and in those cases a do-while structure is more legible than a plain while.
E.g. recently I had to pick an element from a list that matched some condition. So I need to pick an element, then check whether it respects the condition, and if needed pick a different element. (Yeah, I know I could generate a reduced list containing only valid elements and pick from that; but that's beside my point here.)
So currently I have two options:
a) repeat the picking instruction:
some_list = 1:10;
i = randi(numel(some_list));
while ~is_valid(some_list(i))
i = randi(numel(some_list));
end
b) introduce some variable
some_list = 1:10;
can_continue = true;
while can_continue
i = randi(numel(some_list));
can_continue = ~is_valid(some_list(i));
end
In my opinion, neither of those is elegant. They can be hard to read if the loop content is particularly large or the condition complex; and code repetition makes maintenance harder.
I know it's just syntactic sugar, but it also shouldn't be super hard to implement (in a compiled language it could be done through some pre-compile-time macro).
And finally, I'm not the only one who wishes for that control structure.
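For reference, the closest thing available today is probably the while-true/break idiom; a minimal sketch (is_valid is a placeholder for whatever the real test is):
some_list = 1:10;
is_valid = @(x) mod(x,2) == 0;   % placeholder condition, just for the example
while true
    i = randi(numel(some_list)); % body runs at least once
    if is_valid(some_list(i))
        break                    % condition checked after the body, do-while style
    end
end
It still isn't a real do-while, but at least the picking instruction appears only once.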
I'm sure I mentioned this at some point... but it would be useful if you could substitute a logical index for more than one dimension.
For example:
img = imread('flamingos.jpg');
intens = rgb2gray(img);
mask = intens > 128;
newimg = img;
At this point, we would like to be able to use something like
%newimg(mask, 3) = 255 - img(mask, 3);
but that is not going to work, and instead we need to proceed with something like
temp = img(:,:,3);
temp(mask) = 255 - temp(mask);
newimg(:,:,3) = temp;
subplot(2,1,1); image(img); title('original');
subplot(2,1,2); image(newimg); title('modified')
Now, there is a way to do it without a temporary variable, but it is ugly...
img = imread('flamingos.jpg');
intens = rgb2gray(img);
mask = intens > 128;
newimg2 = img;
newimg2(find(mask) + 2 * numel(mask)) = 255 - img(find(mask) + 2 * numel(mask));
figure;
subplot(2,1,1); image(newimg2); title('modified -- indexing')
isequal(newimg, newimg2)
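If one does this often, the index arithmetic can be hidden in a tiny helper; a rough sketch (mapchannel is a made-up name, not a toolbox function):
function img = mapchannel(img, mask, ch, fcn)
% Apply FCN to the pixels of channel CH of IMG selected by the 2-D logical MASK.
idx = find(mask) + (ch-1) * numel(mask);  % linear indices into the ch-th page
img(idx) = fcn(img(idx));
end
With that, the example above becomes newimg3 = mapchannel(img, mask, 3, @(v) 255 - v); which is at least readable, but native support for logical indexing per dimension would still be nicer.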
An AI copilot feature/option in the IDE, where either MATLAB has its own model (optimal) with adjustable privacy settings that users can control (to avoid possible GDPR etc. issues), and/or the capacity to configure the copilot to work with an LLM service of our choice (possibly also run locally).
In the current state of affairs, continuing to use an IDE with no copilot integration is unsustainable. Not for the future, but as of yesterday. I already feel quite anxious about falling behind compared to Python users in terms of copilot language-specific support. There is a ton of Python code out there that LLMs are trained on, and in my experience existing LLMs are worse in many other languages, including MATLAB. The direction copilots seem to be heading is different "specialised experts", and finetuning different models to specific languages seems to have potential.
It would be great if MathWorks finetuned an LLM, which could, for example, either be some version of GPT-3.5/4 that we could then use through an API, or a version of codellama that is (almost) open source and that MathWorks itself could provide through an API (as an extra toolbox, or subscription, or whatnot). Or both options (why bet on one horse only?), or something else entirely. Also, it would be great if people who write MATLAB using other environments like neovim or VS Code had access to that model too. Even better if it is available for people to also run locally, like the codellama derivatives.
Finetuning a model should not be a big deal for a big company like MathWorks; people with much smaller budgets do this sort of thing nowadays. MathWorks has access to tons of MATLAB code they could use, and I cannot see anything putting any great obstacle in the way apart from intention. There are a lot of companies and startups right now doing this sort of thing for other languages. But if MathWorks does not do it for MATLAB, I don't know if anybody else will bother with it. So please consider offering something in that direction, because very soon writing code without the option of copilot assistance will hardly be conceivable.
When defining Abstract methods on a class, I would like to be able to fix the arguments on the abstract class.
Something like this would be useful:
classdef AbstractSuperClass
methods (Abstract)
function AbstractMethod(self,input1,input2)
arguments
self
input1 (1,1) double
input2 (1,1) double
end
% no function body because it is abstract
end
end
end
Right now, I always end up writing two versions of the same method to avoid the need to copy the arguments validation block into the subclass:
classdef AbstractSuperClass
methods
function Method(self,input1,input2)
arguments
self
input1 (1,1) double
input2 (1,1) double
end
abstract_version_of_method(self,input1,input2)
end
end
methods (Abstract, Access=protected)
abstract_version_of_method(self,input1,input2)
end
end
but this feels wrong, and it results in programming errors because I forget what the input signature of the method is. The syntax above would be more elegant and would allow checking whether the input signature of the method's implementation follows the definition in the abstract class.
With the new arguments (Output) blocks, I would even be able to fix the output types of my abstract function.
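For completeness, the subclass side of that workaround looks roughly like this (a sketch; the names match the snippet above):
classdef ConcreteSubClass < AbstractSuperClass
    methods (Access=protected)
        function abstract_version_of_method(self,input1,input2)
            % the real work goes here; the argument validation already ran in Method()
            disp(input1 + input2)
        end
    end
end
The validation lives once in the superclass, but the price is the extra protected method and the risk of the two signatures drifting apart.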
I wish clearvars had an optional "do-nothing" flag, a la the /L switch in CMD XCOPY, to display the variables that would be cleared with the given variables list... it would let one confirm a wildcard expression didn't accidentally wipe out something one wished to keep; particularly with the -except clause.
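As a stopgap, running who with the same wildcard first gives a preview of what would go (a minimal sketch; tmp_* is just an example pattern, and it doesn't cover the -except case):
who tmp_*          % preview: the variables that "clearvars tmp_*" would remove
% clearvars tmp_*  % run only once the preview looks right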
Maybe there's a way I've not found, but I wish arrayfun and cellfun had automagic argument expansion, so one could pass other arguments to the anonymous function without having to replicate them manually to match the others. This would add greatly to the convenience of using either. The present working example happens to be building a set of target range expressions to stuff into a cell array that will be written to Excel, although that really has nothing to do with the request/enhancement; it just happens that at the time I was frustrated that there's no way to pass constants to the anonymous functions to allow them to be generalized.
Example:
Building a variably-sized workbook where it is desirable that the sums over a section of the sheet be formulas rather than the current fixed constant value of the data, as the sheet will subsequently be modified by hand; the tool is to build the original working pattern by combining various data sources and arranging them for the end user...
It boils down to the point at which one has a set of row range indices and a set of columns over which to build the formula and insert into the cell array which is subsequently written to the workbook. That code looks something like
% anonymous function that builds Excel =SUM(r1:r2) expression for given row range, column
xlsSumRange=@(r1,r2,c)strcat('=SUM(',xlsAddr(r1,c),':',xlsAddr(r2,c),')');
% typical use
col=xlsCol2Col('G'); % another internal translation layer of local array position from Excel column
cOut(isTotal,col)=arrayfun(@(r1,r2,c)xlsSumRange(r1,r2,c),RS1,RS2,repmat('G',numel(RS)-1,1),'UniformOutput',0);
col=xlsCol2Col('K'); % another internal translation layer of local array position from Excel column
cOut(isTotal,col)=arrayfun(@(r1,r2,c)xlsSumRange(r1,r2,c),RS1,RS2,repmat('K',numel(RS)-1,1),'UniformOutput',0);
This works, but the "repmat('G',numel(RS)-1,1)" expression needed is really inconvenient and clutters up the code's legibility greatly. I've had any number of similar cases in the past where an anonymous function is useful shorthand, but then using it more than once requires a workaround like the above.
The alternative is to redefine the anonymous function dynamically and embed the constant inside it for the given invocation.
To visualize, example output from the above for one invocation (the call with column 'G') looks like
K>> arrayfun(@(r1,r2,c)xlsSumRange(r1,r2,c),RS1,RS2,repmat('G',numel(RS)-1,1),'UniformOutput',0)
ans =
3×1 cell array
{'=SUM($G$3:$G$17)' }
{'=SUM($G$22:$G$372)' }
{'=SUM($G$377:$G$414)'}
K>>
The above could be in a loop over the number of columns instead of using the explicit columns, but was part of still rearranging the output file structure at the time...and, haven't taken the time to clean it all up yet...
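For what it's worth, the "embed the constant" alternative mentioned above can be done without redefining xlsSumRange, by capturing the column inside the anonymous function passed to arrayfun (a sketch reusing the names above):
col = xlsCol2Col('G');
cOut(isTotal,col) = arrayfun(@(r1,r2) xlsSumRange(r1,r2,'G'), RS1, RS2, 'UniformOutput', false);
The 'G' is baked into the handle, so no repmat of the constant is needed; the downside is writing a new handle per column, which is exactly the kind of clutter automatic argument expansion would avoid.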
read/writetable from/to Excel workbooks should be able to return/write the comment (now called something else, I forget what) field and the formula associated with a cell, as well as the value. While one can write COM code to do so, it then takes either using COM entirely (which can be a lot of work) or using both the high-level interface and COM on the same file; neither option is ideal. Sure, one could forego Excel entirely since we have MATLAB, but unfortunately we can't always have what we want in that regard.
One can mung on the old xlsread/write routines and add stuff into them, but that's also kinda' klunky, and while more convenient in some ways than writing all the COM totally from scratch, it isn't nearly as convenient as a higher-level interface alongside the other current toolset would be.
There's a hook to a function call in xlsread, although I was never able to get it to work to do either of the above, because the Excel COM object isn't exposed to that function, so it can't make direct access from there.
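For anyone stuck needing this today, the raw COM route looks roughly like the sketch below (assumes Excel is installed; the file name and cell address are placeholders, and the legacy cell comment is reached through Range.Comment):
e = actxserver('Excel.Application');
wb = e.Workbooks.Open(fullfile(pwd,'book.xlsx'));
rng = wb.Sheets.Item(1).Range('B2');
f = rng.Formula;    % the formula string, e.g. '=SUM(B3:B17)'
c = rng.Comment;    % legacy comment object, or empty if the cell has none
if ~isempty(c)
    txt = c.Text(); % the comment/note text
end
wb.Close(false);
e.Quit;
delete(e);
It works, but having to spin up a COM server for two cell properties is exactly the kind of thing a readtable/writetable option should cover.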
"matlab coder" toolbox can't produce only one C/C++ file, refer to:
matlab AXES PROPERTIES missing:
I'm going to be a weirdo and say that I would like it if imshow() supported IA and RGBA inputs. I'm probably the only person who ever uses RGBA workflows in MATLAB, but these are wishes we're talking about. I might as well wish for a function that gives me a sandwich.
I already made my own tool for MIMT, but it kind of drives me up the wall to not be able to use all the conveniences I created when answering questions on the forum. I feel like I'm defeating myself.
Probably it breaks compatibility, but it would be nice if the functions/operators svd, *, ', .' handled n-d arrays the way pagesvd, pagemtimes, ... do, followed by an appropriate reshape.
Extend the pagexxx family with matrix left/right division, lu, qr, chol, eig, etc...
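To illustrate with what already exists, pagemtimes already does the page-wise product; the wish is for the plain operators (and svd, lu, qr, ...) to behave the same way on n-d inputs. A quick sketch:
A = rand(3,3,5);
B = rand(3,4,5);
C = pagemtimes(A,B);  % 3x4x5: one matrix product per page
% the wish: A*B, svd(A), A', A.' doing the page-wise equivalent directly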
Being able to access the properties/methods/fields of an object/struct even after indexing into one would be a big deal.
Currently, if you have a struct A with fields size and count, you can type A. and then Tab to show the options for the fields. But if you type A(1). you can no longer Tab to show the options for the fields. Same with an object. So the discoverability is gone.
I want to be able to index into an object/struct and still have the available fields/properties/methods for that parent show up.
Maintain dimensions when getting a field of a class.
If Data is a 3x2x4 array of objects/structs with a field "foo", then Data.foo should be a 3x2x4 array of the type of "foo", not a 1x24 array (or is it the other way around?).
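Today the usual workaround is to collect the comma-separated list and reshape it back, roughly (assuming foo holds scalar values):
% Data: a 3x2x4 struct array whose field foo holds scalars
foo = reshape([Data.foo], size(Data));  % back to 3x2x4 instead of 1x24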
Fix the semantics of "clear".
"clear all" does not clear all, but "clear classes" does. Go figure. I want a way to clear classes and only classes.
In the code analyzer, there is no easy way to find what #ok directive controls a particular flagged issue.
(And no overall list of the available #ok directives, at least that I can find. Even the enable/disable messages in preferences don't show them.)
I would like imagesc(C) to work with 3 dimensional arrays.
Your TypicalX option for fminunc() (and others?) should be extended beyond usage for gradient evaluation only.
I have used normalized variables for optimization for a long time. By default, I do start with [1,1,...,1] as you do, but for gradient evaluation only. There are many areas where the spread of variable magnitudes is vast (radioelectronics: kiloohms to nanofarads).
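For context, this is the option in question; a minimal sketch with made-up magnitudes on the kiloohm and nanofarad scales:
x0 = [1e3, 1e-9];  % one kiloohm-scale and one nanofarad-scale variable
opts = optimoptions('fminunc', 'Algorithm','quasi-newton', ...
    'TypicalX',[1e3, 1e-9]);  % currently only used to scale finite-difference gradients
x = fminunc(@(x) (x(1)-2e3)^2 + (1e9*x(2)-5)^2, x0, opts);
The request is for that scaling information to influence more of the solver than just the gradient estimate.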
Currently evalin() accepts a context argument (such as 'caller' or 'base' or symengine()), and a character vector that is the command to be executed.
This runs into the same horrors as using eval() .
It is difficult to convince people to give up using eval(); they tend to think that their particular use case makes it necessary (it rarely is!). And the cause is Not Helped At All when we have to say "well, it is true that the closely-related evalin() is needed sometimes".
It would therefore help if there was a way to do something like evalin() but with a function handle. For example,
evalin('caller', 'whos')
might become something like
evalfcnin('caller', @whos)
To be honest, I do not know how this would work in practice, considering that functions need their own workspace to execute in. Maybe some of the technology behind shared variables and nested functions could be used, so that the function could have read/write access to the designated function workspace and yet still be able to have its own private variables.
Sometimes I want to be able to get a look at persistent variables in a function that is not the caller; I have never found a hint that that is possible. But the existence of persistent variables that go away when you "clear" the function implies that each parsed function already has some kind of workspace associated with it. And sometimes it seems to me that it would be useful to be able to get a reference to that workspace and tell a function to execute using that workspace. Something that might look like
evalinWorkSpace( matlab.workspace.getinstance.caller, @whos)
Having workspaces as accessible objects leads to some interesting possibilities about saving and restoring state, such as for the purpose of recovering from power failures; it also heads towards possibilities such as co-routines.