Thread Subject:
How do I compress an array of floating numbers in Matlab?

Subject: How do I compress an array of floating numbers in Matlab?

From: Luna Moon

Date: 2 Apr, 2010 19:50:45

Message: 1 of 17

Hi all,

I have a vector of real numbers in Matlab. How do I compress them? Of
course this has to be lossless, since I need to be able to recover
them.

The goal is to study the Shannon rate and entropy of these real
numbers, so I decided to compress them and see what compression
ratio I can achieve.

I don't need to write the result into compressed files, so those
headers, etc. are just overhead that would skew my entropy
calculation... I just need a bare-bones compression ratio...

Any pointers?

Thanks a lot!

Subject: How do I compress an array of floating numbers in Matlab?

From: John

Date: 2 Apr, 2010 20:06:37

Message: 2 of 17

On Apr 2, 3:50 pm, Luna Moon <lunamoonm...@gmail.com> wrote:
> Hi all,
>
> I have a vector of real numbers in Matlab. How do I compress them?  Of
> course this has to be lossless, since I need to be able to recover
> them.
>
> The goal is to study the Shannon rate and entropy of these real
> numbers, so I decide to compress them and see how much compression
> ratio I can have.
>
> I don't need to write the result into compressed files, so those
> headers, etc. are just overhead for me which affect me calculating the
> Entropy... so I just need a bare version of the compress ratio...
>
> Any pointers?
>
> Thanks a lot!

Consider the array of numbers in binary form. Rearrange the bits so
all the ones are sequential, and do the same for the zeros. The number
of ones followed by the number of zeros is your compressed file.

John

Subject: How do I compress an array of floating numbers in Matlab?

From: Roger Stafford

Date: 2 Apr, 2010 20:32:06

Message: 3 of 17

Luna Moon <lunamoonmoon@gmail.com> wrote in message <205a603e-cc38-4088-8d39-5d5b8464abf7@d34g2000vbl.googlegroups.com>...
> Hi all,
>
> I have a vector of real numbers in Matlab. How do I compress them? Of
> course this has to be lossless, since I need to be able to recover
> them.
>
> The goal is to study the Shannon rate and entropy of these real
> numbers, so I decide to compress them and see how much compression
> ratio I can have.
>
> I don't need to write the result into compressed files, so those
> headers, etc. are just overhead for me which affect me calculating the
> Entropy... so I just need a bare version of the compress ratio...
>
> Any pointers?
>
> Thanks a lot!

  Unless your vector has many repetitions or consists of quantities with many trailing zeros in their binary floating point form (or is of astronomically large size), I would not expect lossless compression to have much success. Usually the 53-bit significands of a collection of non-integer floating point numbers are mostly different, and the only area where compression is likely to succeed is their 11 bits of exponent, which tend to be concentrated in a limited part of the 2048 possibilities.
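
As an illustration of that bit-level structure, the exponent and significand fields can be pulled apart with typecast and bit operations (a minimal sketch; the randn data is only a placeholder):

    x = randn(1, 1e5);                              % placeholder data
    b = typecast(x, 'uint64');                      % raw IEEE 754 bit patterns of the doubles
    expo = bitand(bitshift(b, -52), 2047);          % 11-bit biased exponent of each value
    frac = bitand(b, bitshift(uint64(1), 52) - 1);  % 52-bit significand field of each value
    fprintf('distinct exponents:    %d of %d samples\n', numel(unique(expo)), numel(x));
    fprintf('distinct significands: %d of %d samples\n', numel(unique(frac)), numel(x));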

Roger Stafford

Subject: How do I compress an array of floating numbers in Matlab?

From: robert bristow-johnson

Date: 3 Apr, 2010 22:59:24

Message: 4 of 17

On Apr 2, 3:50 pm, Luna Moon <lunamoonm...@gmail.com> wrote:
> Hi all,
>
> I have a vector of real numbers in Matlab. How do I compress them?  Of
> course this has to be lossless, since I need to be able to recover
> them.
>
> The goal is to study the Shannon rate and entropy of these real
> numbers, so I decide to compress them and see how much compression
> ratio I can have.
>
> I don't need to write the result into compressed files, so those
> headers, etc. are just overhead for me which affect me calculating the
> Entropy... so I just need a bare version of the compress ratio...
>
> Any pointers?
>

do you know about Huffman coding? it's in Wikipedia.

if the floating-point numbers are sorta random, not derived from a
"normal-looking" signal, there is not much you can do to compress. if
the range of the numbers is limited (at least probabilistically) then
Huffman coding might help a little. but i tend to think that it would
be only the exponent bits that would be compressible, and there
is not much to gain, since the exponent bits are a small portion of
the floating-point word. the mantissa bits will look pretty random,
and there is not much a lossless scheme can do about that.
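
A rough sketch of Huffman-coding just the sign-and-exponent field, in case it is worth trying (this assumes the Communications Toolbox for huffmandict/huffmanenco, and the randn data is only a stand-in):

    x = randn(1, 1e5);                                 % stand-in data
    e = double(bitshift(typecast(x, 'uint64'), -52));  % top 12 bits: sign + exponent
    syms = unique(e);                                  % observed symbols
    p = histc(e, syms) / numel(e);                     % empirical symbol probabilities
    dict = huffmandict(syms, p);                       % Communications Toolbox
    code = huffmanenco(e, dict);                       % encode the symbol stream
    fprintf('%.2f bits/sample for this field, versus 12 bits raw\n', numel(code) / numel(x));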

if the signal is reasonably bandlimited, you can use LPC, predict the
next samples (from the previous N samples), and encode the
*difference* between the predicted value and what you really have. if
the prediction is good, the difference should be small and the number
of bits needed to represent it should be small (and you might Huffman
code those).

i know that for audio, lossless compression doesn't save a lot of
space. it might save maybe 50%.


> Thanks a lot!

FWIW,

r b-j

Subject: How do I compress an array of floating numbers in Matlab?

From: Tim Wescott

Date: 3 Apr, 2010 23:05:36

Message: 5 of 17

Luna Moon wrote:
> Hi all,
>
> I have a vector of real numbers in Matlab. How do I compress them? Of
> course this has to be lossless, since I need to be able to recover
> them.
>
> The goal is to study the Shannon rate and entropy of these real
> numbers, so I decide to compress them and see how much compression
> ratio I can have.
>
> I don't need to write the result into compressed files, so those
> headers, etc. are just overhead for me which affect me calculating the
> Entropy... so I just need a bare version of the compress ratio...
>
> Any pointers?

Find another approach to getting an answer, maybe.

First, most lossless compression algorithms are designed for things like
text, executables, and databases -- they don't do well with floating
point numbers, tending to see them as "random" even when they're not.

Second, if you measure a bunch of meaningless white noise and put the
result into floating point numbers, then put them into a lossless
algorithm that _can_ handle floating point, it's not going to compress
at all, because the algorithm can't distinguish between white noise and
a signal that's chock-full of information. In effect you'll have
_given_ it a signal full of information, in great detail, about the noise.

I think you're leading yourself down the garden path.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com

Subject: How do I compress an array of floating numbers in Matlab?

From: Mark Shore

Date: 4 Apr, 2010 00:29:05

Message: 6 of 17

Luna Moon <lunamoonmoon@gmail.com> wrote in message <205a603e-cc38-4088-8d39-5d5b8464abf7@d34g2000vbl.googlegroups.com>...
> Hi all,
>
> I have a vector of real numbers in Matlab. How do I compress them? Of
> course this has to be lossless, since I need to be able to recover
> them.
>
> The goal is to study the Shannon rate and entropy of these real
> numbers, so I decide to compress them and see how much compression
> ratio I can have.
>
> I don't need to write the result into compressed files, so those
> headers, etc. are just overhead for me which affect me calculating the
> Entropy... so I just need a bare version of the compress ratio...
>
> Any pointers?
>
> Thanks a lot!

An exceedingly simple test involving little or no effort on your part would be to take representative binary files and compress them with off-the-shelf utilities such as WinZip or 7-Zip.

This would certainly give you some idea of what level of lossless compression you can expect from reasonably well-tested and mature algorithms before you try to adapt your own.

Subject: How do I compress an array of floating numbers in Matlab?

From: Luna Moon

Date: 4 Apr, 2010 15:27:23

Message: 7 of 17

On Apr 3, 8:29 pm, "Mark Shore" <msh...@magmageosciences.ca> wrote:
> Luna Moon <lunamoonm...@gmail.com> wrote in message <205a603e-cc38-4088-8d39-5d5b8464a...@d34g2000vbl.googlegroups.com>...
> > Hi all,
>
> > I have a vector of real numbers in Matlab. How do I compress them?  Of
> > course this has to be lossless, since I need to be able to recover
> > them.
>
> > The goal is to study the Shannon rate and entropy of these real
> > numbers, so I decide to compress them and see how much compression
> > ratio I can have.
>
> > I don't need to write the result into compressed files, so those
> > headers, etc. are just overhead for me which affect me calculating the
> > Entropy... so I just need a bare version of the compress ratio...
>
> > Any pointers?
>
> > Thanks a lot!
>
> An exceeding simple test involving little or no effort on your part would be to take representative binary files and compress them with off-the-shelf utilities such as WinZip or 7-Zip.
>
> This would certainly give you some idea of what level of lossless compression you can expect from reasonably well-tested and mature algorithms before you try to adapt your own.


Thanks a lot folks.

Please remember the goal is not to compress the floating numbers per
se. It's actually to measure the entropy of the data.

I don't really care how much compression it can maximally achieve.

Using WinZip is a great idea, however, I am looking for

(1) a command inside Matlab;
(2) a bare-bone compression, without the header info, etc. in Winzip,
because those are overheads in terms of measuring entropy...

Any more thoughts?

Thank you!

Subject: How do I compress an array of floating numbers in Matlab?

From: robert bristow-johnson

Date: 4 Apr, 2010 16:06:01

Message: 8 of 17

On Apr 4, 11:27 am, Luna Moon <lunamoonm...@gmail.com> wrote:
> On Apr 3, 8:29 pm, "Mark Shore" <msh...@magmageosciences.ca> wrote:
>
>
>
> > Luna Moon <lunamoonm...@gmail.com> wrote in message <205a603e-cc38-4088-8d39-5d5b8464a...@d34g2000vbl.googlegroups.com>...
> > > Hi all,
>
> > > I have a vector of real numbers in Matlab. How do I compress them?  Of
> > > course this has to be lossless, since I need to be able to recover
> > > them.
>
> > > The goal is to study the Shannon rate and entropy of these real
> > > numbers, so I decide to compress them and see how much compression
> > > ratio I can have.
>
> > > I don't need to write the result into compressed files, so those
> > > headers, etc. are just overhead for me which affect me calculating the
> > > Entropy... so I just need a bare version of the compress ratio...
>
> > > Any pointers?
>
> > > Thanks a lot!
>
> > An exceeding simple test involving little or no effort on your part would be to take representative binary files and compress them with off-the-shelf utilities such as WinZip or 7-Zip.
>
> > This would certainly give you some idea of what level of lossless compression you can expect from reasonably well-tested and mature algorithms before you try to adapt your own.
>
> Thanks a lot folks.
>
> Please remember the goal is not to compress the floating numbers per
> se. It's actually to measure the entropy of the data.
>
> I don't really care how much compression it can maximally achieve.
>
> Using WinZip is a great idea, however, I am looking for
>
> (1) a command inside Matlab;
> (2) a bare-bone compression, without the header info, etc. in Winzip,
> because those are overheads in terms of measuring entropy...
>
> Any more thoughts?

the entropy is the mean number of bits of information (bits as in
information theory) of the messages. certainly the least significant
bits of the words will be nearly completely random. let's say each
word is N bits (N is likely 32 or 64) and you decide that the bottom M
bits is completely random crap (it might not be, if you have a lot of
simple fractions like 1/2 and 1/4 etc).

so make a histogram. ignoring the least significant M bits (you'll
have to decide what M is) count the number of occurrences of every
sample value. for a specific value V, the information content of the
top N-M bits is -log2(p(V)), where p(V) is the probability of any
sample taking on value V. you get that from the frequency of
occurrence: divide the number of occurrences of V by the total number
of samples. the entropy is the mean information content:


    H  =  SUM over V of  p(V) * (-log2(p(V)))  +  M

you can try setting M to zero, but you'll have a helluva lot of bins
in your histogram.

in fact, what i would do is compute this entropy for a variety of
different values for M (that your memory allows for) and see if the
entropy changes much.

if you have a lot of simple fractions in your mantissa, then my
assumption that the lower M bits are fully randomly scrambled is
wrong. but you can't practically have 2^32 bins for
your histogram, so something must be done. maybe split the upper and
lower portions of the floating-point word into two messages (that we
hope are independent) and run a histogram on both.

i have never done bit-masking in MATLAB. you'll have to figger out
how to do that.
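
A sketch of that recipe, with the bit-masking done via typecast and bitshift (the choice of M and the randn data are arbitrary here):

    x = randn(1, 1e5);                        % placeholder data
    N = 64;                                   % bits per double
    M = 40;                                   % assume the bottom M bits are pure noise
    v = bitshift(typecast(x, 'uint64'), -M);  % top N-M bits of each word become the symbol
    [syms, ~, idx] = unique(v);               % idx maps each sample to its symbol
    p = accumarray(idx(:), 1) / numel(v);     % empirical p(V)
    H = -sum(p .* log2(p)) + M;               % entropy estimate; +M accounts for the masked bits
    fprintf('%d symbols, estimated entropy %.1f of %d bits per sample\n', numel(syms), H, N);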

r b-j

Subject: How do I compress an array of floating numbers in Matlab?

From: Glen Herrmannsfeldt

Date: 4 Apr, 2010 17:58:29

Message: 9 of 17

In comp.dsp Luna Moon <lunamoonmoon@gmail.com> wrote:
(snip)
 
> Please remember the goal is not to compress the floating numbers per
> se. It's actually to measure the entropy of the data.
 
> I don't really care how much compression it can maximally achieve.

(snip)

If you can find the (low) entropy then you can compress the data.
The hard part, usually, is finding it. For an array of floating
point numbers it seems, most likely, that you would find it in
terms of repetitions. That is, other places in the file with exactly
the same value. Other than that, it will be hard to find unless
you know the source.

Say, for example, you have a file of sin(n) (in radians) for integer n
from zero to (some large number). Now, that has fairly low entropy
with the assumption that you have a good sin() routine available, but
it will be difficult for a program that doesn't know that the file
is likely to have sin(n) in it to find it.

If someone tries a Fourier transform on the data then they might
discover the pattern. As the result might not be exact, one would
code an approximation and then list the (much smaller) difference
between the two data sets.

Continuing, the output of a linear-congruential random number
generator is also easy to predict if you know the constants of
the generator. If you don't, and you have a big enough sample,
then you can likely find the pattern. (If you have the bits
exactly, though I am not sure how long it would take.)

If you have, say, sin() of the linear-congruential number
stream then it is likely much more difficult.

-- glen

Subject: How do I compress an array of floating numbers in Matlab?

From: Mark Shore

Date: 4 Apr, 2010 19:20:05

Message: 10 of 17

Luna Moon <lunamoonmoon@gmail.com> wrote in message <f03d83a5-9b33-4b0a-95ab-5e962650ebee@v16g2000vba.googlegroups.com>...
> On Apr 3, 8:29 pm, "Mark Shore" <msh...@magmageosciences.ca> wrote:
> > Luna Moon <lunamoonm...@gmail.com> wrote in message <205a603e-cc38-4088-8d39-5d5b8464a...@d34g2000vbl.googlegroups.com>...
> > > Hi all,
> >
> > > I have a vector of real numbers in Matlab. How do I compress them?  Of
> > > course this has to be lossless, since I need to be able to recover
> > > them.
> >
> > > The goal is to study the Shannon rate and entropy of these real
> > > numbers, so I decide to compress them and see how much compression
> > > ratio I can have.
> >
> > > I don't need to write the result into compressed files, so those
> > > headers, etc. are just overhead for me which affect me calculating the
> > > Entropy... so I just need a bare version of the compress ratio...
> >
> > > Any pointers?
> >
> > > Thanks a lot!
> >
> > An exceeding simple test involving little or no effort on your part would be to take representative binary files and compress them with off-the-shelf utilities such as WinZip or 7-Zip.
> >
> > This would certainly give you some idea of what level of lossless compression you can expect from reasonably well-tested and mature algorithms before you try to adapt your own.
>
>
> Thanks a lot folks.
>
> Please remember the goal is not to compress the floating numbers per
> se. It's actually to measure the entropy of the data.
>
> I don't really care how much compression it can maximally achieve.
>
> Using WinZip is a great idea, however, I am looking for
>
> (1) a command inside Matlab;
> (2) a bare-bone compression, without the header info, etc. in Winzip,
> because those are overheads in terms of measuring entropy...
>
> Any more thoughts?
>
> Thank you!

I'm not aware off the top of my head what built-in commands or third-party tools might be available in MATLAB. You did make your overall goal clear in your first posts, so I was suggesting file compression utilities as an indirect measure of the entropy of a given data set.

This can work if the data set is large enough. For example, as a test I just compressed a binary 1591200x15 matrix of double-precision values representing a time series of 24-bit measurements from an array of magnetometers. WinZip compresses the original 190,944,400 byte file to 34,162,706 bytes using its maximum compression setting. An equal size binary array filled with pseudorandom numbers compressed to 179,929,479 bytes using the same setting. This difference seems reasonable given the higher entropy of the random set.

If you are dealing with very small files, then agreed, any file compression/decompression header overhead would likely make this less useful.
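
A scaled-down version of the same experiment can be run without leaving MATLAB using the built-in gzip; the data below is purely illustrative:

    x = sin(0.001*(1:1e6)) + 1e-3*randn(1, 1e6);  % stand-in for a measured time series
    fid = fopen('meas.bin', 'w'); fwrite(fid, x, 'double'); fclose(fid);
    gzip('meas.bin');                             % built-in gzip, writes meas.bin.gz
    a = dir('meas.bin'); b = dir('meas.bin.gz');
    fprintf('structured data:   %d -> %d bytes\n', a.bytes, b.bytes);

    y = rand(1, 1e6);                             % pseudorandom comparison set
    fid = fopen('rnd.bin', 'w'); fwrite(fid, y, 'double'); fclose(fid);
    gzip('rnd.bin');
    a = dir('rnd.bin'); b = dir('rnd.bin.gz');
    fprintf('pseudorandom data: %d -> %d bytes\n', a.bytes, b.bytes);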

Subject: How do I compress an array of floating numbers in Matlab?

From: robert bristow-johnson

Date: 5 Apr, 2010 00:23:22

Message: 11 of 17

On Apr 4, 1:58 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
...
> Continuing, the output of a linear-congruential random number
> generator is also easy to predict if you know the constants of
> the generator.

yeah, i guess you need a couple of constants and the initial seed
value. but don't you also need to somehow encode the rng algorithm,
too?

>  If you don't, and you have a big enough sample,
> then you can likely find the pattern.  (If you have the bits
> exactly, though I am not sure how long it would take.)
>
> If you have, say, sin() of the linear-congruential number
> stream then it is likely much more difficult.  

it will look different in a histogram. suppose the rng was scaled to
be uniformly distributed over a segment whose length is a multiple of
2*pi; then the p.d.f. of its sine would go up as it approaches +1 or -1.

r b-j

Subject: How do I compress an array of floating numbers in Matlab?

From: Glen Herrmannsfeldt

Date: 5 Apr, 2010 04:39:43

Message: 12 of 17

In comp.dsp robert bristow-johnson <rbj@audioimagination.com> wrote:
> On Apr 4, 1:58 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

>> Continuing, the output of a linear-congruential random number
>> generator is also easy to predict if you know the constants of
>> the generator.
 
> yeah, i guess you need a couple of constants and the initial seed
> value. but don't you also need to somehow encode the rng algorithm,
> too?

Well, linear congruential pretty much means multiply by
a constant, add a constant (possibly zero) and modulo a constant.
I am not actually sure how long it takes, given a sufficiently
long sample of the output, to find the constants.
 
>> If you don't, and you have a big enough sample,
>> then you can likely find the pattern. (If you have the bits
>> exactly, though I am not sure how long it would take.)

>> If you have, say, sin() of the linear-congruential number
>> stream then it is likely much more difficult.
 
> it will look different in a histogram. suppose the rng was scaled to
> be uniformly distributed over a segment as long as any multiple of
> 2pi, then the p.d.f. would go up as it approaches +1 or -1.

Yes you could do that. But assuming that you have the ability
to find the constants for an LCG from the output, it is much
harder if you don't have all the bits of the generator output.
If, for example, you have the single precision sine then you
likely don't have enough bits after taking the arcsine.

-- glen

Subject: How do I compress an array of floating numbers in Matlab?

From: robert bristow-johnson

Date: 5 Apr, 2010 05:51:29

Message: 13 of 17

On Apr 5, 12:39 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> In comp.dsp robert bristow-johnson <r...@audioimagination.com> wrote:
>
> > On Apr 4, 1:58 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> >> Continuing, the output of a linear-congruential random number
> >> generator is also easy to predict if you know the constants of
> >> the generator.
> > yeah, i guess you need a couple of constants and the initial seed
> > value.  but don't you also need to somehow encode the rng algorithm,
> > too?
>
> Well, linear congruential pretty much means multiply by
> a constant, add a constant (possibly zero) and modulo a constant.

right. maybe it's a dumb point, but there are other pseudo-r.n.g.
algs (that don't necessarily produce good r.n.), and there are
zillions of different permutations. it just seems to me that a
complete encoding might include information for how the r.n.g. alg
works, besides any seed numbers. (sorta like a code book, what Luna
doesn't want to see in a header, but what i think sorta belongs.)

> I am not actually sure how long it takes, given a sufficiently
> long sample of the output, to find the constants.
>

i might think it would be a bitch. especially with a weird modulo.

> >> If you don't, and you have a big enough sample,
> >> then you can likely find the pattern. (If you have the bits
> >> exactly, though I am not sure how long it would take.)
> >> If you have, say, sin() of the linear-congruential number
> >> stream then it is likely much more difficult.
> > it will look different in a histogram.  suppose the rng was scaled to
> > be uniformly distributed over a segment as long as any multiple of
> > 2pi, then the p.d.f. would go up as it approaches +1 or -1.
>
> Yes you could do that.  But assuming that you have the ability
> to find the constants for an LCG from the output,

are you assuming that?

> it is much
> harder if you don't have all the bits of the generator output.

i just think it would be very hard, in nearly any case.

> If, for example, you have the single precision sine then you
> likely don't have enough bits after taking the arcsine.  

with round-off, is this sine mapping one-to-one? if not, then the
arcsine won't be able to undo it losslessly.

r b-j

Subject: How do I compress an array of floating numbers in Matlab?

From: Jan Simon

Date: 5 Apr, 2010 21:29:04

Message: 14 of 17

Dear Luna!

> I have a vector of real numbers in Matlab. How do I compress them? Of
> course this has to be lossless, since I need to be able to recover
> them.
>
> The goal is to study the Shannon rate and entropy of these real
> numbers, so I decide to compress them and see how much compression
> ratio I can have.
>
> I don't need to write the result into compressed files, so those
> headers, etc. are just overhead for me which affect me calculating the
> Entropy... so I just need a bare version of the compress ratio...

Michael Kleder's function compresses data in memory with zlib:
  http://www.mathworks.com/matlabcentral/fileexchange/8899

E.g. for sin(1:1e5) this saves 5% memory. 7-zip reduces the file by at least 25%.
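
For reference, the same in-memory idea can be sketched with the Java zlib classes that ship with MATLAB (this is only an illustration of the approach, not Kleder's code):

    x = sin(1:1e5);                                  % the example from above
    bytes = typecast(x, 'uint8');                    % raw bytes of the double array
    buf = java.io.ByteArrayOutputStream();
    zs  = java.util.zip.DeflaterOutputStream(buf);   % zlib deflate
    zs.write(bytes, 0, numel(bytes));
    zs.close();
    fprintf('zlib in memory: %d -> %d bytes\n', numel(bytes), numel(buf.toByteArray()));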

Good luck, Jan

Subject: How do I compress an array of floating numbers in Matlab?

From: TideMan

Date: 5 Apr, 2010 23:32:43

Message: 15 of 17

On Apr 6, 9:29 am, "Jan Simon" <matlab.THIS_Y...@nMINUSsimon.de>
wrote:
> Dear Luna!
>
> > I have a vector of real numbers in Matlab. How do I compress them?  Of
> > course this has to be lossless, since I need to be able to recover
> > them.
>
> > The goal is to study the Shannon rate and entropy of these real
> > numbers, so I decide to compress them and see how much compression
> > ratio I can have.
>
> > I don't need to write the result into compressed files, so those
> > headers, etc. are just overhead for me which affect me calculating the
> > Entropy... so I just need a bare version of the compress ratio...
>
> Michael Kleder's function compresses data in the memory with the zlib:
>  http://www.mathworks.com/matlabcentral/fileexchange/8899
>
> E.g. for sin(1:1e5) this saves 5% memory. 7-zip reduces the file by at least 25%.
>
> Good luck, Jan

An entirely different approach is "wavelet shrinkage".
Google it.
It's easy to do in Matlab if you have the wavelet toolbox.
I use the techniques for denoising and despiking, but I've never tried
to compress data with them.
I use Shannon entropy to figure out the optimum mother wavelet.
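
A sketch of that wavelet comparison (Wavelet Toolbox required; the signal, candidate wavelets, and decomposition level are arbitrary choices):

    x = sin(0.01*(1:1e4)) + 0.1*randn(1, 1e4);    % placeholder signal
    names = {'db2', 'db4', 'sym8', 'coif3'};      % candidate mother wavelets
    for k = 1:numel(names)
        c = wavedec(x, 5, names{k});              % 5-level decomposition (Wavelet Toolbox)
        fprintf('%-6s  Shannon entropy of coefficients: %.1f\n', names{k}, wentropy(c, 'shannon'));
    end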

Subject: How do I compress an array of floating numbers in Matlab?

From: Luna Moon

Date: 6 Apr, 2010 17:54:30

Message: 16 of 17

On Apr 5, 5:29 pm, "Jan Simon" <matlab.THIS_Y...@nMINUSsimon.de>
wrote:
> Dear Luna!
>
> > I have a vector of real numbers in Matlab. How do I compress them?  Of
> > course this has to be lossless, since I need to be able to recover
> > them.
>
> > The goal is to study the Shannon rate and entropy of these real
> > numbers, so I decide to compress them and see how much compression
> > ratio I can have.
>
> > I don't need to write the result into compressed files, so those
> > headers, etc. are just overhead for me which affect me calculating the
> > Entropy... so I just need a bare version of the compress ratio...
>
> Michael Kleder's function compresses data in the memory with the zlib:
>  http://www.mathworks.com/matlabcentral/fileexchange/8899
>
> E.g. for sin(1:1e5) this saves 5% memory. 7-zip reduces the file by at least 25%.
>
> Good luck, Jan

So how about this approach: I first write the floating-point numbers to
a TEXT file, then call WinZip or 7-Zip from within Matlab, measure the
file size before and after compression, and compute the ratio.
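
A rough version of that measurement using MATLAB's built-in zip; writing the raw bytes with fwrite instead of formatted text keeps ASCII formatting from inflating the file (all file names here are placeholders):

    x = randn(1, 1e5);                            % the vector to be measured (placeholder)
    fid = fopen('x.bin', 'w'); fwrite(fid, x, 'double'); fclose(fid);
    zip('x.zip', 'x.bin');                        % MATLAB's built-in zip
    a = dir('x.bin');  b = dir('x.zip');
    fprintf('compression ratio: %.3f\n', a.bytes / b.bytes);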

Subject: How do I compress an array of floating numbers in Matlab?

From: Luna Moon

Date: 6 Apr, 2010 17:55:04

Message: 17 of 17

On Apr 5, 7:32 pm, TideMan <mul...@gmail.com> wrote:
> On Apr 6, 9:29 am, "Jan Simon" <matlab.THIS_Y...@nMINUSsimon.de>
> wrote:
>
>
>
> > Dear Luna!
>
> > > I have a vector of real numbers in Matlab. How do I compress them?  Of
> > > course this has to be lossless, since I need to be able to recover
> > > them.
>
> > > The goal is to study the Shannon rate and entropy of these real
> > > numbers, so I decide to compress them and see how much compression
> > > ratio I can have.
>
> > > I don't need to write the result into compressed files, so those
> > > headers, etc. are just overhead for me which affect me calculating the
> > > Entropy... so I just need a bare version of the compress ratio...
>
> > Michael Kleder's function compresses data in the memory with the zlib:
> >  http://www.mathworks.com/matlabcentral/fileexchange/8899
>
> > E.g. for sin(1:1e5) this saves 5% memory. 7-zip reduces the file by at least 25%.
>
> > Good luck, Jan
>
> An entirely different approach is "wavelet shrinkage".
> Google it.
> It's easy to do in Matlab if you have the wavelet toolbox.
> I use the techniques for denoising and despiking, but I've never tried
> to compress data with them.
> I use Shannon entropy to figure out the optimum mother wavelet.

Sounds good. I guess the question is how to compute the Shannon entropy
of a sequence of floating-point numbers?
