decimalToBinaryVector

Convert decimal value to binary vector

Syntax

  • decimalToBinaryVector(decimalNumber)
  • decimalToBinaryVector(decimalNumber,numberOfBits)
  • decimalToBinaryVector(decimalNumber,numberOfBits,bitOrder)
  • decimalToBinaryVector(decimalNumber,[],bitOrder)

Description

decimalToBinaryVector(decimalNumber) converts a positive decimal number to a binary vector, represented using the minimum number of bits.

decimalToBinaryVector(decimalNumber,numberOfBits) converts a decimal number to a binary vector with the specified number of bits.

decimalToBinaryVector(decimalNumber,numberOfBits,bitOrder) converts a decimal number to a binary vector with the specified number of bits in the specified bit ordering.

decimalToBinaryVector(decimalNumber,[],bitOrder) converts a decimal number to a binary vector, using the minimum number of bits, in the specified bit ordering.

Examples

Convert a Decimal to a Binary Vector

decimalToBinaryVector(6)
ans =

     1     1     0

Convert an Array of Decimals to a Binary Vector Array

decimalToBinaryVector(0:4)
ans =

     0     0     0
     0     0     1
     0     1     0
     0     1     1
     1     0     0

Convert a Decimal into a Binary Vector with a Specified Number of Bits

decimalToBinaryVector(6, 8, 'MSBFirst')
ans =

     0     0     0     0     0     1     1     0

Convert a Decimal into a Binary Vector with LSB First

decimalToBinaryVector(6, [], 'LSBFirst')
ans =

     0     1     1

Convert an Array of Decimals into a Binary Vector Array with LSB First

decimalToBinaryVector(0:4, 4, 'LSBFirst')
ans =

     0     0     0     0
     1     0     0     0
     0     1     0     0
     1     1     0     0
     0     0     1     0

Input Arguments

decimalNumber — Number to convert to binary vector
numeric

The number to convert to a binary vector, specified as a nonnegative integer scalar or a vector of nonnegative integers.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

numberOfBits — Number of bits used to represent the decimal number
numeric

The number of bits used to represent each decimal number. This argument is optional; if you omit it or pass an empty matrix ([]), the function uses the minimum number of bits needed to represent the input. For a vector input, that minimum is determined by the largest value in the vector.
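As an illustration of this rule (a Python sketch, not the toolbox implementation; the helper name min_bits is hypothetical), the minimum width equals the bit length of the largest input value, i.e. floor(log2(n)) + 1 for n > 0:

```python
def min_bits(values):
    # Minimum number of bits to represent the largest value;
    # at least 1 bit so that an input of 0 still yields one element.
    largest = max(values)
    return max(1, largest.bit_length())  # bit_length() == floor(log2(n)) + 1 for n > 0

print(min_bits([6]))       # -> 3, the width used by decimalToBinaryVector(6)
print(min_bits(range(5)))  # -> 3, the width used by decimalToBinaryVector(0:4)
```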

bitOrder — Bit order for binary vector representation
MSBFirst (default) | LSBFirst

Bit order for the binary vector representation, specified as one of the following:

  • 'MSBFirst' — The first element of the output contains the most significant bit of the decimal number.

  • 'LSBFirst' — The first element of the output contains the least significant bit of the decimal number.
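The padding and bit-ordering behavior documented above can be sketched in Python (an illustrative re-implementation under the stated rules, not the toolbox source; the function name decimal_to_binary_vector is hypothetical):

```python
def decimal_to_binary_vector(numbers, number_of_bits=None, bit_order='MSBFirst'):
    """Sketch of decimalToBinaryVector's documented behavior."""
    if isinstance(numbers, int):
        numbers = [numbers]
    # Default width: minimum bits needed for the largest value (at least 1 for 0).
    min_bits = max(n.bit_length() for n in numbers) or 1
    bits = number_of_bits if number_of_bits is not None else min_bits
    rows = []
    for n in numbers:
        # Most significant bit first, zero-padded to the requested width.
        row = [(n >> i) & 1 for i in range(bits - 1, -1, -1)]
        if bit_order == 'LSBFirst':
            row.reverse()
        rows.append(row)
    return rows if len(rows) > 1 else rows[0]

print(decimal_to_binary_vector(6))                     # -> [1, 1, 0]
print(decimal_to_binary_vector(6, 8, 'MSBFirst'))      # -> [0, 0, 0, 0, 0, 1, 1, 0]
print(decimal_to_binary_vector(6, None, 'LSBFirst'))   # -> [0, 1, 1]
```

Passing None for number_of_bits plays the role of the empty matrix ([]) in the MATLAB signature.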
