The package comprises a graphical utility for placing uniform B-spline control points and seeing how the B-spline is redrawn as control points or control point weights are adjusted, plus functions to estimate B-splines with a known knot vector from a set of noisy data points, with either known or unknown associated parameter values.

Interactive interface: the user is shown a figure window with axes in which to choose control points of a uniform B-spline. As points are placed in the axes, the B-spline of the specified order is drawn progressively. The user may stop adding control points by pressing ENTER or ESC, or may place the last control point with a right mouse button click. Once done, control points may be adjusted with drag-and-drop: hold down the left mouse button over any control point and drag it to another location. Control point adjustment works in 3D; use the rotation tool to set a different camera position. It is also possible to set the x, y and z coordinates as well as the weight of a control point explicitly: click on the point, enter new values and hit ENTER.

Non-interactive interface: functions include calculating and drawing basis functions, computing points of a (weighted or unweighted) B-spline curve with de Boor's algorithm, and estimating B-spline control points from noisy data, either with or without parameter values associated with the observed data points.

From a programmer's perspective, this example illustrates how to use nested functions to extend variable scope, implement drag-and-drop operations, combine normalized and pixel units for control docking, and register multiple callbacks for a single event in an interactive user interface.

USAGE: The simplest way to get started is to run "bspline_gui", which activates the figure window to place B-spline control points interactively. Examples are bundled to illustrate various B-spline curve computation and approximation methods.
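The de Boor evaluation mentioned above can be sketched in a few lines (a minimal unweighted version for a clamped knot vector; the function name and signature are illustrative, not this package's actual API):

```matlab
function C = deboor_point(P, t, p, u)
% Evaluate a degree-p B-spline curve at parameter u by de Boor's algorithm.
% P: n-by-dim control points; t: knot vector of length n+p+1;
% u must lie in the half-open domain [t(p+1), t(n+1)).
i = find(t(1:end-1) <= u, 1, 'last');   % knot span: t(i) <= u < t(i+1)
d = P(i-p:i, :);                        % the p+1 control points that matter
for r = 1:p
    for j = p+1:-1:r+1                  % update in place, highest index first
        a = (u - t(i-p+j-1)) / (t(i+j-r) - t(i-p+j-1));
        d(j,:) = (1-a)*d(j-1,:) + a*d(j,:);
    end
end
C = d(p+1,:);                           % the point on the curve
end
```

The weighted (rational) case follows the same triangular scheme applied to the homogeneous coordinates [w*x w*y w*z w], dividing by the resulting weight at the end.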

Computes the B-spline approximation from a set of coordinates (control points). The number of points per interval (default: 10) and the order of the B-spline (default: 3) can be changed. Periodic boundaries can be used. It works in any dimension (even larger than 3). This code is inspired by that of Stefan Hueeber and Jonas Ballani [1].

Example (see image):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
rng(1)                           % Set random seed for reproducibility
XY = rand(5,2);                  % Random set of 5 points in 2D
BS2 = BSpline(XY);               % Default order=3 (quadratic)
BS3 = BSpline(XY,'order',4);     % order=4 -> cubic B-spline
BSper = BSpline(XY,'periodic',true);
h = plot(XY(:,1),XY(:,2),'-o',BS2(:,1),BS2(:,2),BS3(:,1),BS3(:,2),'--',BSper(:,1),BSper(:,2),'-.');
legend('Control polygon','Quadratic','Cubic','Quadratic periodic')
set(h,'LineWidth',2)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[1] http://m2matlabdb.ma.tum.de/download.jsp?MC_ID=7&SC_ID=7&MP_ID=485

A fast surface reconstruction is implemented in this set of codes. Given a 3D cloud of points accompanied by normal vectors, an implicit B-spline surface is reconstructed. If you use the code, please cite:

M. Rouhani and A. D. Sappa, "Implicit B-spline fitting using the 3L algorithm," IEEE International Conference on Image Processing (ICIP'11), 2011, pp. 893-896.

The program splineLength.m numerically calculates the arc length of an arbitrary B-spline. The numerical integration uses "waypoints" for high precision.
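The underlying idea can be sketched in a toy example (not splineLength.m itself): the arc length is the integral of the speed along the curve, and passing the spline breaks as 'Waypoints' tells integral() exactly where the integrand loses smoothness.

```matlab
% Arc length of the graph of a cubic spline s(u): L = int sqrt(1 + s'(u)^2) du
pp  = spline(0:4, [0 1 0 -1 0]);                  % an example cubic spline
dpp = mkpp(pp.breaks, pp.coefs(:,1:3).*[3 2 1]);  % its derivative, in ppform
speed = @(u) sqrt(1 + ppval(dpp, u).^2);
L = integral(speed, 0, 4, 'Waypoints', pp.breaks) % breaks = non-smooth points
```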

Using implicit B-splines for surface reconstruction from 3D point clouds. If you use the code, please cite:

M. Rouhani and A. D. Sappa, "Implicit B-spline fitting using the 3L algorithm," IEEE International Conference on Image Processing (ICIP'11), 2011.

This Spline toolbox provides the ability to define spline curves and surfaces according to the common definition with knot vectors, the order of the B-spline basis functions, and their coefficients. The spline objects can be evaluated, differentiated and visualized in multiple fashions, e.g. fast evaluation at grid points for surfaces, or the calculation of all sorts of curvatures. As the evaluation is based on applying Horner's scheme to the different polynomial segments, it is even possible to extend the functions continuously and evaluate at points that are not contained in the domain.

Furthermore, this toolbox allows the calculation of spline approximants for given pairs of function values, for both curves and surfaces. These can be extended by considering additional interpolation and smoothing conditions. A detailed description of all functionality, combined with several examples, can be found in the documentation.

This toolbox was created for use in a research project, so errors are only handled properly during the definition of a spline. Any other error usually originates from mismatching dimensions, e.g. from manual manipulation of properties or incorrectly defined knot vectors/coefficients. But feel free to contact me in case of any problem, as I am always trying to improve the program.

A numerical experiment described in http://dx.doi.org/10.1109/TIE.2007.909064 [*] is reproduced here. In [*] a forgetting mechanism is employed to robustify the control scheme. By contrast, in http://dx.doi.org/10.1109/IECON.2013.6700120 [**] weight constraints are used instead of forgetting, and these turn out to robustify the controller. Hence, the same idea has been tested here in the B-spline based repetitive neurocontroller proposed in [*]. To be clear, I haven't invented the controller introduced in this model; I've just modified the robustification mechanism used in [*]. You can play with both mechanisms and decide for yourself which of them is more suitable for your application. You can even combine them. It should be noted that [**] uses a global update rule, and the same robustification mechanisms are not necessarily equally effective in both controllers. For more information please see the m-files and our conference paper: M. Malkowski, B. Ufnalski and L. M. Grzesiak, "B-spline based repetitive controller revisited: error shift, higher-order polynomials and smooth pass-to-pass transition," ICSTCC 2015, http://ufnalski.edu.pl/proceedings/icstcc2015/ .

2D Digital Image Correlation Matlab software. The most up-to-date code is now on my GitHub: justinblaber/ncorr_2D_matlab

A MATLAB toolbox, 'bsspdfest', implementing nonparametric probability function estimation using normalized B-splines. The toolbox implements nonparametric probability function estimation procedures in one or more dimensions, using a B-spline series for one-dimensional data and a tensor-product B-spline series for multi-dimensional data. It takes advantage of the direct addressing of MATLAB arrays up to three dimensions and various vectorization approaches to speed up the computations. For data dimensions greater than three, indirect addressing is used, converting multi-dimensional indices into linear array addressing, which makes this case slower.

The toolbox supports the computation of the PDF, CDF, and survivor functions for data of all dimensions, as well as the inverse CDF (ICDF) and cumulative hazard functions for one-dimensional data. It also supports the creation and use of gridded interpolants to provide very fast approximate evaluation of the B-spline series or tensor-product series for the probability functions. Bounded domains are also supported for all dimensions.

Version 2.3.1 of the bsspdfest toolbox has just been released! This version now uses reflection for active boundaries on bounded or semi-infinite domains and supports bounded domains for data of all dimensions. A variety of performance improvements have also been made.

fastBSpline - A fast, lightweight class that implements non-uniform B-splines of any order.

Matlab's spline functions are very general, and this generality comes at the price of speed. For large-scale applications, including model fitting where some components of the model are defined in terms of splines, such as generalized additive models, a faster solution is desirable. The fastBSpline class implements a lightweight set of B-spline features, including evaluation, differentiation, and parameter fitting. The hard work is done by C code, resulting in up to 10x acceleration for evaluating splines and up to 50x acceleration when evaluating spline derivatives. Nevertheless, fastBSplines are manipulated using an intuitive, high-level object-oriented interface, thus allowing C-level performance without the messiness. Use CompileMexFiles to compile the required files. If mex files are not available, evaluation will be done in .m code, so you may still use the class if you can't use a compiler for your platform.

B-splines are defined in terms of basis functions:

y(x) = sum_i B_i(x,knots)*weights_i

B (the basis) is defined in terms of knots, a non-decreasing sequence of values. Each basis function is a piecewise polynomial of order length(knots)-length(weights)-1. The most commonly used B-spline is the cubic B-spline; in that case there are 4 more knots than there are weights. Another commonly used B-spline is the linear B-spline, whose basis functions are shaped like tents, and whose application results in piecewise linear interpolation. The class offers two static functions to fit the weights of a spline: lsqspline and pspline. It includes facilities for computing the basis B and the derivatives of the spline at all points.
Constructor:
sp = fastBSpline(knots,weights);

Example use:
%Fit a noisy measurement with a smoothness-penalized spline (p-spline)
x = (0:.5:10)';
y = sin(x*pi*.41-.9)+randn(size(x))*.2;
knots = [0,0,0,0:.5:10,10,10,10];
%Notice there are as many knots as observations
%Because there are so many knots, this is an exact interpolant
sp1 = fastBSpline.lsqspline(knots,3,x,y);
%Fit penalized on the smoothness of the spline
sp2 = fastBSpline.pspline(knots,3,x,y,.7);
clf;
rg = -2:.005:12;
plot(x,y,'o',rg,sp1.evalAt(rg),rg,sp2.evalAt(rg));
legend('measured','interpolant','smoothed');

fastBSpline properties:
outOfRange - Determines how the spline is extrapolated outside the range of the knots
knots - The knots of the spline (read only)
weights - The weights of the spline (read only)

fastBSpline methods:
fastBSpline - Construct a B-spline from weights at the knots
lsqspline - Construct a least-squares spline from noisy measurements
pspline - Construct a smoothness-penalized spline from noisy measurements
evalAt - Evaluate a spline at the given points
getBasis - Get the values of the underlying basis at the given points
Btimesy - Evaluate the product getBasis(x)'*y
dx - Returns another fastBSpline object which computes the derivative of the original spline

Disclaimer: fastBSpline is not meant to replace Matlab's spline functions; it does not include any code from the Mathworks.

B-splines are a natural signal representation for continuous signals, where many continuous-domain operations can be carried out exactly once the B-spline approximation has been made. The B-spline estimation procedure in this toolbox, using all-pole filters, is based on the classic papers by M. Unser and others [1,2,3]; it allows very fast estimation of B-spline coefficients when the sampling grid is uniform. Evaluation/interpolation is also a linear filter operation.

The toolbox has two layers: a set of functions for the fundamental operations on polynomial B-splines, and an object-oriented wrapper which keeps track of the properties of a spline signal and overloads common operators. The representation is dimensionality-independent, and much of the code is vectorized. Unit tests are included; these require the MATLAB xunit toolbox.

[1] M. Unser, A. Aldroubi, M. Eden, "B-Spline Signal Processing: Part I - Theory", IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 821-833, February 1993
[2] M. Unser, A. Aldroubi, M. Eden, "B-Spline Signal Processing: Part II - Efficient Design and Applications", IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 834-848, February 1993
[3] M. Unser, "Splines: A Perfect Fit for Signal and Image Processing", IEEE Signal Processing Magazine, vol. 16, no. 6, pp. 22-38, 1999
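The all-pole filtering idea from [1] can be sketched as follows (a toy version with zero boundary conditions, not this toolbox's API): for the cubic B-spline, the coefficients are obtained from the samples by a causal and an anti-causal first-order recursion with pole z1 = sqrt(3) - 2.

```matlab
% Cubic B-spline coefficients by recursive (all-pole) filtering, then
% verification by re-sampling the spline with the FIR kernel [1 4 1]/6.
s  = sin(linspace(0, 2*pi, 64));               % uniform samples
z1 = sqrt(3) - 2;                              % pole of the cubic prefilter
c  = filter(1, [1 -z1], s);                    % causal pass: 1/(1 - z1*z^-1)
c  = fliplr(filter(1, [1 -z1], fliplr(c)));    % anti-causal pass: 1/(1 - z1*z)
c  = -6*z1 * c;                                % gain so that B(z)*C(z) = S(z)
s_rec = conv(c, [1 4 1]/6, 'same');            % spline evaluated at the grid
max(abs(s_rec(10:end-10) - s(10:end-10)))      % small, away from boundaries
```

The proper toolbox handles boundary conditions (e.g. mirror extension) instead of the zero initial conditions used here.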

In this code, the image is defined using B-spline level-set functions, which are deformed using a composition approach. The computation consists of efficient algorithms for calculating the B-spline coefficients and the gradients of the images using B-spline filters.

Given the number of control points (N), the order of the splines (K), a knot vector (T), and the name of a txt file, the function basisfunc_NBS computes the nonrational (unweighted) basis functions N_ik(t) for each segment up to the Kth order, and writes the data to the txt file. With an additional input variable w for the weights, the function basisfunc_NURBS computes the rational (weighted) basis functions R_ik(t) for each segment at the Kth order.
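The Cox-de Boor recursion behind such basis-function computations can be sketched as follows (illustrative only; basisfunc_NBS's actual interface takes a file name and writes the results to a text file):

```matlab
function N = basis_all(T, k, t)
% All B-spline basis functions of order k at scalar parameter t.
% T: knot vector; uses the half-open convention T(i) <= t < T(i+1).
N = zeros(1, numel(T) - 1);
N(T(1:end-1) <= t & t < T(2:end)) = 1;        % order 1: piecewise constant
for r = 2:k                                    % raise the order
    for i = 1:numel(T) - r                     % ascending i reads old N(i+1)
        a = 0; b = 0;
        if T(i+r-1) > T(i),   a = (t - T(i))   / (T(i+r-1) - T(i))   * N(i);   end
        if T(i+r)   > T(i+1), b = (T(i+r) - t) / (T(i+r)   - T(i+1)) * N(i+1); end
        N(i) = a + b;
    end
end
N = N(1:numel(T) - k);                         % the n = numel(T)-k functions
end
```

The rational basis of basisfunc_NURBS would then be R_i = w(i)*N(i) / sum(w.*N).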

This model employs the idea presented in http://dx.doi.org/10.1109/TNN.2004.824268 . Some modifications described in http://www.mathworks.com/matlabcentral/fileexchange/49023-b-spline-based-repetitive-neurocontroller are implemented. A very concise (yet still readable) C-code developed by Michal Malkowski for http://www.mathworks.com/matlabcentral/fileexchange/49077-b-spline-network-based-repetitive-controller--c-code- is used. The plant is identical to that in http://www.mathworks.com/matlabcentral/fileexchange/48791-iterative-learning-motion-control . This solution comes with some drawbacks; I encourage you to identify their roots on your own -- and fix them :). The quadratic spline is smoother than the linear one, but is it better? When testing electric drives, always examine the shape of the current(s). And remember to click the Build button in the S-Function block before attempting to run the model. More info: M. Malkowski, B. Ufnalski and L. M. Grzesiak, "B-spline based repetitive controller revisited: error shift, higher-order polynomials and smooth pass-to-pass transition," 19th International Conference on System Theory, Control and Computing (ICSTCC), 2015, http://ufnalski.edu.pl/proceedings/icstcc2015/ .

This script provides an easy representation of nonuniform rational B-splines (NURBS).

This model merges ideas presented in https://www.mathworks.com/matlabcentral/fileexchange/49023-b-spline-based-repetitive-neurocontroller and https://www.mathworks.com/matlabcentral/fileexchange/47847-plug-in-direct-particle-swarm-repetitive-controller. The novelty is that the B-spline based repetitive controller has its weights trained using PSO.

A function to compute the B-spline points on a grid.

Usage: y = spline_recursion(u,n)
n is the order of the spline, u is the grid point.

Example:
t = linspace(-2,10,10000);
y1 = spline_recursion(t,2);
y2 = spline_recursion(t,3);
y3 = spline_recursion(t,4);
y4 = spline_recursion(t,10);
subplot(2,2,1), plot(t,y1), title('b-spline order = 2');
subplot(2,2,2), plot(t,y2), title('b-spline order = 3');
subplot(2,2,3), plot(t,y3), title('b-spline order = 4');
subplot(2,2,4), plot(t,y4), title('b-spline order = 10');

Affine and B-spline grid based registration and data fitting of two 2D color/grayscale images, 3D volumes, or point data. Registration can be done intensity/pixel based, landmark/corresponding-points based (see OpenSurf), or a combination.

Pixel-based registration: This function is an (enhanced) implementation of the B-spline registration algorithm in D. Rueckert et al., "Nonrigid Registration Using Free-Form Deformations: Application to Breast MR Images", including the smoothness penalty of Rueckert (thin-sheet-of-metal bending energy) and a Jacobian (diffeomorphic) function. Also included is localized normalized mutual information as the registration error, allowing the images or volumes to be of different type/modality, for instance MRI T1 and T2 patient scans.

How it works: A grid of B-spline control points is constructed which controls the transformation of an input image. An error measure quantifies the registration error between the moving and static image. The quasi-Newton Matlab optimizer fminlbfgs (also on Mathworks) is used to move the control points to achieve the optimal registration between both images with minimal registration error.

Usage:
- The function image_registration.m is easy to use, contains examples in the help, and will fit most applications. (If you want to write your own specialized registration code, study the registration examples.)
- The function point_registration does fast fitting of a B-spline grid to 2D/3D corresponding points, for landmark-based registration.
- There is also the function manually_warp_images, which allows control-grid changes with the mouse, to get better registration.

First, you need to compile the mex/C code with compile_c_files.m. (2D registration also works without mex files but will be slower.) The multi-threaded mex code supports Windows, Linux (and Mac OS?).

Some features:
- 2D/3D Eulerian strain tensor images can be made from the transformation fields, for example to describe cardiac motion in the images.
- Landmarks can be used for already-known corresponding points (for example from SIFT). The influence of every landmark on the registration process can be adjusted.
- It is possible to register a number of movie frames by using the registration grid of the previous two images as the initial registration grid of the next two images.
- It is possible to mask parts of the images, to decrease or increase the influence of an image structure on the registration result.

Literature:
- D. Rueckert et al., "Nonrigid Registration Using Free-Form Deformations: Application to Breast MR Images"
- Seungyong Lee, George Wolberg, and Sung Yong Shin, "Scattered Data Interpolation with Multilevel B-splines"

Note: B-spline registration is slower and more complex than demon registration, see http://www.mathworks.fr/matlabcentral/fileexchange/loadFile.do?objectId=21451 . Why still use B-spline registration? Because the resulting transformation field corresponds better to real-life deformation than the transformation fields from fluid registration.

Please report bugs, successes and questions.

Coefficients of the cubics for nonuniform cubic spline interpolation. The program works for any combination of first or second derivative end conditions (so, as special cases, it includes natural and clamped cubic splines). If you want to evaluate the spline, use splineA.m.

This model is a C-code version of http://www.mathworks.com/matlabcentral/fileexchange/49023-b-spline-based-repetitive-neurocontroller uploaded by Bartlomiej Ufnalski.

A little piece of code enabling quick modification of spline objects: clipping, shifting, and scaling in both x and y.

This is a function to draw a closed cubic B-spline, based on the book by David Salomon (great book!), page 261 (closed cubic B-spline curve).

Usage:
closed_cubic_bspline(P,1) will compute and plot the closed B-spline.
closed_cubic_bspline(P) will only compute the interpolated points.

Notes: In my program, I used a step of 1/100; if you need higher density, you can modify the value of nj (set to 100) on line 35.

Direct spline interpolation of noisy data may result in a curve with unwanted oscillations. This is particularly bad if the slope of the curve is important. A better approach is to reduce the degrees of freedom of the spline and use the method of least squares to fit the spline to the noisy data. The degrees of freedom are connected to the number of breaks (knots), so the smoothing effect is controlled by the selection of breaks.

SPLINEFIT:
- A curve fitting tool based on B-splines
- Splines on ppform (piecewise polynomial)
- Any spline order (cubic splines by default)
- Periodic boundary conditions
- Linear constraints on function values and derivatives
- Robust fitting scheme
- Operates on ND arrays in the same way as SPLINE
- Nonuniform distributions of breaks

M-FILES ALSO INCLUDED:
examples - Examples for splinefit
ppdiff - Differentiate piecewise polynomial
ppint - Integrate piecewise polynomial
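A typical call looks like the following (a sketch based on the description above, assuming a scalar third argument selects the number of pieces; see the bundled examples for the full range of options):

```matlab
% Least-squares spline fit of noisy samples; fewer breaks = more smoothing.
x = linspace(0, 2*pi, 200);
y = sin(x) + 0.2*randn(size(x));      % noisy data
pp = splinefit(x, y, 8);              % cubic spline with 8 pieces
plot(x, y, '.', x, ppval(pp, x), '-')
legend('data', 'least-squares spline')
```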

Simple and fast 2D interpolation app with a graphical user interface (GUI)! A Matlab app for two-dimensional interpolation of a scattered point cloud dataset (XYZ): Linear, Nearest neighbor, Natural neighbor, Cubic, and Spline interpolation. Reads and exports data sets in ASCII cloud format (*.txt, *.asc, *.xyz, *.pts, *.csv). Tuneable output grid interval in meters (uniform grid). Option for extrapolation. Created with App Designer and Matlab 2019b libraries. (Contact and support: yorda.utama@gmail.com)

A Matlab app for two-dimensional interpolation of scattered points (xyz): Linear, Nearest neighbor, Natural neighbor, Cubic, and Spline interpolation. Reads and exports data sets in ASCII cloud format (*.txt, *.asc, *.xyz, *.pts, *.csv). Tuneable output grid interval in meters (uniform grid). Option for extrapolation. Created with App Designer and Matlab 2019b libraries.

Version 1.0 notes:
- Read input point cloud (xyz) in ASCII cloud format (*.txt, *.asc, *.xyz, *.pts, *.csv)
- Option to add translation/scaling to the original data to improve computational efficiency
- Plot original data
- Perform 2D interpolation (xyz) at a tuneable output grid interval (in meters)
- Choice of 2D interpolation method: Linear, Nearest neighbor, Natural neighbor, Cubic, or Spline
- Option for extrapolation
- Export output point cloud (xyz) in ASCII cloud format (*.txt, *.asc, *.xyz, *.pts, *.csv)

Version 1.1 notes:
- Resolved bugs
- Added 2D plot to preview data or result
- Added 3D plot to preview result

Kusakin D. V., Porshnev D. V., Safiullin N. T., Methods for reconstruction of discrete-time signals: fundamentals of the theory, software tools, and analysis accuracy.

Software implementations and an accuracy analysis of methods for the reconstruction (interpolation) of discrete-time signals (real deterministic functions) from uniform samples. The project includes: global interpolation methods (polynomial interpolation, trigonometric interpolation, the Whittaker-Kotelnikov interpolation formula) and local interpolation methods (linear, quadratic, spline, and B-spline interpolation).

Line-Profile Analysis Software (LIPRAS) is a graphical user interface for least-squares fitting of Bragg peaks in diffraction data. For any region of the input data, the user can choose which profile functions to apply to the fit, constrain profile functions, and view the resulting fit in terms of the profile functions chosen. A Bayesian inference analysis can be carried out on the resulting least-squares result(s) to generate a full description of the errors for all profile parameters.

Authors: Giovanni Esteves, Klarissa Ramos, Chris Fancher, and Jacob Jones

• Quickly extract relevant peak information: position, full width at half maximum (FWHM), and intensity
• Conduct Bayesian inference on least-squares results using a Markov chain Monte Carlo algorithm (needs the Statistics and Machine Learning Toolbox)
• Analyzes files with a different number of data points and/or X-values (however, check the fitting range before attempting)
• Customize the background fit by either treating it separately (Polynomial or Spline) or including it in the least-squares routine (Polynomial only)
• Choose from 5 different peak-shape functions: Gaussian, Lorentzian, Pseudo-Voigt, Pearson VII, and Asymmetric Pearson VII
• Peak-shape functions can be constrained in terms of intensity, peak position, FWHM, and mixing coefficient
• Automatically calculate Cu-Kalpha2 peaks when working with laboratory X-ray data
• Fit up to 20 peaks in the current profile region
• For multiple diffraction patterns, results from the previous fit can be used as the starting parameters for the next fit
• Visualize results with a plot of the resulting peak fit and a residual plot, allowing you to see which peaks make up the overall fit
• Resulting coefficient values can be viewed by file number to quickly spot trends in the data
• Parameter files can be written and used to recreate fits; they detail which fit parameters and profile shape functions were used
• Accepts the following file types: .xy, [.ras, .acs] (Rigaku), .xls, .xlsx, .fxye, .xrdml (Panalytical), .chi, .csv (Windows only)

LIPRAS is currently updated through GitHub: https://github.com/SneakySnail/LIPRAS
Requires MATLAB 2016b, Curve Fitting Toolbox, and GUI Layout Toolbox to run. The Statistics and Machine Learning Toolbox is required for Bayesian analysis.

If you use LIPRAS for your research, please cite it (choose one):
1. Giovanni Esteves, Klarissa Ramos, Chris M. Fancher, and Jacob L. Jones. LIPRAS: Line-Profile Analysis Software. (2017). DOI: 10.13140/RG.2.2.29970.25282/3
2. Giovanni Esteves, Klarissa Ramos, Chris M. Fancher, and Jacob L. Jones. LIPRAS: Line-Profile Analysis Software. (2017). https://github.com/SneakySnail/LIPRAS

traj_gen is a continuous trajectory generation package where high-order derivatives along the trajectory are minimized while satisfying waypoint (equality) and axis-parallel box (inequality) constraints. The objective and constraints are formulated as a quadratic program (QP) to cater to real-time performance. To parameterize a trajectory, we use two types of curve: 1) piecewise polynomials [1,2] and 2) a sequence of points [3]. The difference is the optimization variables.

a. Piecewise polynomials (polyTrajGen class): Defines the primitive of the curve as a polynomial spline. The optimization target is either the polynomial coefficients [1] or the free end-derivatives of spline segments [2] (can be set in the constructor). In general, the latter has fewer optimization variables, as it reduces the number of variables by the number of equality constraints.

b. A sequence of points (optimTrajGen class): Does not limit the primitive of the curve. The optimization target is a finite set of points, and the final curve is defined as a linear interpolant of that set. The point density (number of points per unit time) should be set in the constructor. In exchange for the unlimited representation capability of the curve, the size of the optimization is directly affected by the point density.

In this package, we use pins to accommodate the two kinds of constraint: equality (fix pin) and inequality (loose pin). Pins can be imposed regardless of the order of derivative. A fix pin refers to a waypoint constraint, and a loose pin denotes an axis-parallel box constraint. A pin is a triplet (time t, order of derivative d, value x), where x is a vector in the case of a fix pin and a pair [xl xu] for a loose pin.

[1] Mellinger, Daniel, and Vijay Kumar. "Minimum snap trajectory generation and control for quadrotors." 2011 IEEE International Conference on Robotics and Automation. IEEE, 2011.
[2] Richter, Charles, Adam Bry, and Nicholas Roy. "Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments." Robotics Research. Springer, Cham, 2016. 649-666.
[3] Ratliff, Nathan, et al. "CHOMP: Gradient optimization techniques for efficient motion planning." 2009 IEEE International Conference on Robotics and Automation. IEEE, 2009.

This GUI visualizes the basis functions of spline spaces. Different bases can be chosen from the following: 1) B-Splines 2) Cardinal Splines

Class to enable B-spline signal and image processing. Based on the papers:

M. Unser, A. Aldroubi, and M. Eden, "B-Spline Signal Processing: Part I - Theory," IEEE Trans Sig Proc, 41(2):821-833, Feb 1993.
M. Unser, A. Aldroubi, and M. Eden, "B-Spline Signal Processing: Part II - Efficient Design and Applications," IEEE Trans Sig Proc, 41(2):834-848, Feb 1993.

The class constructor, bsarray.m, takes as input an n-dimensional array and computes B-spline coefficients for interpolating or smoothing splines of any order less than or equal to 7. Other member functions enable various computations/manipulations:

indirectFilter.m: reconstructs a signal from B-spline coefficients stored in a bsarray object
partial.m: analytically computes the partial derivative, returning a bsarray object of one less degree in the desired dimension
interp1.m, interp2.m, interp3.m: overloaded versions of interp1, interp2, and interp3 that operate on bsarray objects to interpolate the original data

See the help on each of these functions for instructions on how to call them.

Sometimes your 1D data has gaps. This may be due to a faulty sensor or irregular sampling intervals. When this is the case, you may want to interpolate over short gaps in your data, but where no data exist for long periods of time, it's inappropriate to interpolate. This function performs interpolation over small gaps in 1D data.

Syntax
vq = interp1gap(v)
vq = interp1gap(x,v,xq)
vq = interp1gap(...,maxgapval)
vq = interp1gap(...,'method')
vq = interp1gap(...,'interpval',vval)
vq = interp1gap(...,'extrap',extrapval)

Description
vq = interp1gap(v) linearly interpolates to give undefined (NaN) values of v.
vq = interp1gap(x,v,xq) interpolates to find vq, the values of the underlying function v at the points in the vector or array xq.
vq = interp1gap(...,maxgapval) specifies a maximum gap in the independent variable over which to interpolate. If x and xq are given, units of maxgapval match the units of x. If x and xq are not provided, units of maxgapval are indices of v, assuming any gaps in v are represented by NaN. If maxgapval is not declared, interp1gap will interpolate over infinitely large gaps.
vq = interp1gap(...,'method') specifies a method of interpolation. Default method is 'linear', but it can be any of the following:
'nearest' nearest neighbor interpolation
'linear' linear interpolation (default)
'spline' cubic spline interpolation
'pchip' piecewise cubic Hermite interpolation
'cubic' (same as 'pchip')
'v5cubic' cubic interpolation used in MATLAB 5
'next' next neighbor interpolation (Matlab R2014b or later)
'previous' previous neighbor interpolation (Matlab R2014b or later)
vq = interp1gap(...,'interpval',vval) specifies a value with which to replace vq elements corresponding to large gaps. Default is NaN.
vq = interp1gap(...,'extrap',extrapval) returns the scalar extrapval for out-of-range values. NaN and 0 are often used for extrapval.
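For example, using the syntax above (data values are made up for illustration):

```matlab
% Interpolate across gaps shorter than 0.5 units; leave the long gap alone.
x  = [0:0.1:2, 3.5:0.1:5];                % sampling with a 1.5-unit gap
v  = sin(x);
xq = 0:0.05:5;
vq = interp1gap(x, v, xq, 0.5, 'pchip');  % maxgapval = 0.5
plot(x, v, 'o', xq, vq, '.')              % the long gap remains NaN
```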

ffts performs the fast Fourier transform (FFT) on scattered data.

Fq = ffts(X,V,Xq) or Fq = ffts(X,V,Xq,method,window)

Inputs:
X : array with positions [m x 1]
V : array with values [m x 1]
Xq : node locations [n x 1], with equally spaced points (see linspace)
(optional)
method : 1. 'grid', gridding (default); 2. 'fit', B-spline fit
window : 1. 'bspline', B-spline (default); 2. 'kaiser', Kaiser-Bessel function

Outputs:
Fq : the Fourier spectrum of the scattered data [n x 1]

Gridding:
1. The scattered values (V) at positions (X) are smoothed (convolution) by a kernel onto a regular grid, with the grid a 2-times oversampled version of Xq.
2. The data is multiplied by a set of density compensation weights, calculated as in step 1 but with all values set to 1; the density compensation is 1 divided by this result.
3. The values on the regular grid are converted to the Fourier domain using an FFT.
4. The field of view is trimmed to the size of Xq. This compensates for the oversampling of step 1; the sidelobes due to the finite window size are now clipped off.
5. The Fourier domain is multiplied by an apodization correction function, which is 1/(Fourier transform of the kernel of step 1), to remove the effect of the kernel.

B-spline fit:
1. B-splines sampled on a regular grid are fitted to the values (V) at positions (X), so they least-squares approximate the data.
2. At the regular grid (Xq), values are interpolated using the fitted B-splines.
3. The FFT is done on the B-spline interpolated points.

The sifting process is completed using a time-varying filter technique. The local cut-off frequency is adaptively designed by fully utilizing the instantaneous amplitude and frequency information. Then nonuniform B-spline approximation is adopted as a time-varying filter. To solve the intermittence problem, a cut-off frequency realignment algorithm is also introduced. Aimed at improving performance at low sampling rates, a bandwidth criterion for the intrinsic mode function (IMF) is proposed. TVF-EMD is fully adaptive and suitable for the analysis of linear and non-stationary signals. Compared with EMD, the proposed method improves frequency separation performance as well as stability at low sampling rates. Besides, the proposed method is robust against noise interference.

TVF-EMD is from http://www.sciencedirect.com/science/article/pii/S0165168417301135
To use this code, please cite our work: Li, Heng, Zhi Li, and Wei Mo. "A time varying filter approach for empirical mode decomposition." Signal Processing 138 (2017): 146-158.

Programs to:
1. Plot a Triangular Patch
2. Draw a Parametric Surface
3. Draw a Ruled Surface
4. Draw a Parametric Line
5. Draw a Parametric Ellipse
6. Draw an Inclined Parametric Ellipse
7. Draw a Parametric Circle
8. Midpoint Line Algorithm
9. Midpoint Circle Algorithm
10. Plot a Parallel Line (Parametric)
11. Plot a Hermite Curve
12. Inscribe an Ellipse/Circle Inside a Rectangle
13. DDA Line Algorithm
14. Plot a Bezier Curve
15. Plot a Bezier Surface
16. Plot a B-Spline Curve

Computes the H-infinity optimal causal filter (indirect B-spline filter) for the cubic spline.

[INPUT] d: delay
[OUTPUT] psi: the optimal filter psi(z) as a TF object; gopt: optimal value

This file is based on the following paper: M. Nagahara and Y. Yamamoto, H-infinity optimal approximation for causal spline interpolation, Signal Processing, Vol. 91, No. 2, pp. 176-184, 2011.

Usage: [mfRefinedMesh, mnTriangulation] = LoopSubdivisionLimited(mfMeshPoints, mnTriangulation, fMinResolution, vbBoundaryEdges)

This function subdivides surface meshes using the Loop subdivision algorithm [1]. The algorithm is based on B-spline curve continuity, leading to good shape-maintaining smoothing of a surface, and attempts to leave the boundary of the surface essentially undistorted.

'mfMeshPoints' is an Nx3 matrix, each row of which ['x' 'y' 'z'] defines a point in three-dimensional space. 'mnTriangulation' is an Mx3 matrix, each row of which ['m' 'n' 'p'] defines a triangle on the surface, where 'm', 'n' and 'p' are indices into 'mfMeshPoints'. 'fMinResolution' defines the desired minimum length of an edge in the final subdivision; edges shorter than 'fMinResolution' will not be divided further. The optional argument 'vbBoundaryEdges' identifies which edges should be treated as boundary edges (whose locations the algorithm should attempt to maintain). If it is not supplied, this argument will be calculated by the algorithm.

'mfRefinedMesh' will be a Px3 matrix, each row of which specifies a vertex in the subdivided mesh. 'mnTriangulation' will be an Rx3 matrix, each row of which specifies a surface triangle in the subdivided mesh.

*ROOM FOR IMPROVEMENT* If you work out how to maintain the vertex and edge adjacency matrices through a full subdivision run, then great! That would speed up subsequent runs a great deal, since a lot of the time is spent computing the edge adjacency matrix.

References
[1] Loop, C 1987. "Smooth subdivision surfaces based on triangles." M.S. Mathematics thesis, University of Utah. http://research.microsoft.com/en-us/um/people/cloop/thesis.pdf

We propose the PRIMOR method, which combines image reconstruction and motion estimation in a single algorithm. It extends previous prior-based reconstruction methods by including a model of the motion between consecutive frames in the cost functional. The resulting optimization problem is efficiently solved with the split Bregman formulation. Motion is estimated using a nonrigid registration method based on hierarchical B-splines. In this paper we compare PRIMOR with a prior-based reconstruction algorithm for respiratory gated CT, obtaining a significant reduction of artefacts and improved image quality. Data, code and results for the prior- and motion-based reconstruction (PRIMOR) method for respiratory gated CT are provided. If you use this code, please reference the publication: JFPJ Abascal et al. A novel prior- and motion-based compressed sensing method for small-animal respiratory gated CT. PLOS ONE 9;11(3):e0149841, 2016. If you need to contact the author, please do so at mabella@hggm.es or juanabascal78@gmail.com.

Using cubic B-splines, the natural cubic spline is calculated assuming equally spaced nodes. It is formatted so that it functions in a similar manner to the MATLAB command "spline".
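For readers outside MATLAB, the same construction can be sketched in Python with NumPy only (function name is this sketch's own): solve the standard tridiagonal system for the interior second derivatives with the natural end conditions M_0 = M_n = 0, then evaluate piecewise.

```python
import numpy as np

def natural_cubic_spline(xd, yd):
    """Natural cubic spline for equally spaced nodes xd (constant step h).
    Returns a callable evaluating the spline, analogous in spirit to
    MATLAB's spline() but with natural end conditions (S'' = 0 at ends)."""
    n = len(xd) - 1
    h = xd[1] - xd[0]
    # Tridiagonal system for interior second derivatives M_1..M_{n-1}
    # (dense here for brevity): M_{i-1} + 4 M_i + M_{i+1} = 6 d2y_i / h^2
    A = (np.diag(np.full(n - 1, 4.0))
         + np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1))
    rhs = 6.0 / h**2 * (yd[:-2] - 2 * yd[1:-1] + yd[2:])
    M = np.zeros(n + 1)
    M[1:n] = np.linalg.solve(A, rhs)     # M_0 = M_n = 0 (natural)

    def s(x):
        i = np.clip(np.searchsorted(xd, x) - 1, 0, n - 1)
        t = x - xd[i]
        return (yd[i]
                + t * ((yd[i+1] - yd[i]) / h - h / 6 * (2 * M[i] + M[i+1]))
                + t**2 * M[i] / 2
                + t**3 * (M[i+1] - M[i]) / (6 * h))
    return s
```

The returned callable interpolates the data at the nodes and has zero curvature at both endpoints.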

Implementation of the Borges-Pastva algorithm for fitting a single-segment Bezier curve to an ordered set of data so as to minimize the total least squares distance. See: C. F. Borges and T. Pastva, "Total least squares fitting of Bézier and B-spline curves to ordered data," Computer Aided Geometric Design 19 (4), 275-289.
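To give a flavor of the fitting problem, here is a Python sketch of the *linear* subproblem only: a least-squares Bezier fit with a fixed chord-length parameterization. (The full Borges-Pastva algorithm additionally optimizes the parameter values themselves; the function name below is this sketch's own.)

```python
import numpy as np
from math import comb

def fit_bezier(points, degree=3):
    """Least-squares fit of a single Bezier segment to ordered 2D data,
    using a fixed chord-length parameterization. Returns the control
    points and the parameter values used."""
    pts = np.asarray(points, float)
    # chord-length parameters in [0, 1]
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    # Bernstein design matrix: one column per control point
    B = np.column_stack([comb(degree, k) * t**k * (1 - t)**(degree - k)
                         for k in range(degree + 1)])
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl, t
```

For data lying on a straight line the cubic fit is exact, since the Bernstein basis of degree 3 contains all linear functions.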

The main file interpMatrix.m in this package creates a sparse Toeplitz-like matrix representing a regularly spaced interpolation operation between a set of control points. The user can specify the interpolation kernel, the number of control points, the spacing between the control points, and certain boundary conditions governing the behavior at the first and last control point. The tool has obvious applications to interpolation, curve fitting, and signal reconstruction. More generally, the ability to represent interpolation as a matrix is useful for minimizing cost functions involving interpolation operations: for such functions, the interpolation matrix and its transpose inevitably arise in the gradient. The file Example1D.m in the package gives an example application of the tool to upsampling/signal reconstruction using cubic B-splines with different possible boundary conditions. The screenshot above shows the output of this example, and illustrates how improved signal reconstruction is obtained using boundary extrapolation by mirroring. Although the matrix generated by interpMatrix() is for 1D interpolation, it can be generalized to n-dimensional tensorial interpolation using kron(). However, a more efficient alternative to kron() is this tool: http://www.mathworks.com/matlabcentral/fileexchange/25969-efficient-object-oriented-kronecker-product-manipulation whose usage in conjunction with interpMatrix() is illustrated in the file Example2D.m, a generalization of Example1D.m to two dimensions.

USAGE: T = interpMatrix(kernel, origin, numCtrlPoints, CtrlPointSep, extraprule)

out: T: sparse output matrix. The columns of T are copies of a common interpolation kernel (with adjustments for boundary conditions), shifted in increments of the CtrlPointSep parameter (see below) to the different control point locations. The result is that if x is a vector of coefficients, then T*x is the interpolation of these coefficients on the interval enclosed by the control points.
in: kernel: vector containing samples of an interpolation function, shifted copies of which will be used to create the columns of T. This vector never needs to be zero-padded; zero-padding is derived automatically from the other input arguments below. origin: index i such that kernel(i) is located at the first control point. It is also possible to specify the origin using the following string options: 'max': origin i will be selected where kernel(i) is maximized; 'ctr': origin i will be selected as ceil((length(kernel)+1)/2). numCtrlPoints: number of control points in the system. CtrlPointSep: a strictly positive integer indicating the number of samples between control points. extraprule: initially, the shifted copies of "kernel" form the columns of T; the columns are then modified to satisfy the edge conditions indicated by the "extraprule" parameter. Options for this parameter are the strings 'zero', 'mirror', 'rep', 'circ', or 'allcontrib'. These are explained in the help doc.
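The core construction (shifted kernel copies as matrix columns) is easy to sketch. Below is a Python/SciPy illustration of the idea, implementing only the 'zero' boundary rule; it is a sketch of the concept, not a port of interpMatrix.m:

```python
import numpy as np
from scipy import sparse

def interp_matrix(kernel, origin, num_ctrl, sep):
    """Sparse matrix whose columns are copies of `kernel`, shifted by `sep`
    samples per control point (zero boundary rule only). If x holds control
    coefficients, T @ x interpolates them on the enclosed interval."""
    n_rows = (num_ctrl - 1) * sep + 1
    rows, cols, vals = [], [], []
    for j in range(num_ctrl):
        shift = j * sep - (origin - 1)      # 1-based origin, as in MATLAB
        for k, v in enumerate(kernel):
            r = k + shift
            if 0 <= r < n_rows and v != 0:
                rows.append(r)
                cols.append(j)
                vals.append(v)
    return sparse.csr_matrix((vals, (rows, cols)),
                             shape=(n_rows, num_ctrl))
```

With a tent kernel [0.5, 1.0, 0.5] centered at origin 2 and spacing 2, the columns form a partition of unity on the enclosed interval, so T applied to a constant coefficient vector reproduces that constant.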

Nonuniform Cubic Spline Interpolation. The program works for any combination of first- or second-derivative end conditions (so, as special cases, it includes natural and clamped cubic splines). If you want the formulas for the resulting cubics, use splineB.m.
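The same flexibility (any combination of first- or second-derivative end conditions on nonuniform nodes) is exposed by SciPy's CubicSpline, which makes for a compact cross-check of what the MATLAB program computes (assuming SciPy is available; this is an analogue, not the original code):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 0.5, 1.2, 2.0, 3.1])   # nonuniform nodes
y = np.cos(x)

# Mixed end conditions: first derivative = 0 at the left end (clamped),
# second derivative = 0 at the right end (natural). bc_type takes a pair
# of (derivative order, value) tuples, one per endpoint.
cs = CubicSpline(x, y, bc_type=((1, 0.0), (2, 0.0)))
```

Evaluating cs(x) recovers the data, and cs(x[0], 1) and cs(x[-1], 2) return the imposed end-condition values.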

Calculates the clamped cubic spline using B-splines, for equally spaced points (i.e., xd(i+1)-xd(i)=h for all i).

bsn1.m implements a zero-phase low-pass filter using a novel structure called B-Spline Networks (BSN). This function was originally developed for use with learning feedforward control (LFFC). A nice aspect is that a parametric transfer function can be obtained for the BSN. For details, see Chen Y, Moore KL, Bahl V, "Improved Path Following of USU ODIS by Learning Feedforward Controller Using Dilated B-Spline Network," IEEE International Symposium on Computational Intelligence in Robotics and Automation, Banff, Alberta, Canada, July-August 2001. The PDF file is at http://www.csois.usu.edu/publications/pdf/pub049.pdf

This function takes a set of IGES files, looks for rational B-splines (IGES entity 126) and makes a bending table out of each of the splines. If the end and start points have the same coordinates, these splines get connected. Parameters control the way the splines get discretized and simplified. For each spline the bend table is written as [filename]_bend_table_spline_[#].csv, and for each file a summary is written as [filename]_summary.txt, both in the same location as the IGES file. If there is a valid parameter file with the same name as a processed IGES file and extension .csv, the parameters from inside this file are used. If this file doesn't exist, or you agree to overwrite it, it gets written with the parameters used in this run; open the file to see the format. Parameters passed to the function overwrite the ones from the parameter file. To make things easier, export only the center lines of the hoses or tubes into the IGES file: only curves, no surfaces, no bodies etc. Use the attached demo files for Solid Edge. To get the displacement angle of elbow fittings of crimped hose assemblies, model each fitting as a line perpendicular and connected to the end of the hose center line, pointing in the direction of the fitting. Take care that the length of this line is longer than the minimal feed parameter (see below), otherwise this segment gets dropped. In the result summary, the desired value is the 'rotation angle between first and last spline segment', measured counterclockwise from the near end to the far end. See the help for usage. This program is in a very early stage and probably won't be developed further. Not much testing has occurred; it has only been tested with IGES export out of Solid Edge ST8. Use with caution, and check all results twice before using them!

Below is the script for the Skin Lesion Segmentation Algorithm. If you use this script in any way, please cite the author. This script was developed and copyrighted by Tyler L. Coye (2015). This is the third release of this tool. In this release, I improved the segmentation by applying an iterative Canny edge detection to the image mask, which has the added benefit of improving contour matching between the mask and the original lesion. The method outlined below is a novel approach to lesion segmentation. No other script, to my knowledge, utilizes principal component analysis for color-to-gray conversion or an iterative Canny edge. The threshold level calculation is also unique to this script: it is typical to divide the sum of threshold levels by 4, but for this purpose I have found it better to divide by 2. Feel free to leave a comment and let me know how it has worked for you or how you have applied it to your work/research. -Tyler

%% Hybrid Lesion Detection 2.0 is Copyrighted by Tyler Coye, 2015.
% If you use this script please notify and cite the author.
% If you have any questions regarding this script you can contact me at
% Tylerlc6@gmail.com.
% This Script Uses:
% - Iterative median filtering
% - 2-D wavelet transformation
% - 2-D inverse wavelet transformation
% - Otsu thresholding on individual dwt2 levels
% - Canny edge detection
% This is an improved version of the Hybrid Skin Lesion Detection
% Algorithm.
% The following changes were made:
% - Added color to gray via PCA (novel method applied to this type of problem)
% - Added morphological closing
% - Removed ROI cropping
% - Iterative Canny edge (novel method applied to this type of problem)

% Read image
im = im2double(imread('th3.jpg'));
% Convert RGB to gray via PCA
lab = rgb2lab(im);
f = 0;
wlab = reshape(bsxfun(@times, cat(3, 1-f, f/2, f/2), lab), [], 3);
[C, S] = pca(wlab);
S = reshape(S, size(lab));
S = S(:,:,1);
gray = (S - min(S(:))) ./ (max(S(:)) - min(S(:)));
% Morphological closing
se = strel('disk', 1);
close = imclose(gray, se);
% Complement image
K = imcomplement(close);
%% 2-D wavelet decomposition using B-spline
[cA, cH, cV, cD] = dwt2(K, 'bior1.1');
%% Otsu thresholding on each of the 4 wavelet outputs
thresh1 = multithresh(cA);
thresh2 = multithresh(cH);
thresh3 = multithresh(cV);
thresh4 = multithresh(cD);
%% Calculate new threshold from the sum of the 4 Otsu thresholds divided by 2
level = (thresh1 + thresh2 + thresh3 + thresh4)/2;
% Single-level inverse discrete 2-D wavelet transform
X = idwt2(cA, cH, cV, cD, 'bior1.1');
% Black and white segmentation
BW = imquantize(X, level);
%% Iterative Canny edge (novel method)
BW1 = edge(edge(BW, 'canny'), 'canny');
%% Post-processing
BW3 = imclearborder(BW1);
CC = bwconncomp(BW3);
S = regionprops(CC, 'Area');
L = labelmatrix(CC);
BW4 = ismember(L, find([S.Area] >= 100));
BW5 = imfill(BW4, 'holes');
%% Present final image
[B, L, N] = bwboundaries(BW5);
figure; imshow(im); hold on;
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 2);
end

This script is a prototype. It works very well on the test images I used; I included two of them in the .zip. In a later upload I am going to overlay the symmetry lines on the RGB image of the lesion, so as to better see where the lines exist. This is a continuation of my series of algorithms that have focused on skin lesions. For those interested, I have also developed an algorithm for counting colors in a skin lesion: https://nl.mathworks.com/matlabcentral/fileexchange/50872-function-for-counting-colors-in-a-skin-lesion and an algorithm that simply segments the skin lesion: https://nl.mathworks.com/matlabcentral/fileexchange/50698-a-hybrid-skin-lesion-segmentation-tool--using-pca-and-iterative-canny-edge These scripts are good starts toward a fully automatic skin cancer diagnostic tool based on the ABCD criteria. This script in particular can be modified to find lines of symmetry or similarity in any binary image. Below is my script for determining the number of symmetry/similarity lines in a skin lesion. This is the first time the Jaccard index has been applied to skin lesions. Feel free to comment and let me know how you have used it in your work or research. This algorithm is copyrighted by Tyler L. Coye (2015); commercial use or publication is not authorized without the author's approval.

Here are the general steps in the script below:
1. Input an RGB image of a skin lesion.
2. The skin lesion is segmented using a method I developed before, which can be found on File Exchange.
3. A binary mask is created and the skin lesion is cropped out.
4. The cropped image is rotated 2 degrees at a time from 0 to 360.
   - At each 2-degree increment the image is flipped from left to right, resulting in A and A'.
   - A Jaccard index and distance are calculated for A and A'.
   - Depending on the distance (< .072, i.e., > 92.8% similarity), a symmetry line is either drawn or not.
   - This is done until the image has finished rotating.
5. The final count is output in the command window and the symmetry lines are drawn on the rotated images to show where they exist. NOTE: generally there are degrees that represent the same symmetry line; to overcome this, I divided the count by 2 to arrive at the final count.

Your input should be an RGB image. For best results, it should be of a skin lesion of moderate size, say 300 x 300 or so; larger images should be scaled down. If your skin lesion has hair, I would add the razor filter to the script below. This script does not work on skin lesions obstructed by hair.

clear all
%% ***PART I - SEGMENTATION***
% Read image
I = imread('test3.jpg');
im = im2double(I);
% Convert RGB to gray via PCA
lab = rgb2lab(im);
f = 0;
wlab = reshape(bsxfun(@times, cat(3, 1-f, f/2, f/2), lab), [], 3);
[C, S] = pca(wlab);
S = reshape(S, size(lab));
S = S(:,:,1);
gray = (S - min(S(:))) ./ (max(S(:)) - min(S(:)));
% Morphological closing
se = strel('disk', 1);
close = imclose(gray, se);
% Complement image
K = imcomplement(close);
% 2-D wavelet decomposition using B-spline
[cA, cH, cV, cD] = dwt2(K, 'bior1.1');
%% Otsu thresholding on each of the 4 wavelet outputs
thresh1 = multithresh(cA);
thresh2 = multithresh(cH);
thresh3 = multithresh(cV);
thresh4 = multithresh(cD);
% Calculate new threshold from the sum of the 4 Otsu thresholds divided by 2
level = (thresh1 + thresh2 + thresh3 + thresh4)/2;
% Single-level inverse discrete 2-D wavelet transform
X = idwt2(cA, cH, cV, cD, 'bior1.1');
% Black and white segmentation
BW = imquantize(X, level);
% Iterative Canny edge (novel method)
BW1 = edge(edge(BW, 'canny'), 'canny');
% Post-processing
BW3 = imclearborder(BW1);
CC = bwconncomp(BW3);
S = regionprops(CC, 'Area');
L = labelmatrix(CC);
BW4 = ismember(L, find([S.Area] >= 100));
BW51 = imfill(BW4, 'holes');
BW5 = imcomplement(BW51);
imgbw = bwlabel(BW51);
ss_mask = bwlabel(imgbw);
stats = regionprops(ss_mask, 'BoundingBox', 'Area');
Astats = [stats.Area];
idx = find(Astats == max(Astats));
ss_mask = ismember(ss_mask, idx);
out_img = imcrop(BW51, stats(idx).BoundingBox);
%% Search for number of symmetry lines using the Jaccard similarity coefficient
count = 0;
for k = 0:2:360
    B = imrotate(out_img, k);
    C = fliplr(B);
    intersection = B & C;
    union = B | C;
    numerator = sum(intersection(:));
    denominator = sum(union(:));
    jaccardIndex = numerator/denominator;
    jaccardDistance = 1 - jaccardIndex;
    if jaccardDistance < .072 % you can change this value (the closer to zero, the more similar)
        count = count + 1;
        figure; hold on;
        D = imresize(B, 2);
        [m, n] = size(D);
        imshow(D);
        line([m/2, m/2], [0, n], 'Color', 'r', 'LineWidth', 2);
        text(0, -12, strcat('\color{blue}\fontsize{16}Degrees:', num2str(k)));
        hold off
    end
end
final_count = count/2 % generally you will get 2 k's that correspond to the same line, so divide by 2
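The rotate/flip/Jaccard loop at the heart of the script is language-agnostic. Here is a Python sketch of just that loop (the function name and default threshold are this sketch's own; rotation is done with scipy.ndimage instead of imrotate):

```python
import numpy as np
from scipy.ndimage import rotate

def symmetry_line_count(mask, step=2, max_distance=0.072):
    """Count symmetry lines of a binary mask via the Jaccard distance
    between each rotation and its left-right mirror. Degrees come in
    pairs per physical line, hence the final division by 2."""
    count = 0
    for k in range(0, 360, step):
        b = rotate(mask.astype(float), k, reshape=False, order=0) > 0.5
        c = np.fliplr(b)                         # A and A'
        union = np.logical_or(b, c).sum()
        if union == 0:
            continue
        jaccard_distance = 1.0 - np.logical_and(b, c).sum() / union
        if jaccard_distance < max_distance:      # similar enough: a line
            count += 1
    return count // 2
```

A centered disk, being mirror-symmetric at every rotation, yields a positive count; an asymmetric blob yields far fewer hits.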

Similar to "bsn1.m", "bsn2.m" provides dilation 2 in the B-Spline network (BSN) which are used as a new way of performing approximate zero-phase low pass filtering.The transfer function of the dilated BSN filter can be derived with one parameter.For more details, check the paperhttp://www.csois.usu.edu/publications/pdf/pub049.pdf

Given a duration and a frequency, this function can rapidly generate signals of different waveform types. The user may also optionally gate the signal on and off with a raised cosine ramp, as well as specify the starting phase and/or sample frequency.

oscillator(wavetype,duration,frequency)

Input arguments:
wavetype (case-insensitive string): 'Sinusoid', 'Triangle', 'Square', 'Sawtooth', 'Reverse Sawtooth', 'Linear Sweep', 'Log Sweep', 'FM signal' or 'fm', 'Click Train' or 'click', 'White Noise' or 'white', 'Octave Band' or 'octave', 'Half Octave' or 'half', 'Third Octave' or 'third', 'Quarter Octave' or 'quarter', 'Brown Noise' or 'brown', 'Pink Noise' or 'pink', 'Blue Noise' or 'blue', 'Violet Noise' or 'violet', 'Grey Noise' or 'grey', 'Speech Noise' or 'speech'
duration (in seconds)
frequency (in Hz)
NOTE: linear and log sweeps require a [start stop] frequency vector; 'FM signal' requires a num_samples-long vector of frequencies.

Optional input arguments: oscillator(wavetype,duration,frequency,gate,phase,sample_freq)
gate (in seconds): duration of a raised cosine on/off ramp
phase (in radians): starting phase of the waveform
sample_rate (in samples): 44100 is the default; custom rates are possible

Examples:
wave = oscillator('Sinusoid',1,1000); % simple pure tone at 1000 Hz
wave = oscillator('Sawtooth',2,440); % 2 second sawtooth at 440 Hz
wave = oscillator('Pink Noise',1); % 1 second of pink (1/f) noise
wave = oscillator('Linear Sweep',2,[440 880]); % linear sweep from 440 to 880 Hz
wave = oscillator('Log Sweep',2,[20 20000],.01); % ramped on/off log sweep
wave = oscillator('FM',1,freq_vector); % signal changing in freq over time
wave = oscillator('White Noise',1,[],0.1); % ramped on and off noise
wave = oscillator('Half Octave',1,440); % half-octave noise, 440 Hz centre
wave = oscillator('Sinusoid',1,220,0,pi/2,48000); % pure tone with a starting phase of 90 degrees and sample rate set to 48000
Omitting 'wavetype' sets it to sinusoid, omitting 'duration' sets it to one second, and omitting 'frequency' sets it to 440 Hz. Gate is set to 0, phase to 0 and sample rate to 44100. All output waves are scaled to a peak absolute value of 1.0. For the 'FM signal' option, the function should be provided with a vector of frequencies (in Hz) that has the same duration and sample frequency as the desired signal. Failing this, the function will *attempt* to resample the frequency vector using spline interpolation. YMMV. Note that the 'grey noise' is *generic* grey and is based on the ISO 66-phon equal-loudness contour. (c) W. Owen Brimijoin - MRC Institute of Hearing Research. Tested on Matlab R2011b and R14.
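The raised-cosine gating and peak normalization described above are simple to reproduce. Here is a Python sketch for the sinusoid case only (names and defaults are this sketch's own, not oscillator's API):

```python
import numpy as np

def gated_tone(duration, frequency, gate=0.01, phase=0.0, fs=44100):
    """Pure tone with raised-cosine on/off ramps of length `gate`
    seconds, scaled to a peak absolute value of 1.0."""
    n = int(round(duration * fs))
    t = np.arange(n) / fs
    wave = np.sin(2 * np.pi * frequency * t + phase)
    ramp = int(round(gate * fs))
    if ramp > 0:
        # half-cosine envelope rising from 0 to 1 over `ramp` samples
        env = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
        wave[:ramp] *= env          # fade in
        wave[-ramp:] *= env[::-1]   # fade out
    return wave / np.max(np.abs(wave))
```

gated_tone(0.1, 1000) returns 4410 samples starting and ending at zero amplitude, with peak magnitude exactly 1.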

The NURBS Toolbox official site is http://www.aria.uklinux.net/nurbs.php3

% NURBS Toolbox.
% Version 1.0
%
% demos - NURBS demonstrations
%
% nrbmak - Construct a NURBS from control points and knots.
% nrbtform - Apply scaling, translation or rotation operators.
% nrbkntins - Knot insertion/refinement.
% nrbdegelev - Degree elevation.
% nrbderiv - NURBS representation of the derivative.
% nrbdeval - Evaluation of the NURBS derivative.
% nrbkntmult - Find the multiplicity of a knot vector.
% nrbreverse - Reverse the evaluation direction of the NURBS.
% nrbtransp - Swap U and V for a NURBS surface.
% nrbline - Construct a straight line.
% nrbcirc - Construct a circular arc.
% nrbrect - Construct a rectangle.
% nrb4surf - Surface defined by 4 corner points.
% nrbeval - Evaluation of a NURBS curve or surface.
% nrbextrude - Extrude a NURBS curve along a vector.
% nrbrevolve - Construct a surface by revolving a profile.
% nrbruled - Ruled surface between two NURBS curves.
% nrbcoons - Construct a Coons bilinearly blended surface patch.
% nrbplot - Plot a NURBS curve or surface.
%
% bspeval - Evaluate a univariate B-spline.
% bspderiv - B-spline representation of the derivative.
% bspkntins - Insert a knot or knots into a univariate B-spline.
% bspdegelev - Degree elevation of a univariate B-spline.
%
% vecnorm - Normalise the vectors.
% vecmag - Magnitude of the vectors.
% vecmag2 - Squared magnitude of the vectors.
% vecangle - Alternative to atan2 (0 <= angle <= 2*pi).
% vecdot - Dot product of two vectors.
% veccross - Cross product of two vectors.
% vecrotx - Rotation matrix around the x-axis.
% vecroty - Rotation matrix around the y-axis.
% vecrotz - Rotation matrix around the z-axis.
% vecscale - Scaling matrix.
% vectrans - Translation matrix.
%
% deg2rad - Convert degrees to radians.
% rad2deg - Convert radians to degrees.

Below is the script for the function Cscore, which I developed. Cscore segments and extracts a skin lesion, then outputs the final color score based on the ABCD criteria. Copyrighted by Tyler L. Coye (2015). If you use this script in your research in any way, please cite me as the author. YOU WILL NEED imoverlay.m, which can be found here: https://www.mathworks.com/matlabcentral/fileexchange/50839-a-novel-retinal-blood-vessel-segmentation-algorithm-for-fundus-images/content/sample/imoverlay.m

function [Finalscore, A] = Cscore(image)
%% ***PART I - SEGMENTATION***
% Read image
I = imread(image);
I = imresize(I, [200 200]);
im = im2double(I);
% Convert RGB to gray via PCA
lab = rgb2lab(im);
f = 0;
wlab = reshape(bsxfun(@times, cat(3, 1-f, f/2, f/2), lab), [], 3);
[C, S] = pca(wlab);
S = reshape(S, size(lab));
S = S(:,:,1);
gray = (S - min(S(:))) ./ (max(S(:)) - min(S(:)));
% Morphological closing
se = strel('disk', 1);
close = imclose(gray, se);
% Complement image
K = imcomplement(close);
% 2-D wavelet decomposition using B-spline
[cA, cH, cV, cD] = dwt2(K, 'bior1.1');
%% Otsu thresholding on each of the 4 wavelet outputs
thresh1 = multithresh(cA);
thresh2 = multithresh(cH);
thresh3 = multithresh(cV);
thresh4 = multithresh(cD);
% Calculate new threshold from the sum of the 4 Otsu thresholds divided by 2
level = (thresh1 + thresh2 + thresh3 + thresh4)/2;
% Single-level inverse discrete 2-D wavelet transform
X = idwt2(cA, cH, cV, cD, 'bior1.1');
% Black and white segmentation
BW = imquantize(X, level);
% Iterative Canny edge (novel method)
BW1 = edge(edge(BW, 'canny'), 'canny');
% Post-processing
BW3 = imclearborder(BW1);
CC = bwconncomp(BW3);
S = regionprops(CC, 'Area');
L = labelmatrix(CC);
BW4 = ismember(L, find([S.Area] >= 100));
BW51 = imfill(BW4, 'holes');
BW5 = imcomplement(BW51);
% Overlay with a green outer mask
out = imoverlay(im, BW5, [0 1 0]);
%% *** PART 2 - Lesion Color Scoring Method ***
im = im2double(out);
% Split into floating-point color channels
r = im(:,:,1);
g = im(:,:,2);
b = im(:,:,3);
% Count the pixels of each reference color
Rblack = sum(sum(im(:,:,1)<.2 & im(:,:,2)<.2 & im(:,:,3)<.2));
Rwhite = sum(sum(im(:,:,1)>.8 & im(:,:,2)>.8 & im(:,:,3)>.8));
Rred = sum(sum(im(:,:,1)>.8 & im(:,:,2)<.2 & im(:,:,3)<.2));
Rlightbrown = sum(sum(im(:,:,1)>.6 & im(:,:,1)<1 & im(:,:,2)>0.32 & im(:,:,2)<0.72 & im(:,:,3)>.05 & im(:,:,3)<.45));
Rdarkbrown = sum(sum(im(:,:,1)>.2 & im(:,:,1)<.6 & im(:,:,2)>0.06 & im(:,:,2)<0.46 & im(:,:,3)>0 & im(:,:,3)<.33));
Rbluegray = sum(sum(im(:,:,1)<.2 & im(:,:,2)>0.32 & im(:,:,2)<0.72 & im(:,:,3)>.34 & im(:,:,3)<.74));
% Calculate total pixels
[rows, columns, numberOfColorChannels] = size(im);
numberOfPixels = rows*columns;
% Calculate the number of green pixels (everything outside the lesion)
Rgreen = sum(sum(im(:,:,1)==0 & im(:,:,2)==1 & im(:,:,3)==0));
% Take the difference between total and green to get total lesion pixels
tlp = numberOfPixels - Rgreen;
% Score individual colors
if (Rblack/tlp)*100 > 5, a = 1; else a = 0; end
if (Rwhite/tlp)*100 > 5, b = 1; else b = 0; end
if (Rred/tlp)*100 > 5, c = 1; else c = 0; end
if (Rlightbrown/tlp)*100 > 5, d = 1; else d = 0; end
if (Rdarkbrown/tlp)*100 > 5, e = 1; else e = 0; end
if (Rbluegray/tlp)*100 > 5, f = 1; else f = 0; end
% Build data table
Color = {'Black';'White';'Red';'Light Brown';'Dark Brown';'Blue Gray'};
Color_Count = [Rblack;Rwhite;Rred;Rlightbrown;Rdarkbrown;Rbluegray];
Percentage = [(Rblack/tlp)*100;(Rwhite/tlp)*100;(Rred/tlp)*100;(Rlightbrown/tlp)*100;(Rdarkbrown/tlp)*100;(Rbluegray/tlp)*100];
Score = [a;b;c;d;e;f];
A = table(Color_Count, Percentage, Score, 'RowNames', Color);
% Determine the final score
G = [a, b, c, d, e, f];
Finalscore = sum(G > 0);
%% Present image
[B, L, N] = bwboundaries(BW51);
figure; imshow(im); hold on;
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 2);
    hold on
    himage = imshow(im);
    set(himage, 'AlphaData', 0.5);
    text(0, -10, strcat('\color{red}\fontsize{12}Estimated Color Score:', num2str(Finalscore)));
end
end
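The scoring logic of Part 2 (count lesion pixels in each reference color range, then award one point per color exceeding 5% coverage) translates directly to any array language. A Python sketch, using the same thresholds as the script above (the function name and mask argument are this sketch's own):

```python
import numpy as np

def color_score(im, lesion_mask):
    """ABCD-style color score: one point per reference color covering
    more than 5% of the lesion pixels. Thresholds follow the MATLAB
    script above; `im` is an HxWx3 float image in [0, 1]."""
    r, g, b = im[..., 0], im[..., 1], im[..., 2]
    ranges = {
        'black':      (r < .2) & (g < .2) & (b < .2),
        'white':      (r > .8) & (g > .8) & (b > .8),
        'red':        (r > .8) & (g < .2) & (b < .2),
        'lightbrown': (r > .6) & (r < 1) & (g > .32) & (g < .72)
                      & (b > .05) & (b < .45),
        'darkbrown':  (r > .2) & (r < .6) & (g > .06) & (g < .46)
                      & (b > 0) & (b < .33),
        'bluegray':   (r < .2) & (g > .32) & (g < .72)
                      & (b > .34) & (b < .74),
    }
    total = lesion_mask.sum()
    if total == 0:
        return 0
    return sum(int((m & lesion_mask).sum() / total * 100 > 5)
               for m in ranges.values())
```

A uniformly dark test image falls only in the 'black' range, giving a score of 1.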

Shape Context is a method to obtain a unique descriptor (feature vector) for every point of an object contour or surface. This descriptor is used in combination with a B-spline free-form deformation grid for fully automatic creation of point mappings between surfaces of patient datasets (2D/3D). The 2D example will create a corresponding point model (PCM) for a set of 10 2D hand contours. The 3D example will create a PCM for a set of 10 3D triangulated jaw surfaces. There are also examples using the PCMs to train and use 2D/3D Active Shape Models (ASM) and Active Appearance Models (AAM) (folder "ActiveModels_version7"). The 2D example takes a couple of minutes; the 3D example takes about 7 hours and requires 64-bit Matlab. The non-rigid mapping between datasets is kept diffeomorphic to prevent mesh folding, but the optimizer doesn't succeed in all cases of the 3D example (maybe in the next update). Notes: - The examples will compile some C-coded functions into MEX files; in case of failure, slower Matlab-coded functions will be used. - Most functions in this zip archive are also available as standalone files on the File Exchange, and those copies may be newer/updated.

Quadratic spline interpolation, with the first spline linear, is provided by this code. In the first two lines the user inserts the data points (x), (y) and then executes the program. The output of this code is the coefficients of the quadratic equations in a vector (Coeff), sorted as [a1;b1;c1;a2;b2;c2;...;an;bn;cn]. A generated plot shows the interpolated data (red circles) and the quadratic splines.
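The underlying linear system is small: with n intervals there are 3n unknowns (a_i, b_i, c_i), matched by 2n interpolation conditions, n-1 derivative-continuity conditions at the interior knots, and the extra condition a_1 = 0 that makes the first spline linear. A Python sketch of that system (function name is this sketch's own):

```python
import numpy as np

def quadratic_spline_coeffs(x, y):
    """Coefficients [a_i, b_i, c_i] of s_i(t) = a_i t^2 + b_i t + c_i
    on each interval [x_i, x_{i+1}], with the first segment forced
    linear (a_1 = 0)."""
    n = len(x) - 1
    A = np.zeros((3 * n, 3 * n))
    r = np.zeros(3 * n)
    for i in range(n):               # each piece interpolates its endpoints
        A[2*i, 3*i:3*i+3] = [x[i]**2, x[i], 1.0]
        r[2*i] = y[i]
        A[2*i+1, 3*i:3*i+3] = [x[i+1]**2, x[i+1], 1.0]
        r[2*i+1] = y[i+1]
    for i in range(n - 1):           # C1 continuity at interior knots
        A[2*n + i, 3*i:3*i+2] = [2*x[i+1], 1.0]
        A[2*n + i, 3*(i+1):3*(i+1)+2] = [-2*x[i+1], -1.0]
    A[3*n - 1, 0] = 1.0              # a_1 = 0: first segment linear
    return np.linalg.solve(A, r).reshape(n, 3)
```

Each row of the result holds one interval's [a, b, c], in the same order as the Coeff vector described above.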

MOMS (maximal-order-minimal-support) functions give the smallest support for a given approximation order L. A stringent support requirement is critical to real-time signal processing, which is why sinc (the kernel that gives ideal reconstruction) is not used in practice. B-spline based interpolating kernels are usually used in spline interpolation; MOMS functions are constructed from B-spline functions. Here we provide an implementation of O-MOMS (optimal MOMS), which outperforms B-spline kernels of the same degree. This implementation uses the DTFT to compute the coefficients in the prefiltering step [Thevenaz 2000]. For the boundary condition we assume periodicity, which is scheduled to be changed to mirroring in the next release. Degrees 0 through 5 are supported. Related papers: T. Blu et al., Minimal Support Interpolators with Optimum Approximation Properties, ICIP 1998; MOMS: Maximal-Order Interpolation of Minimal Support, IEEE Transactions on Image Processing, Vol. 10, No. 7, 2001; Philippe Thevenaz et al., Interpolation Revisited, IEEE Transactions on Medical Imaging, Vol. 19, No. 7, 2000.
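Under periodic boundary conditions, the transform-domain prefilter amounts to a pointwise division by the kernel's DFT. The following Python sketch shows the idea using the cubic B-spline's integer samples [1/6, 4/6, 1/6] for concreteness (O-MOMS kernels have their own integer samples; the function name is this sketch's own):

```python
import numpy as np

def periodic_prefilter(samples, taps):
    """Interpolation coefficients under periodic boundary conditions,
    computed by division in the DFT domain (a discrete analogue of the
    DTFT prefiltering step). `taps` are the kernel's integer samples,
    e.g. [1/6, 4/6, 1/6] for the cubic B-spline."""
    n = len(samples)
    kernel = np.zeros(n)
    half = len(taps) // 2
    for k, t in enumerate(taps):
        kernel[(k - half) % n] = t          # center the kernel at index 0
    return np.real(np.fft.ifft(np.fft.fft(samples) / np.fft.fft(kernel)))
```

Circularly convolving the returned coefficients with the kernel recovers the original samples exactly, which is precisely the interpolation condition the prefilter enforces.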

This function "refinepatch" can refine any triangular mesh surface ( patch) with 4-split spline interpolation, see screenshot.Literature:The spline interpolation of the face edges is done by the Opposite Edge Method, described in: "Construction of Smooth Curves and Surfaces from Polyhedral Models" by Leon A. ShirmanHow it works:The tangents (normals) and velocity on the edge points of all edges are calculated. Which are later used for b-spline interpolation when splitting the edges.A tangent on an 3D line or edge is under defined and can rotate along the line, thus an (virtual) opposite vertex is used to fix the tangent and make it more like a surface normal.B-spline interpolate a half way vertices between all existing vertices using the velocity and tangent from the edge points. After splitting a new face list is constructed with 4 times as many faces.Implementation:Some Matlab files are also available as MEX files to allow quick refinement of large meshes.Please Leave a comment, if you find a bug, like the code or know improvements.

%Fits the so-called restricted cubic spline via least squares (see Harrell
%(2001)). The obtained spline is linear beyond the first and the last
%knot. The truncated power basis representation is used. That is, the
%fitted spline is of the form:
%f(x)=b0+b1*x+b2*(x-t1)^3*(x>t1)+b3*(x-t2)^3*(x>t2)+...
%where t1, t2,... are the desired knots.
%95% confidence intervals are provided based on the bootstrap procedure.
%For more information see also:
%Frank E. Harrell Jr, Regression Modeling Strategies (With applications to
%linear models, logistic regression and survival analysis), 2001,
%Springer Series in Statistics, pages 20-21.
%
%INPUT ARGUMENTS:
%x: A vector containing the covariate values x.
%y: A vector of length(x) that contains the response values y.
%knots: A vector of points at which the knots are to be placed.
% Alternatively, it can be set to 'prc3', 'prc4', ..., 'prc8' and 3
% or 4 or ... 8 knots placed at equally spaced percentiles will be used.
% It can also be set to 'eq3', 'eq4', ..., 'eq8' to use 3 or 4 or ...
% or 8 equally spaced knots. There is a difference between using one of
% these strings to define the knots and passing them directly as a
% vector of numbers; the difference involves only the bootstrap option
% and not the fit itself. When the bootstrap is used and the knots are
% passed in as numbers, the knot sequence is considered fixed as
% provided by the user for each bootstrap iteration. If a string as
% above is used, the knot sequence is re-evaluated for each bootstrap
% sample based on this choice.
%
%OPTIONAL INPUT ARGUMENTS (these can be left out, or set as [] to
%proceed to the next optional input argument):
%
%bootsams: The number of bootstrap samples if the user wants to derive 95% CIs.
%atwhich: A vector of x values at which the CIs of f(x) are to be evaluated.
%plots: If set to 1, it returns a plot of the spline and the data;
% otherwise it is ignored. This input argument can also be left out.
% (It also plots the CIs, provided that they are requested.)
%
%OUTPUT ARGUMENTS:
%bhat: the estimated spline coefficients.
%f: a function handle from which you can evaluate the spline value at a
% given x (which can be a scalar or a vector). For example f(2) will
% yield the spline value for x=2. You can use a vector (grid) of x
% values to plot f(x) by requesting plot(x,f(x)).
%sse: equal to sum((y-f(x)).^2).
%knots: the knots used for fitting the spline.
%CI: 95% bootstrap-based confidence intervals.
% Obtained only if the bootstrap is requested, and only for the
% points at which the CIs were requested. Hence, CI is a three-column
% matrix whose first column is the spline value at the points
% supplied by the user, and whose second and third columns are
% respectively the lower and upper CI limits for those points.
%
%References: Frank E. Harrell, Jr. Regression Modeling Strategies (With
%applications to linear models, logistic regression, and survival
%analysis). Springer 2001.
%
%Code author: Leonidas E. Bantis,
%Dept. of Statistics & Actuarial-Financial Mathematics, School of Sciences,
%University of the Aegean, Samos Island.
%
%E-mail: leobantis@gmail.com
%Date: January 14th, 2013.
%Version: 1.
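The least-squares fit of the quoted truncated-power form is a one-liner once the design matrix is built. A Python sketch (function name is this sketch's own; note that the *restriction* to linearity beyond the last knot imposes additional linear constraints on the coefficients, which Harrell's basis builds in and which are omitted here for brevity):

```python
import numpy as np

def fit_truncated_power_spline(x, y, knots):
    """Least-squares fit of
        f(x) = b0 + b1*x + sum_j b_{j+2} * (x - t_j)^3 * (x > t_j),
    the truncated power basis form quoted above. Returns the coefficient
    vector bhat and a callable f."""
    def design(x):
        cols = [np.ones_like(x), x]
        cols += [np.clip(x - t, 0, None) ** 3 for t in knots]
        return np.column_stack(cols)
    X = design(np.asarray(x, float))
    bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return bhat, lambda xx: design(np.asarray(xx, float)) @ bhat
```

When the data are exactly linear, the fit recovers them exactly, since the basis contains 1 and x.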

This is a class for efficiently representing and manipulating N-fold Kronecker products of matrices (or of objects that behave like matrices) in terms of their operands only. Given matrices {A,B,C,D,...} and a scalar s, an object M of this class can be used to represent

Matrix = s * A kron B kron C kron D kron ... (Eq. 1)

where "A kron B" denotes kron(A,B), the Kronecker product of A and B. Internally, however, M stores the operands {s,A,B,C,D,...} separately, which is typically far more byte-compact than numerically expanding out the RHS of Eq. 1. Furthermore, many mathematical manipulations of Kronecker products are more efficient when done in terms of {s,A,B,C,D,...} separately than when done with the explicit numerical form of M as a matrix. The class overloads a number of methods and math operators in a way that exploits the Kronecker product structure accordingly. Among these methods/operators are:

mtimes (*), times (.*), transpose (.'), ctranspose ('), rdivide (./), ldivide (.\), mldivide (\), mrdivide (/), inv, pinv, power, mpower, norm, sum, cond, eig, svd, abs, nnz, orth, chol, lu, qr, full, sparse, ...

Some restrictions apply to these overloads. In particular, bi-operand math operations involving two KronProd objects, e.g. M1*M2, typically require the operands of each KronProd to be of compatible sizes. However, I find these restrictions to be satisfied often in applications. Consult "help KronProd/methodname" for more info on each method. Optionally also, read krontest.m for demonstrations of their use.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
EXAMPLE #1: A primary application of this class is to efficiently perform separable tensorial operations, i.e., where a linear transform is applied to all columns of an array, then all rows, and so on.
The following example is a separable transformation of a 3D array X that transforms all of its columns via multiplication with a non-square matrix A, then transforms all rows by multiplication with B, then finally transforms all 3rd-dimensional axes by multiplication with C. Two approaches are compared: the first uses kron(); the second uses the KronProd class. Other operations are also shown for illustration purposes. Notice the orders-of-magnitude reduction both in CPU time and in memory consumption when using the KronProd object.

%DATA
m=25; n=15; p=40;
mm=16; nn=n; pp=10;
A=rand(mm,m);
B=pi*eye(n);
C=rand(pp,p);
s=4; % a scalar
X=rand(m,n,p);

%METHOD I: based on kron()
tic;
Matrix = s*kron(C,kron(B,A));
y1 = Matrix*X(:); %The tensorial transformation
y1 = reshape(y1,[mm,nn,pp]);
z1 = Matrix.'*y1(:);
w1 = Matrix.'\z1;
toc; %Elapsed time is 78.729007 seconds.

%METHOD II: based on KronProd object
tic;
Object = KronProd({A,pi,C},[1 2 3],[m,n,p],s); %equivalent to Matrix above
y2 = Object*X; % This operation could also have been implemented
               % as y2=reshape( Object*X(:) , [mm,nn,pp]);
z2 = Object.'*y1;
w2 = Object.'\z1;
toc % Elapsed time is 0.003958 seconds.

%%%ERROR ANALYSIS
PercentError=@(x,y) norm(x(:)-y(:),2)/norm(x(:),'inf')*100;
PercentError(y1,y2),             % = 3.0393e-012
PercentError(size(y1),size(y2)), % = 0
PercentError(z1,z2),             % = 1.3017e-012
PercentError(w1,w2),             % = 4.3409e-011

%%%MEMORY FOOTPRINT
>> whos Matrix Object
  Name         Size             Bytes      Class      Attributes
  Matrix    2400x15000      288000000      double
  Object    2400x15000           8102      KronProd

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
EXAMPLE #2: As a more practical example, the KronProd class is very useful in conjunction with the following tool for signal interpolation/reconstruction:
http://www.mathworks.com/matlabcentral/fileexchange/26292-regular-control-point-interpolation-matrix-with-boundary-conditions
An example involving 2D signal reconstruction using cubic B-splines is provided in the file Example2D.m at the above link.
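The identity that makes the operand form fast is that s*(C kron B kron A)*vec(X), with column-major vec as in MATLAB, equals applying A along dimension 1, B along dimension 2, and C along dimension 3 separately. A minimal NumPy check of that identity (this is the mathematical fact exploited by KronProd, not the class's own API; sizes are kept small so the explicit Kronecker matrix stays cheap):

```python
import numpy as np

# Small sizes so the explicit Kronecker matrix is cheap to build.
m, n, p = 5, 4, 6
mm, pp = 3, 2
A = np.random.rand(mm, m)
B = np.pi * np.eye(n)
C = np.random.rand(pp, p)
s = 4.0
X = np.random.rand(m, n, p)

# Explicit form: y = s * (C kron B kron A) * vec(X), column-major vec as in MATLAB.
M = s * np.kron(C, np.kron(B, A))
y_explicit = (M @ X.flatten(order="F")).reshape((mm, n, pp), order="F")

# Operand form: apply A to axis 0, B to axis 1, C to axis 2, without forming M.
y_factored = s * np.einsum("ia,jb,kc,abc->ijk", A, B, C, X)

assert np.allclose(y_explicit, y_factored)
```

The factored evaluation never materializes the (mm*n*pp)-by-(m*n*p) matrix, which is the source of the memory and CPU savings shown in Example #1.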

IIR Increases the size of an image by interpolation.

B = IIR(inputfile,f) returns the image stored in file 'inputfile' with resolution increased by factor f in both dimensions. 'inputfile' must be a valid graphics file (jpg, gif, tiff, etc.). It can be grayscale or color. Parameter 'f' is the size increase ratio, so to increase by 50% use f=1.5; to double the size (in each dimension) use f=2.

Additional parameters:
B = IIR(A,f,'Display','off') eliminates display of both images, the original and the modified. Default 'on'.
B = IIR(A,f,'Method',method) allows choosing between five methods of interpolation: linear, spline, pchip, cubic or v5cubic. 'method' must be a character string. Default 'linear'.

Example:
B = iir('myimage.jpg',2);

The screenshot shows the effect of increasing resolution by 3. Original size: 600x402. After: 2400x1608. (I took the photo myself. No copyright problems.)
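The underlying operation, sampling the original pixel grid at a finer spacing and interpolating, can be sketched without any image file. The following is an illustrative Python/NumPy bilinear upscaler for a grayscale array (the function name and signature are mine, not IIR's; IIR additionally supports color images and the spline/pchip/cubic/v5cubic methods):

```python
import numpy as np

def upscale(img, f):
    """Increase a 2D image's resolution by factor f per dimension (bilinear)."""
    h, w = img.shape
    H, W = int(round(h * f)), int(round(w * f))
    # Fractional sample positions in the original pixel grid.
    r = np.linspace(0, h - 1, H)
    c = np.linspace(0, w - 1, W)
    r0 = np.floor(r).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(c).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    wr = (r - r0)[:, None]   # row interpolation weights
    wc = (c - c0)[None, :]   # column interpolation weights
    top = img[np.ix_(r0, c0)] * (1 - wc) + img[np.ix_(r0, c1)] * wc
    bot = img[np.ix_(r1, c0)] * (1 - wc) + img[np.ix_(r1, c1)] * wc
    return top * (1 - wr) + bot * wr

img = np.arange(12, dtype=float).reshape(3, 4)
big = upscale(img, 2)   # 3x4 -> 6x8
```

Corner pixels map exactly onto the original corners, so the interpolated image agrees with the original at the grid extremes.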

A common request is to interpolate a set of points at fixed distances along some curve in space (2 or more dimensions). The user typically has a set of points along a curve, some of which are closely spaced, others not so close, and they wish to create a new set which is uniformly spaced along the same curve.

When the interpolation is assumed to be piecewise linear, this is easy. However, if the curve is to be a spline, perhaps interpolated as a function of chordal arc length between the points, this gets a bit more difficult. A nice trick is to formulate the problem in terms of differential equations that describe the path along the curve. Then the interpolation can be done using an ODE solver.

As an example of use, I'll pick a random set of points around a circle in the plane, then generate a new set of points that are equally spaced in terms of arc length along the curve, so around the perimeter of the circle.

theta = sort(rand(15,1))*2*pi;
theta(end+1) = theta(1);
px = cos(theta);
py = sin(theta);

% 100 equally spaced points, using a spline interpolant.
pt = interparc(100,px,py,'spline');

% Plot the result
plot(px,py,'r*',pt(:,1),pt(:,2),'b-o')
axis([-1.1 1.1 -1.1 1.1])
axis equal
grid on
xlabel X
ylabel Y
title 'Points in blue are uniform in arclength around the circle'

You can now also return a function handle to evaluate the curve itself at any point. As well, CSAPE is an option for periodic (closed) curves, as long as it is available in your MATLAB installation.

[~,~,foft] = interparc([],px,py,'spline');
foft(0:0.25:1)

ans =
    0.98319    0.18257
   -0.19064    0.98151
   -0.98493   -0.17486
    0.18634   -0.98406
    0.98319    0.18257
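The "easy" piecewise linear case mentioned above can be written directly: accumulate chord lengths, then invert that arc-length map at uniform targets. This Python/NumPy sketch is the linear variant only (function name is mine; interparc's spline/ODE machinery handles the harder curved case):

```python
import numpy as np

def interp_arclength(n, px, py):
    """Return n points uniformly spaced in piecewise-linear arc length."""
    pts = np.column_stack([px, py])
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # chord lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    t = np.linspace(0.0, s[-1], n)                       # uniform targets
    x = np.interp(t, s, pts[:, 0])                       # invert s -> position
    y = np.interp(t, s, pts[:, 1])
    return np.column_stack([x, y])

# Densely sampled unit circle; resample 50 points uniform in arc length.
theta = np.linspace(0, 2 * np.pi, 200)
pt = interp_arclength(50, np.cos(theta), np.sin(theta))
```

On the circle, consecutive output points end up (nearly) equidistant, which is exactly the property the spline-based interparc provides for curves where linear chords are too crude.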

Brief introduction
==================
nu_corrector is a tool for correcting intensity non-uniformity artifacts in images. Here, non-uniformity refers to the image artifacts of vignetting and bias (e.g. intensity inhomogeneity, illumination, etc.). This tool is an implementation of our single-image-based vignetting and bias correction systems, which rely on the sparsity property of the image gradient distribution.

In my experiments on a common computer, nu_corrector can correct vignetting in about 0.7 seconds and bias in about 0.9 seconds for an image of size 750x580.

Information
==================
"Definition of vignetting and bias":
Vignetting refers to the phenomenon of brightness attenuation away from the image center, and is an artifact that is prevalent in photography. Vignetting is generally assumed to be radially symmetric.

Bias of an image denotes the spatial variations of intensity/color caused by illumination changes for images taken with a digital camera, by an inhomogeneous magnetic field for MR images obtained with an MRI machine, or by a non-uniform X-ray beam for CT images acquired with a CT scanner. Bias is a smooth field in any of these settings, and can be represented by, for example, a bipoly model, a B-spline, etc.

"Harm of vignetting and bias":
Vignetting and bias can significantly impair computer vision algorithms that rely on precise intensity data. These include photometric methods such as shape from shading, appearance-based techniques such as object recognition and image mosaicing, and many other applications such as image segmentation, image registration, and feature extraction.
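To make the vignetting model concrete: a radially symmetric attenuation field multiplies the true image, and correction amounts to dividing by an estimate of that field. The Python sketch below only illustrates this forward/inverse model with an assumed polynomial falloff; it is not nu_corrector's gradient-sparsity estimation, which recovers the field from a single image without knowing it in advance.

```python
import numpy as np

def radial_vignetting_field(h, w, k):
    """Radially symmetric attenuation: 1 at the image center, darker outward.
    V(r) = 1 / (1 + k * r^2) is an assumed toy model, not nu_corrector's."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r2 = ((y - cy) ** 2 + (x - cx) ** 2) / (cy ** 2 + cx ** 2)  # normalized r^2
    return 1.0 / (1.0 + k * r2)

# Apply synthetic vignetting to a flat image, then invert it exactly.
img = np.full((58, 75), 100.0)
V = radial_vignetting_field(*img.shape, k=0.8)
vignetted = img * V          # corners darker than the center
corrected = vignetted / V    # dividing by the known field restores the image
```

In the real problem V is unknown; nu_corrector estimates it from the image itself using the sparsity of the gradient distribution.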

PEAKFIND general 1D peak finding algorithm
Tristan Ursell, 2013.

peakfind(x_data,y_data)
peakfind(x_data,y_data,upsam)
peakfind(x_data,y_data,upsam,gsize,gstd)
peakfind(x_data,y_data,upsam,htcut,'cuttype')
peakfind(x_data,y_data,upsam,gsize,gstd,htcut,'cuttype')

[xpeaks]=peakfind()
[xout,yout,peakspos]=peakfind()

This function finds peaks without taking first or second derivatives; rather, it uses local slope features in a given data set. The function has four basic modes.

Mode 1: peakfind(x_data,y_data) simply finds all peaks in the data given by 'x_data' and 'y_data'.

Mode 2: peakfind(x_data,y_data,upsam) finds peaks after up-sampling the data by the integer factor 'upsam' -- this allows for higher resolution peak finding. The interpolation uses a cubic spline that does not introduce fictitious peaks.

Mode 3: peakfind(x_data,y_data,upsam,gsize,gstd) up-samples and then convolves the data with a Gaussian point spread vector of length gsize (>=3) and standard deviation gstd (>0).

Mode 4: peakfind(x_data,y_data,upsam,htcut,'cuttype') up-samples the data (upsam=1 analyzes the data unmodified). The string 'cuttype' can be either 'abs' (absolute) or 'rel' (relative), and specifies a peak height cutoff which is either:

'abs' - finds peaks that lie an absolute amount 'htcut' above the lowest value in the data set.
        For (htcut > 0), peaks are found if peakheights > min(yout) + htcut.
'rel' - finds peaks that are an amount 'htcut' above the lowest value in the data set, relative to the full change in y-input values.
        For (0 < htcut < 1), peaks are found if (peakheights-min(yout))/(max(yout)-min(yout)) > htcut.

Up-sampling and convolution allow one to find significant peaks in noisy data with sub-pixel resolution. The algorithm also finds peaks in data where the peak is surrounded by zero first derivatives, i.e. where the peak is actually a large plateau.
The function outputs the x-positions of the peaks in 'xpeaks', or the processed input data in 'xout' and 'yout' with 'peakspos' as the indices of the peaks, i.e. xpeaks = xout(peakspos).

If you want the algorithm to find the position of minima, simply input '-y_data'. Peaks within half the convolution box size of the boundary will be ignored (to avoid this, pad the data before processing).

Example 1:

x_data = -50:50;
y_data = (sin(x_data)+0.000001)./(x_data+0.000001)+1+0.025*(2*rand(1,length(x_data))-1);
[xout,yout,peakspos] = peakfind(x_data,y_data);

plot(x_data,y_data,'r','linewidth',2)
hold on
plot(xout,yout,'b','linewidth',2)
plot(xout(peakspos),yout(peakspos),'g.','Markersize',30)
xlabel('x')
ylabel('y')
title(['Found ' num2str(length(peakspos)) ' peaks.'])
box on

Example 2:

x_data = -50:50;
y_data = (sin(x_data)+0.000001)./(x_data+0.000001)+1+0.025*(2*rand(1,length(x_data))-1);
[xout,yout,peakspos] = peakfind(x_data,y_data,4,6,2,0.2,'rel');

plot(x_data,y_data,'r','linewidth',2)
hold on
plot(xout,yout,'b','linewidth',2)
plot(xout(peakspos),yout(peakspos),'g.','Markersize',30)
xlabel('x')
ylabel('y')
title(['Found ' num2str(length(peakspos)) ' peaks.'])
box on
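The core idea of slope-based peak detection with an 'abs'/'rel' height cutoff can be sketched compactly. The following Python version is illustrative only: it detects sign changes of the local slope (handling flat-topped plateaus via the <= comparison) and applies the two cutoff rules quoted above; it omits peakfind's up-sampling and Gaussian convolution stages, and the names are mine.

```python
import numpy as np

def find_peaks_simple(y, htcut=None, cuttype="abs"):
    """Indices where the slope changes from positive to non-positive,
    optionally filtered by an 'abs' or 'rel' height cutoff."""
    dy = np.diff(y)
    idx = [i for i in range(1, len(y) - 1) if dy[i - 1] > 0 and dy[i] <= 0]
    if htcut is not None:
        lo, hi = y.min(), y.max()
        if cuttype == "abs":
            idx = [i for i in idx if y[i] > lo + htcut]
        else:  # 'rel': height relative to the full range of y
            idx = [i for i in idx if (y[i] - lo) / (hi - lo) > htcut]
    return np.array(idx)

# Noise-free version of the sin(x)/x + 1 signal from the examples.
x = np.linspace(-50, 50, 101)
y = np.sinc(x / np.pi) + 1      # = sin(x)/x + 1, main peak at x = 0
peaks = find_peaks_simple(y, htcut=0.5, cuttype="rel")
```

With the relative cutoff of 0.5, only the central lobe survives; dropping htcut returns every side-lobe maximum as well, mirroring peakfind's Mode 1 versus Mode 4 behavior.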