# MATLAB Examples

## Contents

PMPack contains all sorts of helpful routines for evaluating and computing univariate and multivariate quadrature rules.

In this demo, we'll see the functions:

• parameter
• jacobi_matrix
• jacobi_eigenvecs

To get started, we need to understand where quadrature rules "exist" in PMPack. In brief, each quadrature rule is associated with a parameter.

    x = parameter(); % construct a parameter over [-1,1] (this is the default)
    [p,w] = gaussian_quadrature(x,5) % build a quadrature rule for that parameter

    p =
      -0.906179845938664
      -0.538469310105683
       0.000000000000000
       0.538469310105683
       0.906179845938664
    w =
       0.118463442528094
       0.239314335249683
       0.284444444444445
       0.239314335249683
       0.118463442528094

Important note: the quadrature weights in PMPack are normalized to sum to 1. Thus, to get the "true" quadrature weights, we must scale the result by two. To see this, consider the integral of the constant function 1 over the interval [-1,1]:

    [p,w] = gaussian_quadrature(parameter,5); % remake the quadrature rule
    sum(1.*w)

    ans =
       1.000000000000000

But the integral should be two! The discrepancy comes from the weight normalization. To correct it, multiply the weights by two, or, in general, by the size of the region of integration.

    sum(1.*w*2)

    ans =
       2.000000000000000

Sticking a quadrature rule into the parameter might seem a bit strange, but it's rather handy when we want to combine information from different types of parameters, such as when generating multivariate quadrature rules.

    x = parameter();
    y = parameter('legendre',1,2); % construct a parameter over [1,2]
    [p,w] = gaussian_quadrature([x,y],3)

    p =
      -0.774596669241483   1.112701665379258
      -0.774596669241483   1.500000000000000
      -0.774596669241483   1.887298334620741
      -0.000000000000000   1.112701665379258
      -0.000000000000000   1.500000000000000
      -0.000000000000000   1.887298334620741
       0.774596669241483   1.112701665379258
       0.774596669241483   1.500000000000000
       0.774596669241483   1.887298334620741
    w =
       0.077160493827161
       0.123456790123457
       0.077160493827161
       0.123456790123457
       0.197530864197531
       0.123456790123457
       0.077160493827161
       0.123456790123457
       0.077160493827161

What we get back is the tensor-product quadrature rule. Because all the necessary information is attached to each parameter, the codes return the correct points for a quadrature rule with mixed parameters.
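For readers without PMPack, the tensor-product construction can be sketched with plain NumPy. This is an independent cross-check of the numbers above, not PMPack code; `leggauss` is NumPy's Gauss-Legendre routine, and its weights sum to 2, so we renormalize to match PMPack's sum-to-1 convention.

```python
import numpy as np

# 3-point Gauss-Legendre rules on [-1,1] and on [1,2].
p1, w1 = np.polynomial.legendre.leggauss(3)     # nodes/weights on [-1,1]
p2 = 1.5 + 0.5 * p1                             # map nodes to [1,2]
w2 = 0.5 * w1                                   # rescale weights for [1,2]

# Normalize so each 1-D rule's weights sum to 1, as PMPack does.
w1n, w2n = w1 / w1.sum(), w2 / w2.sum()

# Tensor product: every pair of nodes, with the product of the weights.
P = np.array([(a, b) for a in p1 for b in p2])    # 9 x 2 grid of points
W = np.array([u * v for u in w1n for v in w2n])   # 9 product weights

print(P.shape, W.sum())  # 9 points; normalized weights sum to 1
```

The first row of `P` and first entry of `W` reproduce the first point and weight printed above.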

## The parameters themselves

We now briefly demonstrate some of the details of the parameter objects. A parameter itself is a rather simple object:

    x = parameter()

    x =
         name: 'jacobi(0,0) with support [-1,1]'
        recur: @(n)jacobi_recur(n,l,r,a,b)
            l: -1
            r: 1

It is just a MATLAB struct with a few different fields. The name field identifies the type. In this case, we created a legendre_parameter, or equivalently a jacobi(0,0) parameter; these are the same from a quadrature and orthogonal-polynomial point of view. The l and r fields identify the left and right endpoints of the parameter's support. The most interesting field is recur: a function that returns the recurrence coefficients for the orthogonal polynomials associated with this parameter.

    ab = x.recur(5)
    a = ab(:,1); b = ab(:,2);

    ab =
                       0   1.000000000000000
                       0   0.333333333333333
                       0   0.266666666666667
                       0   0.257142857142857
                       0   0.253968253968254

These recurrence coefficients define the orthogonal polynomials associated with a parameter. Writing $a_k = \texttt{ab}(k{+}1,1)$ and $b_k = \texttt{ab}(k{+}1,2)$, the orthonormal polynomials $\pi_0, \pi_1, \ldots$ satisfy the three-term recurrence

$$x\,\pi_k(x) = \sqrt{b_k}\,\pi_{k-1}(x) + a_k\,\pi_k(x) + \sqrt{b_{k+1}}\,\pi_{k+1}(x), \qquad \pi_{-1}(x) = 0, \quad \pi_0(x) = 1.$$
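For the jacobi(0,0) (Legendre) case these coefficients have a well-known closed form: $a_k = 0$ and $b_k = k^2/(4k^2-1)$, with $b_0 = 1$ because the weight is normalized to integrate to 1. A quick NumPy check against the `ab` values above (an illustration, not PMPack code):

```python
import numpy as np

# Closed-form recurrence coefficients for normalized Legendre polynomials:
# a_k = 0, b_0 = 1 (the weight integrates to 1), b_k = k^2 / (4k^2 - 1).
k = np.arange(5)
a = np.zeros(5)
b = np.where(k == 0, 1.0, k**2 / (4.0 * k**2 - 1))

print(np.column_stack([a, b]))  # matches the ab matrix printed above
```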

We can encode all of these relationships into a tridiagonal Jacobi matrix. Let $\pi(x)$ be the vector $[\pi_0(x), \pi_1(x), \ldots, \pi_{n-1}(x)]^T$. Let

$$J = \begin{bmatrix} a_0 & \sqrt{b_1} & & \\ \sqrt{b_1} & a_1 & \ddots & \\ & \ddots & \ddots & \sqrt{b_{n-1}} \\ & & \sqrt{b_{n-1}} & a_{n-1} \end{bmatrix}.$$

Then

$$x\,\pi(x) = J\,\pi(x) + \sqrt{b_n}\,\pi_n(x)\,e_n,$$

where $e_n$ is the last column of the $n \times n$ identity matrix. We let you create the matrix $J$, known as the Jacobi matrix, easily:

    J = jacobi_matrix(x,4)

    J =
                       0   0.577350269189626                   0                   0
       0.577350269189626                   0   0.516397779494322                   0
                       0   0.516397779494322                   0   0.507092552837110
                       0                   0   0.507092552837110                   0

Recall that the nodes of an $n$-point Gaussian quadrature rule are given by the zeros of the $n$-th orthogonal polynomial $\pi_n$. To find these, observe:

$$x\,\pi(x) = J\,\pi(x) + \sqrt{b_n}\,\pi_n(x)\,e_n,$$

where $\pi_n$ only appears in the final term. If $\pi_n(\lambda)$ is 0, then

$$\lambda\,\pi(\lambda) = J\,\pi(\lambda).$$

This equation is just the eigenvalue-eigenvector equation for the matrix $J$. Hence, the eigenvalues of $J$ are the zeros of $\pi_n$.

The quadrature points are given by the eigenvalues of this matrix:

    d = eig(J)

    d =
      -0.861136311594053
      -0.339981043584856
       0.339981043584856
       0.861136311594053

The quadrature weights are given by the squares of the first components of the normalized eigenvectors.

    [V,D] = eig(J)
    w = V(1,:).^2

    V =
      -0.417046067681650   0.571027650321132  -0.571027650321132   0.417046067681650
       0.622037490330198  -0.336257878158577  -0.336257878158577   0.622037490330198
      -0.571027650321132  -0.417046067681650   0.417046067681650   0.571027650321132
       0.336257878158577   0.622037490330199   0.622037490330198   0.336257878158577
    D =
      -0.861136311594053                   0                   0                   0
                       0  -0.339981043584856                   0                   0
                       0                   0   0.339981043584856                   0
                       0                   0                   0   0.861136311594052
    w =
       0.173927422568727   0.326072577431273   0.326072577431273   0.173927422568727

We have one more convenience function, which returns the matrix V:

    V2 = jacobi_eigenvecs(x,4)
    norm(V-V2)

    V2 =
      -0.417046067681650   0.571027650321132  -0.571027650321132   0.417046067681650
       0.622037490330198  -0.336257878158577  -0.336257878158577   0.622037490330198
      -0.571027650321132  -0.417046067681650   0.417046067681650   0.571027650321132
       0.336257878158577   0.622037490330199   0.622037490330198   0.336257878158577
    ans =
         0
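This eigenvalue construction is the Golub-Welsch algorithm. As a sanity check of the numbers above, here is a NumPy sketch for the same 4-point Legendre rule, compared against NumPy's own `leggauss`; this is an independent reimplementation, not PMPack code.

```python
import numpy as np

# Recurrence coefficients for normalized Legendre: a_k = 0, b_k = k^2/(4k^2 - 1).
n = 4
k = np.arange(1, n)
offdiag = np.sqrt(k**2 / (4.0 * k**2 - 1))      # sqrt(b_1), ..., sqrt(b_{n-1})
J = np.diag(offdiag, 1) + np.diag(offdiag, -1)  # symmetric tridiagonal Jacobi matrix

# Golub-Welsch: nodes are the eigenvalues, weights are the squared
# first components of the normalized eigenvectors.
d, V = np.linalg.eigh(J)
w = V[0, :]**2                                  # weights normalized to sum to 1

# Compare with NumPy's Gauss-Legendre rule, whose weights sum to 2.
p_ref, w_ref = np.polynomial.legendre.leggauss(n)
print(np.allclose(d, p_ref), np.allclose(2 * w, w_ref))
```

Note the factor of 2 in the comparison: it is exactly the weight-normalization difference discussed earlier.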

## Convergence

Let's use the quadrature tools to study the following function:

    f = @(x) sin(exp(x)).^100;

First, a plot:

    fplot(f,[-1,1])

To ascertain the accuracy of our quadrature routines, we computed a rather accurate answer in Mathematica.

    trueval = 0.1598249861160262281333129409016500342546674544621735068837516103348;

Let us see how we converge to this answer:

    ks = 1:5:100;
    errs = zeros(length(ks),1);
    for ki=1:length(ks)
        k = ks(ki);
        [p,w] = gaussian_quadrature(legendre_parameter(),k);
        val = 0;
        for pi=1:k
            val = val + f(p(pi))*2*w(pi); % notice the two here to correct for
                                          % the weight difference
        end
        errs(ki) = abs(val-trueval);
    end
    semilogy(ks,errs,'o-');
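The same experiment can be reproduced outside PMPack with NumPy's `leggauss`, whose weights already sum to 2, so no rescaling is needed. A sketch:

```python
import numpy as np

f = lambda x: np.sin(np.exp(x))**100
trueval = 0.15982498611602623  # truncated reference value from the text

ks = range(1, 101, 5)
errs = []
for k in ks:
    p, w = np.polynomial.legendre.leggauss(k)  # weights already sum to 2
    errs.append(abs(w @ f(p) - trueval))       # quadrature error for k points

print(errs[0], errs[-1])  # the error decays rapidly as k grows
```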