Are the functions polynomials, as in the example you mentioned?
If they are, assume the order of the polynomial is N (in your example, N = 4). You can then differentiate N-2 times, which leaves you with something like f^(N-2)(x) = alpha x^2 + beta x + gamma, and compute its zeros with the usual quadratic formula. The point here is that these zero crossings of the (N-2)-th derivative are the relative extrema of the (N-3)-th derivative. In consequence, the (N-3)-th derivative has up to three zeros: one to the left of, one between, and one to the right of the zeros found for the (N-2)-th derivative. To find these zeros, you can apply the Newton-Raphson method starting from any point inside each interval, although it may well be a better idea to first do a binary search (bisection) to get near the zero crossing and avoid divergence of the method. You repeat this, going down one derivative order at a time, until you reach the zeros of the original function.
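The cascade above can be sketched as follows. This is an illustrative Python version rather than MATLAB; the bound of ±1e6 on where roots may lie, the use of plain bisection instead of Newton-Raphson inside each interval, and the assumption of simple (sign-crossing) roots are all simplifying assumptions of this sketch, not part of the original recipe.

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given highest-degree-first coefficients (Horner's rule)."""
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

def bisect_root(f, a, b, tol=1e-12):
    """Bisection on [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if (fa < 0) != (fm < 0):
            b = m                 # root is in the left half
        else:
            a, fa = m, fm         # root is in the right half
    return 0.5 * (a + b)

def real_roots(coeffs, bound=1e6):
    """Real roots of a polynomial via the derivative cascade: the roots of the
    derivative split the line into intervals that each hold at most one root."""
    if len(coeffs) <= 1:
        return []
    if len(coeffs) == 2:          # linear base case: c0*x + c1 = 0
        return [-coeffs[1] / coeffs[0]]
    # derivative coefficients: multiply each coefficient by its power of x
    deriv = [c * p for c, p in zip(coeffs[:-1], range(len(coeffs) - 1, 0, -1))]
    crit = real_roots(deriv, bound)
    # candidate intervals: to the left of, between, and to the right of the
    # critical points (capped at +-bound, an assumed limit on root magnitude)
    pts = [-bound] + sorted(crit) + [bound]
    f = lambda x: poly_eval(coeffs, x)
    roots = []
    for a, b in zip(pts[:-1], pts[1:]):
        if (f(a) < 0) != (f(b) < 0):      # sign change: exactly one root inside
            roots.append(bisect_root(f, a, b))
    return sorted(set(round(r, 9) for r in roots))
```

For example, `real_roots([1, -6, 11, -6])` (that is, (x-1)(x-2)(x-3)) recovers the roots 1, 2 and 3 by first locating the two critical points of the cubic from its quadratic derivative.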
Recall that the derivatives can be easily computed by writing the coefficients in a vector. In your case, it would be f_0 = [1.7, -2.5, -5, 7.2, -1.2]. This could be called the derivative of order zero. The first derivative f_1 can be obtained by doing f_1 = f_0(1:end-1) .* (numel(f_0)-1:-1:1), the second derivative by f_2 = f_1(1:end-1) .* (numel(f_1)-1:-1:1), and so on. Note the parentheses around the range: without them, MATLAB's operator precedence evaluates the element-wise product before building the range, which gives the wrong result. (MATLAB's built-in polyder does the same computation.)
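The same coefficient rule, sketched in Python for illustration: drop the constant term and multiply each remaining coefficient by its power of x.

```python
def poly_derivative(coeffs):
    """Differentiate a polynomial given as highest-degree-first coefficients:
    drop the constant term, multiply each remaining coefficient by its power."""
    n = len(coeffs)
    return [c * p for c, p in zip(coeffs[:-1], range(n - 1, 0, -1))]

f0 = [1.7, -2.5, -5, 7.2, -1.2]   # 1.7x^4 - 2.5x^3 - 5x^2 + 7.2x - 1.2
f1 = poly_derivative(f0)           # first derivative:  [6.8, -7.5, -10, 7.2]
f2 = poly_derivative(f1)           # second derivative
```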
If they are not polynomials, then it is harder, because you do not know in advance how many zeros there may be. What you can do is take any two points at which the sign of the function differs and apply the Newton-Raphson method (or bisection) between them to find a zero. Then use that zero to split the interval into two new intervals, and repeat the same procedure on each. This can be done with a recursive algorithm, that is, a function that calls itself until no sign change is found. Keep in mind that this only finds zeros where the function actually crosses the axis; an even number of zeros between two points of equal sign will be missed.
Hope this helps.