If my computer history is not wrong, nobody would have used the initial guess obtained from the quadratic polynomial you provided together with the algorithm you suggested, for two reasons: floating-point division used to be much more expensive than multiplication (up to 32x, or more if implemented in software, versus roughly 4x today), and evaluating the quadratic polynomial for the initial guess costs more than one step of the Newton iteration.
In fact, when I was teaching, I had my students analyze an algorithm which computes the reciprocal of a number without using division. Here is the excerpt:
For computers, it is easy to add, subtract and multiply. However, it is harder to divide, exponentiate, take logarithms, etc. Many smart people have offered ingenious solutions to make these operations more efficient. For example, the SRT division algorithm uses lookup tables and is commonly used in microprocessor design (see the Pentium FDIV bug for a historical issue with Intel's implementation of the SRT algorithm). We can also use Newton's method to reduce division to addition, subtraction and multiplication.
a. Find a function $f(x)$ such that the root $x^*$ of $f$ is the reciprocal of a given non-zero number $d$. Newton's method for the function $f$ MUST only contain addition, subtraction and multiplication as operations.
b. Define the error $\varepsilon_i = 1 - dx_i$ and show that $\varepsilon_i = \varepsilon_{i-1}^2$; hence, the convergence of Newton's method for finding the reciprocal is quadratic.
c. Assume that $0.5\leq d\leq 1$. If we choose our initial guess $x_0 = p(d) = \frac{48}{17}-\frac{32}{17}d$, what is the maximum value of the error $\varepsilon(d) = 1 - dp(d)$ in absolute value? Show your work.
d. Assuming $0.5\leq d\leq 1$ and $x_0 = p(d) = \frac{48}{17}-\frac{32}{17}d$, what is the maximum number of steps $k$ required to ensure $\varepsilon_k<10^{-17}$?
e. Explain why we would want to avoid other stopping criteria and exclusively use the number of iterations as the stopping condition in this case.
And here is a solution:
a. $f(x) = 1/x - d$ works for this purpose, since $f(1/d) = 1/(1/d) - d = d - d = 0$ and $$x_{i+1} = x_i - f(x_i)/f^{\prime}(x_i) = x_i - \frac{1/x_i - d}{-1/x_i^2} = x_i + x_i(1-dx_i)= x_i(2-dx_i) $$
b. \begin{align*}
\varepsilon_{i+1} &= 1 - dx_{i+1} = 1 - dx_i(2-dx_i) \\
&= d^2x_i^2 - 2dx_i + 1 = (1 - dx_i)^2 \\
&= \varepsilon_i^2.
\end{align*}
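To make parts a and b concrete, here is a minimal sketch in Python (my own; the exercise does not ask for code) that runs the division-free iteration $x_{i+1} = x_i(2-dx_i)$ starting from the initial guess $p(d)$ given in the problem statement and prints the error $\varepsilon_i = 1-dx_i$ at each step. The error is squared at every iteration, so the number of correct digits roughly doubles until roundoff takes over.

```python
# Minimal sketch (not part of the exercise): division-free Newton iteration for 1/d.
d = 0.7                       # any value in [0.5, 1]
x = 48/17 - (32/17) * d       # initial guess p(d); the constants would be precomputed
for i in range(5):
    eps = 1 - d * x           # error from part b
    print(f"i = {i}: x = {x:.17f}, eps = {eps:.3e}")
    x = x * (2 - d * x)       # Newton step: only +, - and * appear
```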
c. Consider $\varepsilon(d) = 1 - dp(d)$. Since $\varepsilon(d)$ is continuous on the interval $[0.5,1]$, by the extreme value theorem it attains its extrema on this interval, and these extrema occur either at the endpoints or at critical points. At the endpoints, $\varepsilon(0.5)=1/17$ and $\varepsilon(1)=1/17$. By the product rule, $\varepsilon^{\prime}(d) = -\left(p(d) + d\,p^{\prime}(d)\right) = -\frac{48}{17}+\frac{64}{17}d$, which vanishes only at $d=3/4$, where $\varepsilon(3/4)=-1/17$. So we can conclude that the maximum value of the error in absolute value is $1/17$.
d. From part b, we know that $\varepsilon_i = \varepsilon_{i-1}^2 = \dots = \varepsilon_0^{2^i}$, and from part c, we know that $|\varepsilon_0|\leq 1/17$ for any initial guess $x_0 = p(d)$. So $|\varepsilon_i|\leq (1/17)^{2^i}$, and it suffices to have $(1/17)^{2^i}<10^{-17}$. Applying $\log_{10}$ to both sides, we obtain $2^i\log_{10}(1/17)<-17$ or, equivalently, $2^i>17/\log_{10}(17)$. Further, applying $\log_2$ to both sides, we get $i>\log_2\!\left(17/\log_{10}(17)\right)\approx 3.79$. Therefore, after $k=4$ iterations of Newton's method, the solution is guaranteed to satisfy the error tolerance.
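To double-check the bound in part d without floating-point roundoff getting in the way, the same iteration can be replayed in exact rational arithmetic. Here is a small sketch with Python's `fractions.Fraction` (my own, not part of the exercise), taking the worst case $d = 1/2$, where $|\varepsilon_0| = 1/17$:

```python
from fractions import Fraction

d = Fraction(1, 2)                            # worst case: |eps_0| = 1/17 at the endpoints
x = Fraction(48, 17) - Fraction(32, 17) * d   # initial guess p(d), exactly
for _ in range(4):                            # the k = 4 steps from part d
    x = x * (2 - d * x)                       # Newton step in exact rationals
eps4 = 1 - d * x                              # equals (1/17)**16, about 2.1e-20 < 1e-17
print(eps4, float(eps4))
```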
e. In some applications (such as this one), checking a stopping criterion can cost more than an extra iteration, and doing one or two extra iterations is acceptable. Since Newton's method for this problem converges quadratically and reaches the desired tolerance in just 4 iterations, it is probably not worth checking any other stopping criterion; a fixed iteration count suffices.
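Putting parts a, c, d and e together, a complete division-free reciprocal routine for $d \in [0.5, 1]$ would look roughly like this sketch (again my own Python, with the function name made up for illustration): evaluate the initial guess, then run exactly four Newton steps with no convergence check.

```python
def reciprocal(d):
    """Approximate 1/d for 0.5 <= d <= 1 using only +, - and * (sketch only)."""
    x = 48/17 - (32/17) * d    # initial guess, |1 - d*x| <= 1/17 (part c)
    for _ in range(4):         # fixed iteration count, no stopping test (parts d and e)
        x = x * (2 - d * x)    # Newton step (part a)
    return x
```

For a general floating-point argument one would typically first scale $d$ into $[0.5, 1]$ by adjusting its exponent (which is exact in binary arithmetic), apply the routine, and then undo the scaling on the result.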
The analysis in part c is relevant to your question: you want an initial guess that minimizes the maximum error, similar to Federico Poloni's answer. However, the polynomial you end up with depends on the error function you choose.
Also, in your question you give the coefficients as floating-point numbers, but as far as I know they would have been derived exactly (by hand) and stated as rational numbers, so that the result is robust regardless of the floating-point system. That might be another reason why we don't see the points of equioscillation, assuming Remez' algorithm was used to compute the coefficients.
(For the division algorithm, a quadratic polynomial that gives a better initial guess is, for example, $p(d):=\frac{140}{33}-\frac{64}{11}d+\frac{256}{99}d^2$. The error of this initial guess is at most $1/99 \approx 0.0101$ in absolute value, and this is the best quadratic fit in the sense of minimizing the $L_{\infty}$ error of $1-dp(d)$ over the interval $[0.5,1]$.
Again, if I remember correctly, this polynomial is obtained by applying Remez' algorithm to minimize $\max_{d\in [0.5,1]} |1-p(d)\,d|$.)
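If you want to verify that claim numerically, a brute-force check on a fine grid is enough. The following sketch (my own, plain Python) confirms that $\max_{d\in[0.5,1]}|1-d\,p(d)|$ is $1/99\approx 0.0101$ and that the error attains $\pm 1/99$ with alternating signs at $d = 0.5,\ 0.625,\ 0.875,\ 1$, the equioscillation pattern characteristic of a minimax fit.

```python
def p(d):
    # candidate quadratic initial guess for 1/d
    return 140/33 - (64/11) * d + (256/99) * d**2

grid = [0.5 + i * 0.5e-6 for i in range(1_000_001)]   # fine grid on [0.5, 1]
print(max(abs(1 - d * p(d)) for d in grid), 1/99)     # both about 0.0101
for d in (0.5, 0.625, 0.875, 1.0):                    # extrema of the error 1 - d*p(d)
    print(d, 1 - d * p(d))                            # alternates between +1/99 and -1/99
```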