The first step towards proving the o-minimality of $\Ran$ is to show that the quantifier-free definable sets have finitely many connected components. As discussed in this post, this means (essentially) that we need to show that basic $\Pc{R}{X}$-sets have finitely many connected components.

**Example 1**

Let $\alpha \in \NN^n$ and $r \in (0,\infty)^n$. Show that the set $\set{x \in \RR^n:\ x^\alpha = 0} \cap B(r)$ has finitely many connected components; which are they, and how many?
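One way to get a feel for this: $x^\alpha = \prod_i x_i^{\alpha_i}$ vanishes exactly when some coordinate $x_i$ with $\alpha_i > 0$ vanishes, so the set in question is a union of slices of coordinate hyperplanes. A small Python sanity check of this equivalence on a grid of points; the exponent $\alpha = (2,0,1)$ and the helper `x_pow_alpha` are ad hoc choices, not from the post:

```python
import itertools

def x_pow_alpha(x, alpha):
    """Evaluate the monomial x^alpha = prod_i x_i^alpha_i (with 0^0 = 1)."""
    value = 1.0
    for xi, ai in zip(x, alpha):
        value *= xi ** ai
    return value

# x^alpha vanishes exactly when some coordinate x_i with alpha_i > 0 is zero,
# so the zero set inside B(r) is a union of coordinate hyperplane slices.
alpha = (2, 0, 1)                      # an arbitrary exponent in N^3
for x in itertools.product((-0.5, 0.0, 0.5), repeat=3):
    vanishes = x_pow_alpha(x, alpha) == 0.0
    hits_hyperplane = any(xi == 0.0 and ai > 0 for xi, ai in zip(x, alpha))
    assert vanishes == hits_hyperplane
```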

**Exercise**

Let $F \in \Pc{R}{X}_r$ be such that $F(0)=0$. Show that $\lim_{s \to 0} \|F\|_s = 0$.

**Example 2**

Let $F \in \Pc{R}{T}$, where $T$ is a single indeterminate. Then, by Exercise 13.3, for any $s < r$ we have $F = T^k \cdot G$, where $k = \ord(F)$ and $G \in \Pc{R}{T}_s$ is such that $G(0) \ne 0$. This means that $G(T) = a + H(T)$, for some $a \ne 0$ and $H \in \Pc{R}{T}_s$ with $H(0) = 0$. So by the exercise above, there exists $s>0$ such that $\|H\|_s < |a|$, and hence $G(t) \ne 0$, for $t \in [-s,s]$. It follows that, for $t \in [-s,s]$, we have $$F(t) = 0 \quad\text{iff}\quad t^k = 0;$$ that is, the zeroset of $F$ inside $[-s,s]$ is the same as that of the monomial $t^k$.
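To make Example 2 concrete, here is a small numerical check in Python for the (arbitrarily chosen) series $F(T) = T^2 + T^3 = T^2 \cdot (1+T)$, with $k = 2$, $G(T) = 1+T$ and $s = 1/2$:

```python
import numpy as np

# Example series (our choice): F(T) = T^2 + T^3 = T^2 * (1 + T),
# so k = ord(F) = 2 and G(T) = 1 + T, with G(0) = 1 != 0.
t = np.arange(-500, 501) / 1000.0      # grid on [-1/2, 1/2], containing 0 exactly
F = t**2 + t**3
G = 1.0 + t
assert np.all(np.abs(G) >= 0.5)        # G has no zero on [-s, s] for s = 1/2
# Hence F vanishes exactly where the monomial t^2 does, i.e. only at t = 0:
assert np.all((F == 0.0) == (t == 0.0))
```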

**Exercise 14**

Let $\alpha_1, \dots, \alpha_k \in \NN^n$, $\sigma:\{1, \dots, k\} \into \{-1,0,1\}$ and $r \in (0,\infty)^n$. Show that the set $$B\left(r,X^{\alpha_1}, \dots, X^{\alpha_k}, \sigma\right):= \set{x \in B(r):\ \sign\left(x^{\alpha_i}\right) = \sigma(i)}$$ has finitely many connected components.

By Exercise 14, the procedure of Example 2 generalizes to series in several indeterminates of the following form:

**Definition**

A series $F \in \Ps{R}{X}$ is **normal** if there exist $\alpha \in \NN^n$ and $G \in \Ps{R}{X}$ such that $G(0) \ne 0$ and $F = X^\alpha \cdot G$.
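For a polynomial (a series with finite support), normality can be tested mechanically: if $F = X^\alpha \cdot G$ with $G(0) \ne 0$, then $\alpha$ must be the coordinatewise minimum of the exponents occurring in $F$, and since $G(0)$ is the coefficient of $X^\alpha$ in $F$, that minimum must itself occur in the support. A Python sketch of this test; the helper `is_normal`, operating on the set of exponent tuples, is an ad hoc illustration:

```python
def is_normal(support):
    """Test normality of a polynomial from the set of exponent tuples of
    its nonzero terms.  If F = X^alpha * G with G(0) != 0, then alpha must
    be the coordinatewise minimum of the support, and G(0) is the
    coefficient of X^alpha in F, so alpha must itself lie in the support."""
    n = len(next(iter(support)))
    alpha = tuple(min(beta[i] for beta in support) for i in range(n))
    return alpha in support

assert not is_normal({(1, 0), (0, 1)})  # X1 + X2: min exponent (0, 0) absent
assert is_normal({(1, 0), (1, 1)})      # X1 + X1*X2 = X1 * (1 + X2)
```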

**Corollary**

*Let $F_1, \dots, F_k \in \Pc{R}{X}$ be normal and $\sigma:\{1, \dots, k\} \into \{-1,0,1\}$. Then there exists a polyradius $r$ such that the set $$B\left(r,F_1, \dots, F_k, \sigma\right):= \set{x \in B(r):\ \sign\left(F_i(x)\right) = \sigma(i)}$$ has finitely many connected components.* $\qed$

While every series in one indeterminate is normal, the polynomial $X_1 + X_2$ is not normal.

**Example 3**

The formal substitution $(X_1,X_2) \mapsto (X_1, X_1 X_2)$ changes the polynomial $X_1+X_2$ to the polynomial $X_1 + X_1X_2$, which is normal. On the other hand, inside the right half-plane $\{x_1 > 0\}$, the zeroset of $x_1+x_2$ is the image of the zeroset of $x_1 + x_1x_2$ under the homeomorphism $(x_1,x_2) \mapsto (x_1,x_1x_2)$.
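Tracking only supports, the substitution acts on exponents by $(a,b) \mapsto (a+b,\,b)$, since $X_1^a (X_1X_2)^b = X_1^{a+b} X_2^b$. A short Python check of the claim for $X_1 + X_2$ (representing a polynomial by its set of exponent pairs is our own bookkeeping, not notation from the post):

```python
# Represent a polynomial in (X1, X2) by the set of exponent pairs in its
# support.  Under the substitution (X1, X2) -> (X1, X1*X2), the monomial
# X1^a * X2^b becomes X1^(a+b) * X2^b, i.e. (a, b) maps to (a + b, b).
F = {(1, 0), (0, 1)}                         # support of X1 + X2
F_sub = {(a + b, b) for (a, b) in F}         # support after substitution
assert F_sub == {(1, 0), (1, 1)}             # X1 + X1*X2
# Every exponent in F_sub is >= (1, 0) componentwise and (1, 0) occurs,
# so X1 + X1*X2 = X1 * (1 + X2) is normal.
assert all(a >= 1 for (a, b) in F_sub) and (1, 0) in F_sub
```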

So, by the previous corollary with $F_1 = X_1$ and $F_2 = X_1 + X_1X_2$, the zeroset of $x_1+x_2$ inside the right half-plane $\{x_1>0\}$ has finitely many connected components.

Finally, inside the line $\{x_1=0\}$, the zeroset of $x_1+x_2$ is just $\{0\}$, which is connected, while the zeroset of $x_1+x_2$ inside the left half-plane $\{x_1<0\}$ has finitely many connected components by an argument symmetric to the one for the right half-plane.

Example 3 suggests the following:

**Strategy**

In the general situation we try, as in Example 3, to find a substitution given by some convergent power series, such that the given series become normal after substitution, and such that the function defined by the substitution is a homeomorphism outside a proper coordinate subspace. Inside this proper coordinate subspace, the problem of counting connected components is reduced to fewer indeterminates, which suggests we proceed by induction on the number of indeterminates.

Strategies of this kind are known as **normalization algorithms**. The one we shall use here is inspired by Bierstone and Milman’s algorithm and can be found here.

Algorithms of this kind fall under the general heading of **resolution of singularities**, as discussed by Bierstone and Milman in this paper.

For Exercise 14 I have been sketching an argument which is difficult to write down. It goes something like this:

Divide $B(r)$ into sub-blocks of the form:

1) $2^n$ different blocks in which each $x_i$ is either positive (on the whole block) or negative (on the whole block).

2) When $x_i=0$, we have $2^{n-1}$ different blocks where each $x_j$ with $j \ne i$ is either positive or negative.

3) When $x_i=x_j=0$ for fixed $i \ne j$, we have $2^{n-2}$ different blocks where each $x_k$ with $i \ne k \ne j$ is either positive or negative.

…

n+1) When $x_1=x_2=\dots=x_n=0$ we have one ($2^0$) block, the origin.

This gives finitely many disjoint (path-)connected sets which partition $B(r)$. Also, if any two blocks are “adjacent” (the distance between the sets is $0$), then their union is a (path-)connected set.

Now it seems clear that if a point in one of these “blocks” satisfies the given sign conditions ($\sign(x^{\alpha_i})=\sigma(i)$ for each $i$), then the whole block satisfies them.
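Indeed, $\sign$ is multiplicative, so $\sign(x^{\alpha_i}) = \prod_j \sign(x_j)^{\alpha_{i,j}}$ depends only on the sign vector of $x$, i.e. only on the block. A randomized Python check of this identity; the helper `sign_of_monomial` and the exponent $\alpha = (2,1,0)$ are ad hoc choices:

```python
import itertools
import random

def sign(t):
    return (t > 0) - (t < 0)

def sign_of_monomial(signs, alpha):
    """sign(x^alpha) from the coordinate signs alone: sign is multiplicative,
    so sign(x^alpha) = prod_j sign(x_j)^alpha_j (with 0^0 = 1)."""
    s = 1
    for sj, aj in zip(signs, alpha):
        s *= sj ** aj
    return s

random.seed(0)
alpha = (2, 1, 0)                       # an arbitrary exponent in N^3
for signs in itertools.product((-1, 0, 1), repeat=3):
    for _ in range(100):
        # random point whose coordinates have the prescribed signs
        x = [sj * random.uniform(0.1, 1.0) for sj in signs]
        val = x[0]**alpha[0] * x[1]**alpha[1] * x[2]**alpha[2]
        assert sign(val) == sign_of_monomial(signs, alpha)
```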

This should bound the number of connected components by the number of “blocks” defined above (although the bound should actually be quite a bit lower: $2^n$?).

The difficulty stems from the fact that some of the entries in the tuple $\alpha_i$ may be zero. Does this suggest an inductive approach on the number of zero-entries in $\alpha_i$?

A couple of revisions to my earlier comment.

Firstly, I don’t know why I was so set on using an induction; this argument should work without anything else.

Secondly, the definition of “adjacent” which I mentioned is no good. In fact, every one of the blocks has distance 0 from any other block. A better definition would be: $A$ and $B$ are “adjacent” if either $B \subseteq \overline{A}$ or $A \subseteq \overline{B}$.

Thirdly, I don’t know why I struggled to count the number of blocks defined. There are precisely $3^n$ different blocks (3 choices for each coordinate $x_i$). The bound should still only be $2^n$ though because the union of two adjacent blocks forms one connected component.

The answer, namely that the number of components of any of the sets in Exercise 14 is bounded by $3^n$, is correct. Without worrying about a better bound, how can you rewrite your counting into a coherent proof of this statement?

Consider $x=(x_1,\dots,x_n) \in B(r)$. The regions described are those in which the sign of each component $x_i$ is constant. There are three possibilities for each component (positive, negative or $0$) and $n$ different components, which means $3^n$ different blocks.

This can also be recovered from my original (overcomplicated) counting of blocks by noticing that we have ${n \choose 0}2^n + {n \choose 1}2^{n-1} + \dots + {n \choose n}2^0$, which is the binomial expansion of $(2+1)^n$.
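The identity behind this double count (choose which $k$ coordinates are zero, then a sign for each of the remaining $n-k$) can be checked mechanically for small $n$ in Python:

```python
from math import comb

# Counting the blocks two ways: 3 sign choices per coordinate, versus choosing
# which k coordinates vanish and a sign for each of the remaining n - k.
for n in range(1, 10):
    assert sum(comb(n, k) * 2**(n - k) for k in range(n + 1)) == 3**n
```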

But why does this answer Exercise 14…?

The proof should have gone like this:

Let $A := \set{x \in B(r):\ \sign(x^{\alpha_i}) = \sigma(i) \text{ for each } i}$.

Write $A=(A \cap B_1) \cup \dots \cup (A \cap B_{3^n})$, where $B_1, \dots, B_{3^n}$ is an enumeration of the blocks defined above (in some order). Since the sign of each $x^{\alpha_i}$ is constant on a block, each $A \cap B_i$ is either empty or all of $B_i$. The adjacent $B_i$’s swallow each other, and what you’re left with is a decomposition of $A$ into finitely many connected components.
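This argument can even be run mechanically: enumerate the $3^n$ sign vectors, keep those satisfying the sign conditions, and merge two surviving blocks whenever one lies in the closure of the other (i.e. its sign vector arises from the other’s by zeroing some coordinates, the “adjacency” from the earlier comment). A Python sketch; the function `components` is a hypothetical helper, not from the post:

```python
import itertools
from math import prod

def components(n, alphas, sigma):
    """Count the connected components of
    { x in B(r) : sign(x^alphas[i]) = sigma[i] for each i }.
    On the block with coordinate signs s, sign(x^alpha) = prod_j s_j^alpha_j,
    so the set is a union of whole blocks; blocks merge exactly when one
    lies in the closure of the other."""
    good = [s for s in itertools.product((-1, 0, 1), repeat=n)
            if all(prod(sj**aj for sj, aj in zip(s, a)) == sig
                   for a, sig in zip(alphas, sigma))]

    def touches(s, t):
        # B_t is contained in the closure of B_s iff each t_j is s_j or 0.
        def in_closure(u, v):
            return all(vj in (uj, 0) for uj, vj in zip(u, v))
        return in_closure(s, t) or in_closure(t, s)

    seen, count = set(), 0
    for s in good:                      # merge touching blocks by graph search
        if s in seen:
            continue
        count += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(v for v in good if touches(u, v))
    return count

# sign(x1*x2) = 1 in B(r): the two open quadrants x1, x2 > 0 and x1, x2 < 0.
assert components(2, [(1, 1)], [1]) == 2
# x^(1,1) = 0 in B(r): the union of the two axes, which is connected.
assert components(2, [(1, 1)], [0]) == 1
```

The merged classes really are the components: the closure of a block is a union of whole blocks, so two surviving blocks are separated in $A$ exactly when neither lies in the other’s closure.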