Sure. Let $s$ be the feature representing the square footage of the house, and let $v$ be the vector of all other features. Then I claim that every model that consists of two linear submodels (as you specify) and has no jump at $s=100$ must have the form
$$f(s,v) = \begin{cases}
\alpha \cdot v + bs + c &\text{if } s<100\\
\alpha \cdot v + b's + c' &\text{if } s\ge 100,
\end{cases}$$
where $\alpha,b,c,b'$ are arbitrary and $c' = c + 100(b-b')$.
In particular, the coefficient of every feature other than square footage must be the same in both linear submodels.
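To see why, give the second submodel its own coefficients $\alpha'$ and require the two branches to meet at the boundary. Continuity at $s=100$ means that for every $v$,
$$\alpha \cdot v + 100b + c = \alpha' \cdot v + 100b' + c'.$$
Since this must hold for all $v$, the term $(\alpha - \alpha') \cdot v$ must be the same constant for every $v$, which forces $\alpha' = \alpha$; the remaining equation then gives $c' = c + 100(b - b')$.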
Consequently, we can solve for the $\alpha,b,c,b'$ that minimize the total loss over the dataset, where the loss for a data point $(s,v,y)$ is $(f(s,v)-y)^2$ and the total loss is the sum of the per-point losses. After substituting $c' = c + 100(b-b')$, the total loss is a quadratic function of $\alpha,b,c,b'$, so this is an ordinary least-squares problem: Newton's method converges in a single step, and any other standard optimizer will also work. If you need an initialization for $\alpha,b,c,b'$, start by using linear regression to fit a single linear model to the entire dataset, then copy its coefficients into both submodels so they start out identical, and let the optimizer improve from there.
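Here is a minimal sketch in Python/NumPy of one way to do that solve (the function name and data layout are illustrative assumptions, not anything from your setup). It uses the reparameterization $f(s,v) = \alpha \cdot v + bs + c + d\,\max(0,\, s-100)$ with $d = b'-b$, which builds the no-jump constraint into the model and turns it into a single linear model with one extra hinge feature, so `np.linalg.lstsq` fits it exactly:
```python
import numpy as np

def fit_piecewise(S, V, y, knot=100.0):
    """Fit the continuous two-piece model by ordinary least squares.

    S: (n,) square footages; V: (n, k) other features; y: (n,) targets.
    Uses the hinge reparameterization f = alpha.v + b*s + c + d*max(0, s-knot),
    which satisfies the no-jump constraint by construction, so an exact
    least-squares solve replaces iterative optimization.
    """
    S, V, y = np.asarray(S, float), np.asarray(V, float), np.asarray(y, float)
    # Design matrix: [other features | s | intercept | hinge]
    X = np.column_stack([V, S, np.ones_like(S), np.maximum(0.0, S - knot)])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    alpha, b, c, d = theta[:-3], theta[-3], theta[-2], theta[-1]
    # Recover the second submodel's coefficients from the constraint.
    return alpha, b, c, b + d, c - knot * d  # alpha, b, c, b', c'
```
With this formulation no initialization is needed, though the iterative approach above would reach the same minimizer.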
Hopefully this also illustrates that if you insist on no jump at $s=100$, two linear submodels buy you very little expressiveness over a single linear model: as the hinge reparameterization makes explicit, the only extra parameter is the single slope change $d = b' - b$ at the knot.