Instead of the nice matrix-algebra elaboration Carlos presented and a test with R's linearHypothesis, such a test can also be done in a more old-fashioned way.
First the F value must be calculated by hand. This can be done after running two models: model A, the "full" model, and model B, the "restricted" model.
model A: $y = b_0 + b_1x_1 + b_2x_2 + b_3x_3+b_4x_4$
model B: $y = b_0 + b_1x_1 + b_1x_2 + b_1x_3+b_4x_4$
In model B the regression coefficients of the first three predictors are equal, as the hypothesis assumes. The model can also be written as:
model B: $y = b_0 + b_1(x_1 + x_2 + x_3) + b_4x_4$
with $sum = x_1 + x_2 + x_3$ as a new independent variable. So model B can be estimated as:
model B: $y = b_0 + b_1\,sum + b_4x_4$
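In R, for instance, the two models can be fitted directly; this is a minimal sketch that assumes a data frame dat containing y, x1, x2, x3 and x4 (the data frame name is mine, not part of the question):

# model A: the full model with separate coefficients for x1, x2, x3
modelA <- lm(y ~ x1 + x2 + x3 + x4, data = dat)
# model B: the restricted model, one common coefficient for the sum x1 + x2 + x3
modelB <- lm(y ~ I(x1 + x2 + x3) + x4, data = dat)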
The F value can now be calculated by filling in the R squares of models A and B in the formula below:

$$F = \frac{(R^2_A - R^2_B)/df_1}{(1 - R^2_A)/df_2}$$

where
$df_1$ = difference in the number of regression coefficients between models A and B = 5 - 3 = 2
$df_2$ = number of cases minus the number of regression coefficients in model A = N - 5
Next, the F value can be looked up in a table with tail probabilities of F distributions. This is really old school, I admit, but it should be possible with any software package able to run linear regression.
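Continuing the R sketch above (modelA, modelB and dat are the same assumed objects), the hand calculation and the table lookup can be reproduced as follows; pf() simply replaces the printed F table:

R2A  <- summary(modelA)$r.squared
R2B  <- summary(modelB)$r.squared
N    <- nrow(dat)
df1  <- 2        # 5 - 3 regression coefficients
df2  <- N - 5    # cases minus number of regression coefficients in model A
Fval <- ((R2A - R2B) / df1) / ((1 - R2A) / df2)
pf(Fval, df1, df2, lower.tail = FALSE)   # tail probability of the F distribution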
Of course, in R we would apply Carlos' method. One could also fit the two models above and then run anova(modelA, modelB).
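For instance, with the fits from the sketch above (the hypothesis strings passed to linearHypothesis are only my illustration of the equality $b_1 = b_2 = b_3$):

anova(modelA, modelB)                              # F test of the restriction
library(car)                                       # Carlos' method, for comparison
linearHypothesis(modelA, c("x1 = x2", "x2 = x3"))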
***** SPSS regression method.
Procedure "regression" in SPSS only allows users to drop several predictors at once from a linear model equation, but not to equate regression coefficients. However, we can write model A in a different way to obtain model A_new:
Model A_new: $y = b_0 + b_1(x_1 + x_2 + x_3) + b_2x_2 + b_3x_3 + b_4x_4$
Model A_new has the same R square as model A! Dropping the independents $x_2$ and $x_3$ from model A_new renders model B. The R square change when going from model A_new to model B can be tested as follows:
* Compute the sum and test the R-square change of dropping x2 and x3.
COMPUTE sum = x1 + x2 + x3.
REGRESSION
  /DEPENDENT y
  /METHOD=ENTER sum x4
  /METHOD=TEST (x2 x3).
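The same reparameterization can be checked in R (again a sketch using the assumed data frame dat and the fits from above): model A_new reproduces model A's R square, and dropping x2 and x3 gives model B.

# model A_new: the same fit as model A, only reparameterized
modelAnew <- lm(y ~ I(x1 + x2 + x3) + x2 + x3 + x4, data = dat)
summary(modelAnew)$r.squared   # equal to summary(modelA)$r.squared
# dropping x2 and x3 yields model B; test the R-square change
anova(modelB, modelAnew)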