I am not quite so negative on $\log(y + 1)$ or more generally $\log(y + c)$ for some constant $c$ as some colleagues are. For $y$ read also $x$ according to taste or circumstance.
But three points seem general:
(1) $c$ is arbitrary beyond needing to be large enough to ensure that $y + c > 0$ (but not ipso facto utterly pointless).
(2) Do you therefore have a good reason for your choice of $c$ (which might involve some sensitivity analysis; a sketch of one such check follows this list)?
(3) You need to show that a transformation works in the sense of achieving or getting closer to at least one specific and plausible goal.
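To make (2) and (3) concrete, here is a minimal sketch, in Python, of a sensitivity analysis over $c$. The data and the criterion (skewness of the transformed variable) are hypothetical stand-ins for whatever your own data and goal are.

```python
# A minimal sketch, assuming one plausible goal is reducing skewness, of a
# sensitivity analysis over the constant c in log(y + c). The data y here
# are simulated; substitute your own data and your own criterion.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
y = np.floor(rng.lognormal(mean=1.0, sigma=1.0, size=500))  # skewed, with zeros

for c in (0.01, 0.1, 0.5, 1, 5, 10):
    print(c, skew(np.log(y + c)))
# If the criterion jumps around as c varies, the choice of c matters and
# deserves justification; if it barely moves, the choice is less critical.
```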
Concretely, $\log({\rm count} + 1)$ sometimes works for visualization of counts when zeros are present but the data otherwise seem to deserve something like a log scale. But that doesn't imply modelling in those terms, particularly as Poisson regression (or the same rose under another name) in effect uses a logarithmic link function, which is compatible with some observed zeros so long as conditional means are positive.
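As an illustration of that point, here is a minimal sketch, with simulated data, of fitting counts that include zeros directly by Poisson regression; statsmodels is one implementation among several, and the data and coefficients are made up.

```python
# A minimal sketch of modelling counts directly with a Poisson regression
# (log link) rather than regressing log(count + 1). Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=200)
y = rng.poisson(np.exp(-1 + 1.5 * x))   # counts, including zeros

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)
# Zeros in y are no obstacle: the log link keeps the conditional mean
# exp(b0 + b1 * x) positive, while individual observations can still be 0.
```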
Heights of people may be a flippant example, but it is a highly unconvincing one, in the sense that heights are often roughly symmetrically distributed; zeros are never observed; and adding 1 cm is on the face of it just as arbitrary as adding 1 inch (or its equivalent) or 1 mm would be.
I've sometimes found cube roots useful, as they accommodate zeros (and indeed negative values) as easily as positive values. The cube root is a weaker transformation than the logarithm, but it has other virtues, being about right for gamma-like distributions and sometimes being appropriate generally on dimensional grounds. Hydrologists and meteorologists often use cube roots for precipitation (especially daily precipitation), where zeros may be observed, and often observed frequently.
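A small sketch of the cube root on data including zeros and negatives, assuming numpy; note that np.cbrt handles negative arguments, whereas raising a negative float to the power 1/3 does not.

```python
# A minimal sketch of a cube root transformation that copes with zeros and
# negative values. np.cbrt handles negatives; x ** (1/3) returns nan for
# negative floats, so the signed version is spelled out as a check.
import numpy as np

x = np.array([-8.0, -1.0, 0.0, 0.5, 1.0, 8.0, 27.0])
print(np.cbrt(x))                        # [-2. -1.  0.  0.7937...  1.  2.  3.]
print(np.sign(x) * np.abs(x) ** (1/3))   # same result, written out by hand
```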
Some would want to raise a flag for asinh, the inverse hyperbolic sine, which also copes with zeros and negative values.
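And a matching sketch of asinh, again assuming numpy, where it is spelled np.arcsinh.

```python
# A minimal sketch of the inverse hyperbolic sine: defined for zeros and
# negative values, antisymmetric, and close to log(2x) for large positive x.
import numpy as np

x = np.array([-100.0, -1.0, 0.0, 1.0, 100.0])
print(np.arcsinh(x))                    # antisymmetric; 0 maps to 0
print(np.log(x + np.sqrt(x**2 + 1)))    # the same thing, from the definition
```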
In general, transforming to get closer to a linear relationship with even scatter is often a much bigger deal than being obsessed with the shape of marginal distributions. Also, as already hinted, transforming to get a better visualization can be ad hoc (optimistic translation: producing something fit for purpose) -- but if it makes things easier to see or think about, that is the whole point.
Many languages or environments now include a function log1p(), provided on other grounds (accurate computation of $\log(1 + x)$ when $x$ is very small). I doubt that many people (e.g. natural or social scientists) are aware of that name. If you use it, it would be best to explain it.
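For what it is worth, a small sketch of why log1p exists, assuming numpy; math.log1p is the standard-library counterpart in Python.

```python
# A minimal sketch of the numerical point behind log1p: it computes
# log(1 + x) accurately when x is tiny, where adding 1 first loses digits.
import numpy as np

x = 1e-15
print(np.log(1 + x))   # about 1.11e-15: digits already lost in forming 1 + x
print(np.log1p(x))     # about 1e-15, accurate
```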
Note. I will mention a misunderstanding I have seen many times, the idea that $\log(x + {\rm smidgen})$, where ${\rm smidgen}$ is very small, is a neat solution if $x$ is ever $0$. For any tiny ${\rm smidgen}$ the result is close to $\log x$ when $x \gg 0$, but it can inadvertently create massive outliers when $x$ is very small or zero. A quick numerical example uses log base 10 for convenience, but the point applies with any base. Suppose counts run $0, 1, 2, \dots$ and we take ${\rm smidgen} = 0.000001 = 10^{-6}$, or $1$/million. Then $0$ gets mapped to $-6$, $1$ to almost $0$, $2$ to about $0.3010$, and so forth. As said, you've created massive outliers.
This is one reason why the automatic reflex to "just add 1 to values that might be zero before taking the log" is difficult to justify.
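A quick check in code of the smidgen example above, assuming numpy.

```python
# The "smidgen" example in numbers: with counts 0, 1, 2, ... and
# smidgen = 1e-6, the zero is flung far away from the other values.
import numpy as np

counts = np.array([0, 1, 2, 3, 10])
print(np.log10(counts + 1e-6))
# [-6.0, about 0, about 0.301, about 0.477, about 1.0]: the 0 becomes -6
```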