IIUC, for both of your types of missing data, the fact that a feature is missing could be deduced, at least partially and with some error, from the remaining features. In Rubin's terminology, this means the missingness is not MCAR (missing completely at random) but rather MAR (missing at random), since it depends on observed features rather than on the missing values themselves.
If you are reluctant to discard the observations with missing features, you might want to consider imputation, i.e. replacing each missing feature with a value chosen as a function of the non-missing features, where this function is learned by some ML algorithm. There are various libraries available that can help you with that (see the sketch below), and this would be an appropriate procedure for the non-structurally missing part of your data.
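As a minimal sketch (the column names and values here are made up for illustration), scikit-learn's `IterativeImputer` models each feature with missing values as a function of the remaining features:

```python
import numpy as np
import pandas as pd
# IterativeImputer is still experimental, so it must be enabled explicitly
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# hypothetical data with non-structurally missing values
df = pd.DataFrame({
    "age": [25, 40, np.nan, 33],
    "income": [30000, np.nan, 52000, 41000],
    "hours_worked": [38, 45, 40, np.nan],
})

# each feature with missing values is regressed on the other features,
# iterating until the imputed values stabilize
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    random_state=0,
)
df_imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(df_imputed)
```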
But for the 20% structurally missing values, imputation doesn't seem appropriate: filling in missing farming-related features for someone who is, say, a teacher doesn't really make sense.
I don't know much about your goals beyond the fact that you ultimately want to do regression, but maybe the following still helps.
If you plan a data-driven approach for your regression with more complex models like random forests, gradient boosting machines, or deep neural networks, it can be sufficient to fill in some constant sentinel value for all missing instances of a feature; the model will hopefully figure out by itself that this value carries no information. A sketch is given below.
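For instance, with a tree-based model (again with made-up feature names and values), it might look like this:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# hypothetical features, with farming columns structurally missing
# for non-farmers
X = pd.DataFrame({
    "age": [25, 40, 51, 33],
    "farm_size_ha": [np.nan, 12.0, np.nan, 8.5],
    "livestock_count": [np.nan, 30.0, np.nan, 15.0],
})
y = np.array([1.2, 3.4, 2.1, 2.9])

# fill every missing entry with one constant sentinel value; a tree-based
# model can split on the sentinel and ignore it if it is uninformative
X_filled = X.fillna(-999)

model = GradientBoostingRegressor(random_state=0).fit(X_filled, y)
```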
But if you are planning a model-based approach with standard models such as linear or generalized linear (mixed effects) models, you should set those values to zero. In the design matrix, the columns of e.g. the farming-related features will then contain zeros in all rows belonging to non-farmers, so those features contribute nothing for non-farmers. Structurally missing values thus become equivalent to removing the effect of these features, as the sketch below illustrates.
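Here is one way this could look with statsmodels (the data are again hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical design matrix: the farming feature is structurally
# missing (NaN) for non-farmers
X = pd.DataFrame({
    "is_farmer": [0, 1, 0, 1],
    "farm_size_ha": [np.nan, 12.0, np.nan, 8.5],
})
y = np.array([1.2, 3.4, 2.1, 2.9])

# zero out the structurally missing entries: rows of non-farmers then
# contribute nothing to the coefficient of the farming-related column
X_zeroed = sm.add_constant(X.fillna(0.0))
fit = sm.OLS(y, X_zeroed).fit()
print(fit.params)
```

Note that with this encoding the coefficient of `farm_size_ha` is estimated only from the farmers' rows, which is exactly the intended behavior for structurally missing values.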