Explainability is an active research area, and several recent tools can explain tree ensemble models with a handful of human-understandable rules. Here are a few options you can try:
You can use TE2Rules (Tree Ensembles to Rules) to extract human-understandable rules that explain a scikit-learn tree ensemble (like GradientBoostingClassifier). It provides levers to control interpretability, fidelity, and run-time budget when extracting explanations. Because TE2Rules considers the joint interactions of multiple trees in the ensemble, the extracted rules are guaranteed to closely approximate the tree ensemble.
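Here is a minimal sketch of how TE2Rules can be used. The `ModelExplainer` API and the `num_stages`/`min_precision` knobs follow the TE2Rules documentation, but double-check them against the version you install; the toy dataset and parameter values are just for illustration:

```python
# pip install te2rules
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from te2rules.explainer import ModelExplainer

# Train a tree ensemble on a toy binary classification dataset
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"f{i}" for i in range(5)]
model = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
model.fit(X, y)

# Extract rules that approximate the model's own positive-class predictions
explainer = ModelExplainer(model=model, feature_names=feature_names)
rules = explainer.explain(
    X=X.tolist(),
    y=model.predict(X).tolist(),
    num_stages=2,        # more stages: higher fidelity, longer run time
    min_precision=0.95,  # keep only high-precision rules
)
for rule in rules:
    print(rule)
```

Note that the explainer is fit to the model's predictions rather than the ground-truth labels, since the goal is to mimic the ensemble.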
Alternatives include SkopeRules (part of scikit-learn-contrib) and RuleFit. SkopeRules extracts rules from individual trees in the ensemble and keeps only the rules with high precision/recall. This is often fast, but the resulting rules may not represent the ensemble faithfully; see the sketch below.
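For comparison, here is a sketch of using SkopeRules as a surrogate for a trained ensemble. The constructor parameters shown (`feature_names`, `precision_min`, `recall_min`, `n_estimators`) come from the skope-rules README; fitting on the model's predicted labels is one common surrogate-model pattern rather than the library's prescribed workflow, and skope-rules is not actively maintained, so it may require an older scikit-learn version:

```python
# pip install skope-rules
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from skrules import SkopeRules

# Toy ensemble to be explained
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"f{i}" for i in range(5)]
model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

# Fit SkopeRules on the ensemble's predictions, so the learned rules
# approximate the model rather than the raw labels
surrogate = SkopeRules(
    feature_names=feature_names,
    precision_min=0.5,   # minimum precision for a rule to be kept
    recall_min=0.01,     # minimum recall for a rule to be kept
    n_estimators=30,
    random_state=0,
)
surrogate.fit(X, model.predict(X))

# Each entry in rules_ is (rule_string, (precision, recall, n_times_extracted))
for rule, perf in surrogate.rules_:
    print(rule, perf)
```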
For developers who work in R, the inTrees package is a good option.
References:
TE2Rules: code at https://github.com/groshanlal/TE2Rules, documentation at https://te2rules.readthedocs.io/en/latest/
SkopeRules: code at https://github.com/scikit-learn-contrib/skope-rules
inTrees: https://cran.r-project.org/web/packages/inTrees/index.html
Disclosure: I'm one of the core developers of TE2Rules.