The way weights are incorporated into a tree is through the split criterion. These are sample weights, meaning each observation $i$ has an associated weight $w_i$.
E.g., in the case of classification with the Gini impurity, we calculate it for node $t$:
$$GINI_t = 1-\sum_{c=1}^C p_{t,c}^2
$$
The $p_{t,c}$ are calculated according to these weights:
$$ p_{t,c} = \frac{\sum_{y_i=c,i \in t} w_i}{\sum_{i \in t} w_i}
$$
where $i\in t$ means that observation $i$ is in node $t$. Notice that when the weights $w_i$ are all equal to 1, this is just the regular proportion of each class in the node, $p_{t,c} = \frac{n_{t,c}}{n_t}$.
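To make the formula concrete, here is a minimal sketch of the weighted Gini computation (illustrative only; the function name and example data are mine, and this is not sklearn's internal implementation):

```python
import numpy as np

def weighted_gini(y, w):
    """Gini impurity of a single node, with per-sample weights w (illustrative sketch)."""
    w = np.asarray(w, dtype=float)
    total = w.sum()
    # Weighted class proportions p_{t,c}: sum of weights in class c / total weight in the node
    p = np.array([w[y == c].sum() for c in np.unique(y)]) / total
    return 1.0 - np.sum(p ** 2)

y = np.array([0, 0, 0, 1])
print(weighted_gini(y, np.ones(4)))    # all weights 1: 1 - (0.75^2 + 0.25^2) = 0.375
print(weighted_gini(y, [1, 1, 1, 3]))  # upweight class 1: p = (0.5, 0.5) -> 0.5
```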
Now, there could be a case where we have an imbalanced dataset, or, for whatever other reason, we would like to give different weights to different classes (e.g., we care more about one class than another). This is done by setting class weights, which simply means giving every observation of a given class the same sample weight.
For example, a popular way to balance an imbalanced dataset is to give each class a weight equal to the inverse of its proportion. Suppose I have 100 observations, 80 of class 0 and 20 of class 1. If I give a weight of $\frac{100}{80}$ to class 0 and $\frac{100}{20}$ to class 1, the two classes end up with equal total weight:
$$ 80\cdot \frac{100}{80} = 20 \cdot \frac{100}{20}
$$
These weights sum to $200$ rather than $100$, so in order that the total weight stays normalized to $n=100$, I also have to divide by the number of classes, in this case 2, giving $\frac{100}{2\cdot 80}$ for class 0 and $\frac{100}{2\cdot 20}$ for class 1.
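Numerically (a small sketch using the made-up 80/20 counts from the example above):

```python
import numpy as np

y = np.array([0] * 80 + [1] * 20)   # 100 observations, imbalanced 80/20
n, n_classes = len(y), 2

# weight per observation of class c: n / (n_classes * count_c)
weights = {c: n / (n_classes * np.sum(y == c)) for c in (0, 1)}
print(weights[0], weights[1])                 # 0.625 2.5

# both classes get the same total weight, and the weights still sum to n
print(80 * weights[0], 20 * weights[1])       # 50.0 50.0
print(80 * weights[0] + 20 * weights[1])      # 100.0
```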
This is what sklearn does. When you set `class_weight` in a `DecisionTreeClassifier`, the fit method converts the class weights into per-sample weights exactly as described above, and the Gini impurity is then calculated according to those sample weights.
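A quick way to check this equivalence yourself (a sketch; the random data is made up, and `compute_sample_weight` is the sklearn utility that expands class weights into sample weights):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.array([0] * 80 + [1] * 20)

# Tree fit with class weights ...
clf_cw = DecisionTreeClassifier(random_state=0, class_weight="balanced").fit(X, y)

# ... versus a tree fit with the equivalent per-sample weights
sw = compute_sample_weight(class_weight="balanced", y=y)
clf_sw = DecisionTreeClassifier(random_state=0).fit(X, y, sample_weight=sw)

# The two trees should come out with identical structure
print(np.array_equal(clf_cw.tree_.feature, clf_sw.tree_.feature))   # True
print(np.allclose(clf_cw.tree_.threshold, clf_sw.tree_.threshold))  # True
```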
Update: I made a video about this on my channel, in case you're interested.