As whuber notes, the authors of the canopy clustering algorithm suggest that T1 and T2 can be set with cross-validation. However, these parameters could be tuned in the same way as any other hyper-parameter. One of the most common techniques is grid search, where a range is specified for each parameter, as well as a step size for how the parameter is changed at each iteration. For example, suppose we specified T1 to have a value range of 25 to 100 with a step size of 25. This would mean the possible values of T1 to try would be (25, 50, 75, 100). Likewise, we could set T2 to have possible values between 1 and 4, with a step size of 1, such that the possible values are (1, 2, 3, 4). This would mean there are 16 possible parameter sets to try. As with any other classification or clustering algorithm, you would assess its efficacy by calculating the F1-score, accuracy/error, or another appropriate performance metric to determine the best of the 16 parameter sets. In addition to grid search, other hyper-parameter optimization algorithms include Nelder-Mead, genetic algorithms, simulated annealing, and particle swarm optimization, among many others. These algorithms will help you determine appropriate values for T1 and T2 in an automated fashion.
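To make the grid-search idea concrete, here is a minimal sketch in Python. It assumes you have some canopy clustering implementation available (the `canopy_cluster` stub below is a placeholder, not a real library function) and, since clustering is unsupervised, it scores each (T1, T2) pair with the silhouette coefficient; you could swap in F1/accuracy if you do have ground-truth labels.

```python
from itertools import product

import numpy as np
from sklearn.metrics import silhouette_score


def canopy_cluster(X, t1, t2):
    """Placeholder: plug in your actual canopy clustering implementation.
    It should return one integer cluster label per row of X."""
    raise NotImplementedError


def grid_search_canopy(X, t1_values=(25, 50, 75, 100), t2_values=(1, 2, 3, 4)):
    """Try every (T1, T2) pair and keep the one with the best silhouette score."""
    best_params, best_score = None, -np.inf
    for t1, t2 in product(t1_values, t2_values):  # 4 x 4 = 16 combinations
        if t2 >= t1:                              # canopy clustering requires T2 < T1
            continue
        labels = canopy_cluster(X, t1, t2)
        if len(set(labels)) < 2:                  # silhouette needs at least 2 clusters
            continue
        score = silhouette_score(X, labels)
        if score > best_score:
            best_params, best_score = (t1, t2), score
    return best_params, best_score
```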
You noted above that you have a 100K-dimensional data set. Are you referring to the number of rows or the number of columns in your data? If you are referring to the number of columns, I would suggest performing some combination of feature selection based on the variance of individual features and feature extraction via principal component analysis (PCA) or Kernel-PCA. Even if many of your features are useful (i.e., they provide information gain towards discriminating between clusters/classes/output variable values), having too many features can mean your clustering algorithm is unable to compute meaningful distances between instances (the curse of dimensionality).
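For instance, a simple scikit-learn pipeline along those lines might look like the sketch below; the variance cutoff of 0.01 and the 50 retained components are placeholder values you would tune for your own data (or replace `PCA` with `KernelPCA` if you suspect non-linear structure).

```python
from sklearn.decomposition import PCA
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Drop near-constant columns first, then project onto the top principal components.
reduce_dims = make_pipeline(
    VarianceThreshold(threshold=0.01),  # feature selection by variance
    StandardScaler(),                   # PCA is sensitive to feature scale
    PCA(n_components=50),               # feature extraction
)

# X_reduced = reduce_dims.fit_transform(X)  # X: your (n_samples, n_features) matrix
```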