I am working on a dataset of several sites, where each site has several transects. However, the sampling effort is not the same for every site (a sampling strength of 8 or 6 transects per site), which poses a problem for my further statistical analyses. From some research, it seems that adopting a resampling approach across the dataset makes the most sense to correct my data.
The idea would be to randomly draw 6 transects per site, so that the sampling effort is the same everywhere, and to repeat this resampling a large number of times using a for loop.
I can't figure out how to design this loop properly; any ideas?
Here is a brief overview of my data, with sites from 1.1 to 13.2:

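To illustrate what I am aiming for, here is a rough sketch in Python/pandas (the column names `site`, `transect`, and `abundance` are hypothetical stand-ins for my real data, and the two example sites are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed seed so the resampling is reproducible

# Hypothetical example data: one site with 8 transects, one with 6
df = pd.DataFrame({
    "site": ["1.1"] * 8 + ["1.2"] * 6,
    "transect": list(range(1, 9)) + list(range(1, 7)),
    "abundance": rng.integers(0, 50, size=14),
})

n_transects = 6     # equalized sampling effort per site
n_iterations = 100  # number of resampling repetitions

resamples = []
for i in range(n_iterations):
    # Draw 6 transects per site, without replacement
    sampled = df.groupby("site").sample(n=n_transects, random_state=rng)
    sampled = sampled.assign(iteration=i)
    resamples.append(sampled)

# One long table: every row tagged with the iteration it belongs to,
# and every (iteration, site) pair holding exactly 6 transects
result = pd.concat(resamples, ignore_index=True)
```

From `result`, any downstream statistic can then be computed per iteration and averaged (or summarized with quantiles) across the repetitions.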