My team resolves online tasks assigned through a queue system, and the time taken to clear each task is recorded as handle_seconds. Approximately 18% of the tasks turn out to be defective. I want to check whether there is a relationship between the time taken to complete a task (handle_seconds) and the chance that its output is defective.
One option I considered is bucketing tasks and defects into time windows of handle_seconds (say, one-minute bins) and looking at the trend in the share of defects across windows (a rough sketch of this is below). However, handle_seconds ranges from 16 seconds to sometimes a few hours, so the range is very large.
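For concreteness, here is a minimal sketch of the bucketing idea, assuming a DataFrame with columns handle_seconds and is_defective (the synthetic data, column names, and the choice of log-spaced bins are just illustrative assumptions, not my actual data):

```python
import numpy as np
import pandas as pd

# Hypothetical example data: one row per task.
rng = np.random.default_rng(0)
n = 5000
handle_seconds = rng.lognormal(mean=5, sigma=1.2, size=n).clip(16, 4 * 3600)
is_defective = rng.binomial(1, 0.18, size=n)  # ~18% overall defect rate
df = pd.DataFrame({"handle_seconds": handle_seconds, "is_defective": is_defective})

# Log-spaced bin edges, since handle_seconds spans ~16 seconds to a few hours.
edges = np.logspace(np.log10(16), np.log10(4 * 3600), num=15)
df["bucket"] = pd.cut(df["handle_seconds"], bins=edges, include_lowest=True)

# Number of tasks and share of defects in each time window.
summary = (
    df.groupby("bucket", observed=True)
      .agg(n_tasks=("is_defective", "size"),
           defect_share=("is_defective", "mean"))
)
print(summary)
```

This gives a per-window defect share I can eyeball for a trend, but the choice of bin edges feels arbitrary given how skewed the times are.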
I would like to know whether there is a more accurate and established way of approaching this, such as a statistical test.