The main point is that you cannot reframe your hypothesis based on the data you have already observed; results obtained that way will not generalize to another sample.
Theoretically, the "sign" of the trend for each hypothesis shouldn't matter. What we care about is the correlation between the tests: when tests are highly positively correlated, a Bonferroni correction is conservative. Effects with opposite signs in a sample can still arise from probability models in which the tests are highly positively correlated; in fact, almost any scenario can be dreamt up here.
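A minimal simulation sketch of this point, under an assumed global null with equicorrelated normal test statistics (the correlation $\rho = 0.9$, $k = 10$, and the hard-coded normal quantiles are illustrative choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_sims, alpha, rho = 10, 20000, 0.05, 0.9

# Equicorrelated multivariate normal test statistics under the global null
cov = np.full((k, k), rho)
np.fill_diagonal(cov, 1.0)
z = rng.multivariate_normal(np.zeros(k), cov, size=n_sims)

# Two-sided standard-normal critical values (precomputed quantiles):
z_alpha = 1.959964  # for alpha = 0.05
z_bonf = 2.807034   # for alpha / k = 0.005

# FWER = probability of at least one rejection across the family
fwer_uncorrected = np.mean((np.abs(z) > z_alpha).any(axis=1))
fwer_bonferroni = np.mean((np.abs(z) > z_bonf).any(axis=1))
print(fwer_uncorrected, fwer_bonferroni)
```

Even with this strong positive correlation, testing each hypothesis at $\alpha$ inflates the FWER above 0.05, while Bonferroni holds it well below 0.05, i.e. it is conservative.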
But, alas, you didn't apply a Bonferroni correction: you would need to compare each p-value to the $0.05/k$ level. Given your results, there is essentially nothing you could have done to control the familywise error rate (FWER) and still find a significant result. The FWER is a well-defined operating characteristic of a multiple-testing procedure. When you refer to "tak[ing] a family of tests", testing each hypothesis at the overall $\alpha$ level is already an anti-conservative approach: the actual false-positive rate is higher than stated, i.e. it is statistical cheating, or "p-hacking".
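To make the comparison concrete, here is a sketch with hypothetical p-values (not the question's actual results): each one is compared to $\alpha/k$ rather than to $\alpha$ itself.

```python
# Hypothetical p-values from k = 5 tests (illustrative values only)
p_values = [0.03, 0.008, 0.20, 0.15, 0.04]
alpha = 0.05
k = len(p_values)

# Bonferroni: compare each p-value to alpha / k, not to alpha
threshold = alpha / k  # 0.01

significant = [p < threshold for p in p_values]
print(significant)  # only the p = 0.008 test survives the correction
```

Note that the tests at $p = 0.03$ and $p = 0.04$, which look "significant" at the naive $\alpha = 0.05$ level, do not survive the corrected threshold.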
Based on this, you should report the results as-is and be done with it!
> taken as a family of tests there is a clear relationship between X1, X2, . . . , Xk and Y, indicating the variables are not independent. – Sal Mangiafico Feb 06 '23 at 17:48