
Better (AI?) way to evaluate A/B tests

Current A/B testing makes it impossible to properly see which combination of variations wins, especially when tests are combined with spintax and the number of variations grows large.

It would be great to have rank-ordered results for the different variations so we can quickly identify:

1) which combination wins

2) insights from the top-performing copy combinations within a specific template (a rough sketch of what such a ranking could look like is below)
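
A minimal sketch of what such rank-ordered output could look like, in Python. Everything here is hypothetical (the ComboStats structure, the smoothed_rate prior values, and the sample numbers are illustrative, not the product's actual API or data); the idea is just to rank every combination by a smoothed conversion rate so low-volume combos don't float to the top by chance:

```python
# Hypothetical sketch: rank A/B variation combinations by a
# smoothed conversion rate so small-sample combos don't dominate.
from dataclasses import dataclass

@dataclass
class ComboStats:
    combo: tuple[str, ...]   # e.g. ("headline A", "CTA B")
    impressions: int
    conversions: int

def smoothed_rate(s: ComboStats, prior_conv: float = 1.0, prior_imp: float = 20.0) -> float:
    # Beta-prior-style smoothing: pulls low-volume combos toward
    # the prior mean (1/20 = 5%) instead of their raw rate.
    return (s.conversions + prior_conv) / (s.impressions + prior_imp)

def rank_combos(results: list[ComboStats]) -> list[tuple[ComboStats, float]]:
    # Score every combination, then sort best-first.
    scored = [(s, smoothed_rate(s)) for s in results]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Illustrative (made-up) numbers, not real campaign data:
results = [
    ComboStats(("headline A", "CTA A"), impressions=500, conversions=40),
    ComboStats(("headline A", "CTA B"), impressions=480, conversions=55),
    ComboStats(("headline B", "CTA A"), impressions=20, conversions=5),  # tiny sample
]

for stats, score in rank_combos(results):
    print(f"{' + '.join(stats.combo)}: {score:.1%}")
```

The smoothing step is the design choice that matters here: with many spintax-generated combinations, some will have very few impressions, and raw conversion rates would rank a lucky 5/20 combo above a solid 55/480 one.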

Author: David
Date: 12 days ago