
3 responses to “A/B Tests in Marketing – Sample Size and Significance using R”

  1. Steve

    I love this type of analysis, and although it is quite over my head, I believe it is relevant to the kind of clients I have, where A/B testing can be risky due to low traffic volumes on their websites.

    I’m interested to know – and I apologize if I’m oversimplifying here – whether there is a reverse path that could be valid for cases where the client’s website really doesn’t have sufficient traffic. By reverse path, I mean: if I start an A/B test with a fixed sample size of X, how much validity can I expect from it? Or, alternatively, how large a difference in the results would I need to see with that sample size in order to judge it significant?

    Here’s a more concrete example of what I am trying to ask: suppose I have a brand new website with almost no traffic, but I want to identify any major issues by bringing in 50 random visitors (for example, via a PPC campaign), with 25 of those visitors seeing version A and 25 seeing version B. How much benefit could I expect from that? Or perhaps a better question is: how large a measured difference between the A group and the B group would I need to detect for my sample of 25 per group to be useful/valid/significant?

    Hopefully I haven’t made a fool of myself by taking the article in a reverse direction, but part of my question is about how much money it would cost, in terms of paid visitors, to get valid A/B tests running on new websites.

  2. Significance Testing and Sample Size | Daniel Nee

    […] Note: This post was heavily influenced by Marketing Distillery's A/B Tests in Marketing. […]
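
Steve’s question about working backwards from a fixed sample size can be explored with a short reverse power calculation in R. The snippet below is only a sketch, not taken from the article, and it assumes a hypothetical 10% baseline conversion rate; base R’s power.prop.test is asked to solve for the version-B conversion rate that would be detectable with 25 visitors per variant at 80% power and a two-sided 5% significance level.

    # Hypothetical scenario: 25 visitors per variant and an assumed 10%
    # baseline conversion rate (p1). Leaving p2 unspecified makes
    # power.prop.test solve for the version-B rate the test could detect
    # with 80% power at a two-sided 5% significance level.
    power.prop.test(n = 25, p1 = 0.10, power = 0.80, sig.level = 0.05)

The p2 this returns sits far above the 10% baseline, which is the practical answer to the question: with only 25 visitors per group, only very large differences between the two versions will register as statistically significant.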
