I've occasionally been annoyed by how various sites and applications handle ratings based on small sample sizes versus large ones. User reviews, for instance.
The simple approach to calculating a rating is to divide the number of positive reviews by the total. This works fine when you have a large number of reviews, but it can give wildly varying results when you have very few.
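To illustrate the problem, here's a quick sketch of that naive calculation (the function name is mine, not from any particular site's code):

```python
def naive_rating(positive, total):
    """Naive rating: the fraction of reviews that are positive."""
    return positive / total if total else 0.0

# A single lone positive review scores a perfect 1.0,
# outranking an item with 98 positives out of 100 reviews:
print(naive_rating(1, 1))     # 1.0
print(naive_rating(98, 100))  # 0.98
```

The item with one review almost certainly doesn't deserve to rank above the one with a hundred, yet that's exactly what this calculation produces.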
Not being a mathematician nor a statistician myself, I'll freely admit to having fallen into the trap of extrapolating from a limited sample size in the past. But being aware of the pitfalls of working with small sample sizes is one of the reasons statistics is so important to learn.
I recently ran across an article discussing this issue1. It illustrates how the problem can be solved using the Wilson score interval.2 This lets you say, with a high degree of confidence, that the actual rating is at least a given value, even with a small sample size.
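As a rough sketch of the idea, the lower bound of the Wilson score interval can be computed like this. This is my own illustration of the standard formula, not code from the article, and the parameter `z=1.96` is the usual choice for roughly 95% confidence:

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval for a proportion.

    positive: number of positive reviews
    total:    total number of reviews
    z:        normal quantile; 1.96 gives ~95% confidence
    """
    if total == 0:
        return 0.0
    phat = positive / total  # observed fraction of positives
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# A single positive review now yields a modest score,
# well below 98 positives out of 100:
print(wilson_lower_bound(1, 1))     # ~0.21
print(wilson_lower_bound(98, 100))  # ~0.93
```

Sorting items by this lower bound instead of the raw fraction keeps items with only a handful of reviews from leapfrogging well-established ones.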
I'd recommend reading the article (link below) before implementing any sorting or rating system, especially where you don't have a large sample size.