The gradient boosting machine has recently become one of the most popular learning machines among data scientists at all levels of expertise. Much of this popularity has been driven by the consistently strong results that Kaggle competitors have reported with it over several years of competition. Yet many users of gradient boosting machines remain hazy about how such machines are actually constructed and where their core ideas come from. Here we discuss some details of the shape and size of the trees in gradient boosting machines.
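As a concrete point of reference for the discussion, the sketch below (assuming scikit-learn's `GradientBoostingRegressor`; the specific dataset and parameter values are illustrative, not from the original text) shows the two common knobs that control tree shape and size: `max_depth`, which bounds how deep each tree may grow, and `max_leaf_nodes`, which caps the number of terminal leaves directly.

```python
# Illustrative sketch: controlling tree shape and size in a GBM
# using scikit-learn (dataset and parameter values are arbitrary).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# Depth-limited trees: each tree is grown to at most 3 levels,
# the common "shallow tree" style for boosting.
gbm_depth = GradientBoostingRegressor(
    max_depth=3, n_estimators=50, random_state=0
).fit(X, y)

# Leaf-limited trees: growth stops once 8 leaves exist,
# regardless of how lopsided the resulting tree is.
gbm_leaves = GradientBoostingRegressor(
    max_leaf_nodes=8, n_estimators=50, random_state=0
).fit(X, y)

# Each fitted base tree exposes its realized depth and leaf count.
first_tree = gbm_depth.estimators_[0, 0].tree_
print(first_tree.max_depth, first_tree.n_leaves)
```

The two constraints shape trees differently: a depth limit produces balanced trees with up to 2^depth leaves, while a leaf limit allows deep, narrow trees when the data favors them.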