While reinforcement learning has recently achieved unprecedented success, this success often comes at the cost of high sample complexity. Reward-free, unsupervised skill learning promises an efficient alternative by pre-training skills in the environment without access to task supervision. In practice, however, current pre-training methods remain sample-inefficient and are often ineffective in evolving environments. One reason for this is that current skill discovery methods learn all skills simultaneously, which creates a circular dependency in training: the learning of one skill is intricately tied to the simultaneous learning of the other skills. In this work, we propose a new framework for skill discovery in which skills are learned one after another in an incremental fashion, with the previously learned skills kept fixed. This breaks the inter-dependency between skills, allowing each skill to be learned efficiently and to adapt to changing environments. We demonstrate experimentally on several MuJoCo environments that learning incrementally improves the discovery of skills that are diverse (high inter-skill variance) and self-consistent (low intra-skill variance), which in turn improves downstream reward-based task learning. In environments with evolving dynamics, incremental skills significantly outperform current state-of-the-art skill discovery methods in both skill quality and the ability to solve downstream tasks. Videos of the learned skills and code will be made public at: https://sites.google.com/view/discovery-of-incremental-skill
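To make the incremental training scheme concrete, below is a minimal, self-contained sketch on a toy 2-D task. It is an illustrative assumption, not the paper's algorithm: a random-search update stands in for the RL inner loop, and the `rollout`, `objective`, and `learn_skill` helpers are hypothetical simplifications of the diversity/consistency objective described in the abstract. What it does preserve is the key structural idea: only one skill is trained at a time, and the states covered by previously learned skills are frozen before the next skill begins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's method): a "skill" is a fixed
# 2-D displacement vector, and its "rollout" is the set of noisy
# endpoints it reaches in a bounded workspace.

def rollout(skill, n=32, noise=0.05):
    """Endpoints reached when executing a skill; noise models stochastic dynamics."""
    return skill + noise * rng.standard_normal((n, 2))

def objective(states, frozen_states):
    """Simplified incremental objective: stay far from states covered by
    previously frozen skills (inter-skill diversity) while keeping this
    skill's own endpoints tightly clustered (intra-skill consistency)."""
    diversity = 0.0
    if len(frozen_states):
        # distance from each endpoint to its nearest frozen-skill endpoint
        d = np.linalg.norm(states[:, None, :] - frozen_states[None, :, :], axis=-1)
        diversity = d.min(axis=1).mean()
    consistency = -states.std(axis=0).sum()
    return diversity + consistency

def learn_skill(frozen_states, iters=200, step=0.2):
    """Random-search stand-in for the RL inner loop: improve one skill
    while every previously learned skill stays fixed."""
    skill = rng.standard_normal(2)
    best = objective(rollout(skill), frozen_states)
    for _ in range(iters):
        candidate = np.clip(skill + step * rng.standard_normal(2), -1, 1)
        score = objective(rollout(candidate), frozen_states)
        if score > best:
            skill, best = candidate, score
    return skill

def learn_skills_incrementally(n_skills=4):
    skills, frozen_states = [], np.empty((0, 2))
    for k in range(n_skills):
        skill = learn_skill(frozen_states)  # only skill k is trained here
        frozen_states = np.vstack([frozen_states, rollout(skill)])  # then frozen
        skills.append(skill)
        print(f"skill {k}: displacement {np.round(skill, 2)}")
    return skills

if __name__ == "__main__":
    learn_skills_incrementally()
```

Because each new skill only maximizes distance to an already-fixed set of states, no skill's objective depends on another skill that is still moving, which is exactly the circular dependency the incremental scheme is meant to break.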