We explore the relationship between two ideas in network information theory: edge removal and strong converses. Edge removal properties state that if an edge of small capacity is removed from a network, the capacity region does not change by much. Strong converses state that, for rates outside the capacity region, the probability of error converges to 1 as the blocklength grows. We define various notions of edge removal and strong converse, depending on how edge capacity and residual error probability scale with blocklength, and we prove relations between them. In particular, each class of strong converse implies a specific class of edge removal. The opposite direction is more difficult to prove and, indeed, does not hold in general. We prove the opposite direction for deterministic networks and present some preliminary results for noisy networks.
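As a rough sketch of the two properties (the notation here is assumed, not taken from the abstract: \(\mathcal{C}(\lambda)\) denotes the capacity region when the extra edge has capacity \(\lambda\), and \(P_e^{(n)}(\mathbf{R})\) the error probability of the best blocklength-\(n\) code at rate tuple \(\mathbf{R}\)):

```latex
% Hedged sketch under assumed standard notation.

% Edge removal: an edge of vanishing capacity barely changes the region.
\lim_{\lambda \to 0} \mathcal{C}(\lambda) = \mathcal{C}(0)

% Strong converse: outside the capacity region, errors are unavoidable.
\mathbf{R} \notin \mathcal{C} \;\Longrightarrow\; \lim_{n \to \infty} P_e^{(n)}(\mathbf{R}) = 1
```

The classes mentioned in the abstract refine these limits by how fast \(\lambda\) may shrink and how fast \(P_e^{(n)}\) must approach 1 as a function of the blocklength \(n\).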