It is known that the stochastic gradient "descent" method, applied to an optimization problem with a strongly convex objective function, achieves a sublinear convergence rate when the step size is diminishing. The diminishing step size is required to suppress the variance of the noisy gradient observations. On the other hand, an appropriately chosen constant step size allows the method to converge geometrically in expectation to a fixed neighborhood of the optimal point, where the size of this neighborhood is determined by the variance of the gradient estimates. But can the algorithm be modified in such a way that geometric convergence to the optimum itself is achieved? This work demonstrates that the answer to this question is negative, even when the gradient function is Lipschitz continuous.
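The dichotomy described above can be illustrated numerically. The following sketch (not part of the original work; the objective, noise model, and step-size schedules are illustrative assumptions) runs SGD on the strongly convex quadratic f(x) = x²/2, whose optimum is x* = 0, with additive Gaussian gradient noise:

```python
import random

def sgd(step_size_fn, iters=2000, noise=0.5, seed=0):
    """Minimize f(x) = x^2 / 2 (strongly convex, gradient f'(x) = x)
    using noisy gradient estimates g_k = x_k + xi_k, xi_k ~ N(0, noise^2)."""
    rng = random.Random(seed)
    x = 5.0  # arbitrary starting point
    for k in range(iters):
        g = x + rng.gauss(0.0, noise)  # unbiased noisy gradient
        x -= step_size_fn(k) * g
    return x

# Diminishing step size O(1/k): converges to the optimum x* = 0,
# but only at a sublinear rate.
x_diminishing = sgd(lambda k: 1.0 / (k + 1))

# Constant step size: the error contracts geometrically, but only
# down to a noise-dominated neighborhood of x* whose radius is
# governed by the step size and the gradient variance.
x_constant = sgd(lambda k: 0.1)

print(x_diminishing, x_constant)
```

With the diminishing schedule the final iterate is essentially an average of the noise and lands very close to x* = 0; with the constant step size the iterate reaches a neighborhood of x* quickly but keeps fluctuating inside it, matching the behavior stated above.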