Stochastic control of multi-class queueing systems studies the problem of optimally allocating shared resources to provide differentiated service to different types of random traffic. These problems have important applications in computer communication networks, manufacturing systems, and CPU scheduling. A classical approach is to characterize the achievable performance region of the desired performance vector and then apply static optimization techniques, such as linear programming, to compute an optimal control policy. This approach is limited in that a complete description of the achievable performance region is not always available, and the optimal control policy is hard to construct when the objective is nonlinear or involves time-average constraints. To overcome these difficulties, we propose a new stochastic optimization framework that combines the achievable-region approach for queueing systems with Lyapunov drift theory, which is well suited to optimizing dynamic systems under time-average constraints. Lyapunov drift theory was originally developed for stochastic control of time-slotted wireless networks and has recently been generalized to dynamic systems with a renewal structure. Our framework transforms multi-class queueing problems into network stability problems, for which we construct dynamic control policies with provably near-optimal performance. The resulting policies are dynamic priority policies with very simple updates, and they require only limited statistical knowledge of the system. In this talk we focus on optimizing per-class average queueing delay and average power consumption in a multi-class queueing system whose service rate is controlled by dynamic power allocation. Future directions will be discussed.
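To make the flavor of such a policy concrete, the sketch below simulates a drift-plus-penalty style controller for a two-class time-slotted queue with power-controlled service: each slot, the power level is chosen to maximize (queue backlog) x (service rate) minus V x (power), and the longer queue is served first. All numbers, the rate-power curve `service_rate`, the arrival rates, and the weight `V` are illustrative assumptions, not details from the talk.

```python
import random

V = 10.0                              # penalty weight: larger V favors low power over low delay
POWER_LEVELS = [0.0, 0.5, 1.0, 2.0]   # assumed discrete power options
ARRIVAL_RATES = [0.3, 0.2]            # assumed Bernoulli arrival probabilities per class

def service_rate(power):
    # Assumed concave rate-power curve; a stand-in for the controllable service rate.
    return 1.5 * power / (1.0 + power)

def choose_power(queues):
    # Drift-plus-penalty rule: maximize total_backlog * mu(p) - V * p over power levels.
    return max(POWER_LEVELS,
               key=lambda p: sum(queues) * service_rate(p) - V * p)

def simulate(slots=10000, seed=0):
    rng = random.Random(seed)
    queues = [0.0, 0.0]
    total_power = 0.0
    for _ in range(slots):
        p = choose_power(queues)
        total_power += p
        # Dynamic priority: serve the currently longest queue first.
        capacity = service_rate(p)
        for i in sorted(range(len(queues)), key=lambda i: -queues[i]):
            served = min(queues[i], capacity)
            queues[i] -= served
            capacity -= served
        # Random arrivals to each class.
        for i, lam in enumerate(ARRIVAL_RATES):
            if rng.random() < lam:
                queues[i] += 1.0
    return total_power / slots, queues

avg_power, final_queues = simulate()
```

The update needs only the current backlogs and the rate-power curve, which mirrors the abstract's point that the policies have very simple updates and limited statistical requirements; sweeping `V` traces out the power-delay tradeoff.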