Typically, NACK-based congestion control is dismissed as not viable due to the common notion that “open-loop” congestion control is simply “difficult.” Emerging real-time streaming applications, however, often rely on rate-based flow control and would benefit greatly from scalable NACK-based congestion control. This talk sheds new light on the performance of NACK-based congestion control and measures the amount of “difficulty” inherently present in such protocols. We specifically focus on increase-decrease (I-D) congestion control methods for real-time, rate-based streaming. First, we introduce and study several new performance measures that can be used to analyze the class of general I-D congestion control methods. These measures include monotonicity of convergence to fairness and packet-loss scalability. Second, under the assumption that the only feedback from the network is packet loss, we show that AIMD is the only TCP-friendly method with monotonic convergence to fairness. Furthermore, we find that AIMD possesses the best packet-loss scalability among all TCP-friendly binomial schemes and show how poorly all of the existing methods scale as the number of flows increases. Third, we show that if flows can obtain knowledge of an additional network parameter (i.e., the bottleneck bandwidth), the scalability of AIMD can be substantially improved. We conclude the talk by studying the performance of a new scheme, called Ideally-Scalable Congestion Control (ISCC), both in simulation and in a NACK-based MPEG-4 streaming application over a Cisco testbed.
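
As a point of reference, here is a minimal sketch of the standard binomial I-D formulation underlying the schemes discussed above (the symbols alpha, beta, k, and l are not defined in the abstract itself and are used only for illustration): after a feedback interval with no packet loss the sender increases its rate r, and upon a loss indication it decreases r, as

\[
r \leftarrow r + \frac{\alpha}{r^{k}} \quad \text{(no loss)}, \qquad
r \leftarrow r - \beta\, r^{l} \quad \text{(loss)}, \qquad \alpha, \beta > 0.
\]

AIMD is the special case k = 0, l = 1 (additive increase, multiplicative decrease), and TCP-friendly binomial schemes are those satisfying k + l = 1.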