Answer from cs61c-ax (Amanda Alfonso 15906918) for Question 2 A "subnormal" (aka a denormal number) is a number with the same exponent field as zero (all zero bits) but a nonzero significand. Subnormals squeeze extra precision out of floating-point operations near zero, but they can also cause problems for programmers who may not expect them in their programs (and who must otherwise handle subnormals gracefully!). Subnormals also allow for "gradual underflow" - that is, they let a number lose significance gradually as it shrinks, rather than snapping straight to 0. In fact, the smallest single-precision denormalized number (2^-149) is 2^23 times smaller than the smallest single-precision normalized number (2^-126).