One of the key benefits of virtualization is the ability to emulate many hosts on a single physical machine. This approach is often used to support at-scale testing of large distributed systems. To better understand the precise ways in which virtual machines differ from their physical counterparts, we have begun to quantify some of the timing artifacts that appear to be common to two modern approaches to virtualization. Here we present several systematic experiments that highlight four timing artifacts and begin to trace their origins within virtual machine implementations. These micro-benchmarks serve as a means to better understand the mapping between virtualized and real-world testing infrastructure. Our goal is to develop a reusable micro-benchmark framework that can be customized to quantify the artifacts associated with specific cluster configurations and workloads. Such quantification can then be used to better anticipate behavioral characteristics at scale in real settings.
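As an illustration only (the paper's actual benchmarks are not reproduced here), one kind of timing micro-benchmark measures how far a short requested sleep overshoots its target duration; on virtualized hosts, delayed or coalesced timer interrupts often shift this distribution relative to bare metal. The function name and parameters below are hypothetical, not taken from the paper:

```python
import time

def sleep_overshoot(requested_s=0.001, trials=200):
    """Measure how far time.sleep() overshoots the requested duration.

    Hypothetical sketch: on virtual machines, timer interrupts may be
    delayed or coalesced by the hypervisor, so the overshoot distribution
    can differ noticeably from that of a physical host.
    """
    overshoots = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_s)
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - requested_s)
    overshoots.sort()
    return {
        "min": overshoots[0],
        "median": overshoots[trials // 2],
        "max": overshoots[-1],
    }

stats = sleep_overshoot()
```

Running such a probe on both a physical host and its virtualized counterpart, and comparing the resulting distributions, is one way to quantify the kind of timing artifact the experiments target.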