Calculate execution time in Rust
2019-01-17
Background
I'm working on porting some code from Java to Rust so that it can run in the browser on WebAssembly. Right now, I'm curious to know how performant this code is, running on my local machine, compared to the original Java implementation.
In this short post, I'll be exploring some ways to figure out rough execution time for a function. This is definitely not going to be a very scientific approach, but my goal is just to make sure I'm not doing worse than I did in the Java space.
What we'll be doing
The plan is to have a Rust program that's an abstraction of something I've done in the Java space, and to run it through a few different ways of measuring execution time. To keep things simple, I'll stick to a basic CLI program and avoid external libraries where possible.
All this program does is run some pointless loops and waste some CPU time. Nothing fancy, but it's enough for now to demonstrate what we want to achieve.
The base program
fn take_along_time() {
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
}

fn main() {
    println!("Hello, let's start!");
    take_along_time();
    println!("Well, that's that!");
}
Simplest, most naive way of calculating start and end time
To start with, let's just measure the start and end time of calling our function. In short, we'll record the time just before we call our function and then the time when it's completed. The difference should give us a very rough indication of how long it took to run the calculation.
If we use SystemTime from the standard library, we can get the time right before we call our function, then the time when it completes and finally, the difference between them. I have removed the start and stop println messages.
When I run this, I get what looks like an average of around 2050ms, more or less.
use std::time::SystemTime;

fn take_along_time() {
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
}

fn main() {
    let start = SystemTime::now();
    take_along_time();
    let end = SystemTime::now();
    let elapsed = end.duration_since(start);
    println!("The time is probably {}ms", elapsed.unwrap_or_default().as_millis());
}
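As an aside, the unwrap_or_default() above quietly hides a real failure mode: SystemTime follows the wall clock, which can be adjusted backwards while the program runs, in which case duration_since returns an Err. A minimal sketch of handling that case explicitly instead of defaulting to zero:

```rust
use std::time::SystemTime;

fn main() {
    let start = SystemTime::now();
    // ... do some work here ...
    // SystemTime is not monotonic: if the system clock was adjusted
    // backwards while we ran, duration_since returns an Err carrying
    // how far backwards it went.
    match SystemTime::now().duration_since(start) {
        Ok(elapsed) => println!("The time is probably {}ms", elapsed.as_millis()),
        Err(e) => println!("Clock went backwards by {:?}", e.duration()),
    }
}
```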
What about Instant
Rust has a rich standard library, and one of the things we find there is Instant. Instant is described as:
Instants are always guaranteed to be no less than any previously measured instant when created, and are often useful for tasks such as measuring benchmarks or timing how long an operation takes.
That sounds interesting, let's see how we can use that instead of SystemTime:
use std::time::Instant;

fn take_along_time() {
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
}

fn main() {
    let start = Instant::now();
    take_along_time();
    let elapsed_time = start.elapsed().as_millis();
    println!("The time is probably {}ms", elapsed_time);
}
Would you look at that - we get the same result, but with fewer variables and, in my humble opinion, cleaner and easier-to-read code.
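Since we'll be wrapping calls like this in the same start/elapsed pattern over and over, it's tempting to pull it into a small helper. Here's a sketch of one way to do that - the name `time_it` is my own invention, not anything from the standard library:

```rust
use std::time::{Duration, Instant};

// Hypothetical helper: runs the closure once and returns its result
// along with how long the call took.
fn time_it<T>(f: impl FnOnce() -> T) -> (T, Duration) {
    let start = Instant::now();
    let result = f();
    (result, start.elapsed())
}

fn main() {
    let (_result, elapsed) = time_it(|| {
        let mut total: u64 = 0;
        for _ in 0..10_000_000 {
            total += 1;
        }
        total
    });
    println!("The time is probably {}ms", elapsed.as_millis());
}
```

Returning the closure's result alongside the Duration means the helper works for functions that actually produce a value, not just side-effecting ones.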
Wait, there is more
That's all great and stuff, but there's actually more to our result than what we are seeing right now. With a small change, we can get a far more granular view of what the time to execute was:
use std::time::Instant;

fn take_along_time() {
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
}

fn main() {
    let start = Instant::now();
    take_along_time();
    let elapsed_time = start.elapsed().as_micros();
    println!("The time is probably {} microseconds", elapsed_time);
}
If you run the above program, you'll see that it is reporting a far more interesting number, with more accuracy than we had before. Instead of milliseconds, we now get microseconds, which is going to be useful for our next exercise.
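It's worth noting that elapsed() returns a Duration, which carries nanosecond precision internally - we only pick the unit when we print. A small sketch of the different accessors, including Duration's Debug formatting, which chooses a readable unit for you:

```rust
use std::time::Instant;

fn main() {
    let start = Instant::now();
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
    let elapsed = start.elapsed();
    // The same Duration, viewed at three levels of granularity.
    println!("millis: {}", elapsed.as_millis());
    println!("micros: {}", elapsed.as_micros());
    println!("nanos:  {}", elapsed.as_nanos());
    // Debug formatting picks a human-friendly unit on its own.
    println!("debug:  {:?}", elapsed);
}
```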
Our results are misleading
Running the program over and over, you see quite a bit of variance in the results. There are so many factors involved in timing on a computer, and our approach is anything but scientific. With that being said, there are still some small things we can do to get a better idea of the average execution time for our function, with average being the operative word here.
What if we executed our function a number of times, and then averaged the results? That will give us a better idea of the execution speed of our function and help smooth out the differences we see between runs.
use std::time::Instant;

fn take_along_time() {
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
}

fn main() {
    let mut total_execution_time: u128 = 0;
    for _c in 0..5 {
        let start = Instant::now();
        take_along_time();
        total_execution_time += start.elapsed().as_micros();
    }
    let average_execution_time = total_execution_time / 5;
    println!("The average execution time is {} microseconds", average_execution_time);
}
If we run this, it takes a while, and then spits out an average!
I executed it five different times on my machine:
- 2035040
- 2026735
- 2018388
- 2024780
- 2028695
Even with the averaging, there's quite a bit of variance, but we're measuring in microseconds, so I expect factors like how busy the computer is to have this kind of effect.
Changing the loop to repeat 15 times yields a result of 2018322 microseconds. Interesting!
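One caveat with a plain average is that it can hide outliers - a single slow run (say, the OS scheduling something else mid-benchmark) gets smeared across all the others. A sketch of one way to see that: keep each sample and report the minimum and maximum alongside the average. The structure here is my own; it's not a standard benchmarking recipe.

```rust
use std::time::Instant;

fn take_along_time() {
    let mut _total = 0;
    for _i in 0..10000 {
        for _j in 0..5000 {
            _total += 1;
        }
    }
}

fn main() {
    const RUNS: u32 = 5;
    let mut samples: Vec<u128> = Vec::new();
    for _ in 0..RUNS {
        let start = Instant::now();
        take_along_time();
        samples.push(start.elapsed().as_micros());
    }
    // Report the spread, not just the average, so outliers stand out.
    let min = samples.iter().min().copied().unwrap_or(0);
    let max = samples.iter().max().copied().unwrap_or(0);
    let avg = samples.iter().sum::<u128>() / samples.len() as u128;
    println!("min {} / avg {} / max {} microseconds", min, avg, max);
}
```

If min and max are close together, the average is probably trustworthy; a big gap is a hint that something else was competing for the CPU during some of the runs.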
Conclusion
This is by no means a scientific approach to benchmarking how long a function takes to execute, but it sure is handy to run and make sure that your changes don't negatively impact the application!