Compile Time Feature Flags in Rust: Why, How, and When?

Posted on 2022-12-14

And what’s the tradeoff?

The ability to pick compile-time features in Rust can improve your code's performance, size, maintainability, safety, and portability.

Below are a few arguments for why you should proactively select features when consuming dependencies, and offer features of your own when publishing libraries.

Performance

Using feature flags in Rust can improve the performance of the resulting code. By only including the code that is needed for a specific application, you can avoid the overhead of unused or unnecessary code.

Although the compiler already optimizes away dead code, excluding unneeded code up front can still result in faster, more efficient programs (and makes the compiler's life easier).

Size

The overall size of the resulting binary is influenced by which dependencies you include and how you use them.

Feature selection can make the resulting binary smaller, which benefits applications that need to be distributed or deployed to resource-constrained environments.

Maintainability

I recently hit a breaking change in an upstream dependency, and I was lucky enough that the breaking code was behind a feature flag, for a feature I was not using.

While waiting for the upstream library to fix it, I simply disabled that feature in my project, and my build was green again. This is how feature flags improve maintainability: they let developers selectively include or exclude specific functionality.

Security

Statistically speaking, the more code you depend on, the higher the chance of a security issue. Depending only on the features you actually need lowers those odds; that is security-by-design thinking, and a crate that offers itself “in chunks” helps make it happen.

There are also ways to select different implementations of the same functionality based on how comfortable you are with the safety of an implementation. For example, you might prefer a Rust-native TLS implementation over a C-based one because Rust is a memory-safe language, and some crates, like reqwest, offer a selection of TLS backends.
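
A minimal sketch of what that selection looks like in Cargo.toml (the version number is illustrative; check reqwest's documentation for its current feature names):

[dependencies]
# Opt out of the default (native, C-based) TLS backend and use rustls instead
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls"] }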

Portability

Because Rust is a compiled language, an important aspect of feature flags is improving your code's portability.

You can selectively include or exclude specific functionality to make your code more portable across different platforms and environments.
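
Cargo also supports target-specific dependencies, which pair well with cfg-gated code: a dependency is pulled in only when compiling for a matching platform. A minimal sketch (the crate and version are illustrative):

[target.'cfg(windows)'.dependencies]
# Only compiled in when targeting Windows
winapi = "0.3"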

How Does C/C++ Compare?

C and C++ have historically been the archetypes of compiled portable code deployed to many platforms and CPU architectures.

C++ does not have a built-in feature directly equivalent to the ability to pick compile-time features in Rust. However, C++ does have a number of preprocessor directives (such as #ifdef) that can be used to selectively include or exclude code at compile time.

This can provide some of the same benefits as feature flags in Rust, but it is messy and hard to discover, both for a programmer building on an existing codebase and for a consumer trying to enable or disable features.

Feature Flags: The Building Blocks

To control a dependency's feature flags, you can use the default-features = false and features keys on its entry in your project's Cargo.toml file.

For example:

[dependencies]
my-crate = { version = "1.0", default-features = false, features = ["my-feature"] }
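
On the library side, the features themselves are declared in the crate's own Cargo.toml under a [features] section. A minimal sketch with placeholder names:

[features]
# Enabled unless a consumer opts out with default-features = false
default = ["my-feature"]
# A feature with no extra dependencies of its own
my-feature = []
# A feature can also enable other features
full = ["my-feature"]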

To enable a feature flag for a specific piece of code, you can use the #[cfg(feature = "my-feature")] attribute. For example:

#[cfg(feature = "my-feature")]
fn my_function() {
    // Code that is only included when the "my-feature" flag is enabled
}
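
The inverse also works: negating the condition with not(...) lets you provide a fallback when the feature is disabled. For example:

#[cfg(not(feature = "my-feature"))]
fn my_function() {
    // Fallback code that is only included when the "my-feature" flag is disabled
}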

To enable a feature flag for a specific module, you can use the #[cfg(feature = "my-feature")] attribute on the mod declaration. For example:

#[cfg(feature = "my-feature")]
mod my_module {
    // Code that is only included when the "my-feature" flag is enabled
}

To conditionally apply a derive to a struct or enum based on a feature flag, you can use the #[cfg_attr(feature = "my-feature", derive(...))] attribute. For example:

#[cfg_attr(feature = "my-feature", derive(Debug, PartialEq))]
struct MyStruct {
    // The struct and its fields are always compiled; the derives above apply only when "my-feature" is enabled
}
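
A common real-world use of this pattern is gating serde derives behind a feature (assuming serde is declared as an optional dependency; the feature name is a convention, not a requirement):

#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
struct Config {
    // The field is always compiled; serialization support only with the "serde" feature
    name: String,
}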

Here’s how to enable or disable support for a specific platform:

#[cfg(target_os = "linux")]
mod linux_specific_code {
    // Linux-specific code goes here...
}

And how to enable or disable a specific implementation of a trait:

#[cfg(feature = "special_case")]
impl MyTrait for MyType {
    // Implementation of trait for special case goes here...
}

How to enable or disable a specific test case:

#[cfg(feature = "expensive_tests")]
#[test]
fn test_expensive_computation() {
    // Test that performs expensive computation goes here...
}
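
Such gated tests only run when the feature is enabled at the command line, for example:

cargo test --features expensive_tests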

Here’s the code to enable or disable a specific benchmark:

#[cfg(feature = "long_benchmarks")]
#[bench]
fn bench_long_running_operation(b: &mut Bencher) {
    // Benchmark for a long-running operation goes here...
}

To enable code only when multiple feature flags are set, you can use the #[cfg(all(...))] attribute. For example, to include my_function() only when both the my_feature1 and my_feature2 flags are set:

#[cfg(all(feature = "my_feature1", feature = "my_feature2"))]
fn my_function() {
    // code for my_function
}

To enable code when at least one of several feature flags is set, you can use the #[cfg(any(...))] attribute. For example, to include my_function() when either the my_feature1 or my_feature2 flag is set:

#[cfg(any(feature = "my_feature1", feature = "my_feature2"))]
fn my_function() {
    // code for my_function
}
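
These predicates compose, so you can also mix in not(...) to require one feature while excluding another. For example:

#[cfg(all(feature = "my_feature1", not(feature = "my_feature2")))]
fn my_function() {
    // code included only when "my_feature1" is enabled and "my_feature2" is not
}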

Feature Flags Illustrated

Here, the same module name points to a different implementation file depending on the platform, and pub use then exposes a single function from whichever implementation was selected.

//! Signal monitor
#[cfg(unix)]
#[path = "unix.rs"]
mod imp;
#[cfg(windows)]
#[path = "windows.rs"]
mod imp;
#[cfg(not(any(windows, unix)))]
#[path = "other.rs"]
mod imp;
pub use self::imp::create_signal_monitor;

See https://github.com/shadowsocks/shadowsocks-rust/blob/master/src/monitor/mod.rs

When different components share the same interface, you can offer everything under the sun with no downside, because only the selected features get compiled in.

The tradeoff is that now you have a bigger test matrix, which grows combinatorially with every new alternative.

In this example, the library lets you pick any allocator you can think of because allocators have a well-defined interface and require no work on your part to swap:

//! Memory allocator
#[cfg(feature = "jemalloc")]
#[global_allocator]
static ALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc;
#[cfg(feature = "tcmalloc")]
#[global_allocator]
static ALLOC: tcmalloc::TCMalloc = tcmalloc::TCMalloc;
#[cfg(feature = "mimalloc")]
#[global_allocator]
static ALLOC: mimalloc::MiMalloc = mimalloc::MiMalloc;
#[cfg(feature = "snmalloc")]
#[global_allocator]
static ALLOC: snmalloc_rs::SnMalloc = snmalloc_rs::SnMalloc;
#[cfg(feature = "rpmalloc")]
#[global_allocator]
static ALLOC: rpmalloc::RpMalloc = rpmalloc::RpMalloc;

In this example, you see how to let your users “layer in” only the functionality they need, picking how deep they want to go:

//! Service launchers
pub mod genkey;
#[cfg(feature = "local")]
pub mod local;
#[cfg(feature = "manager")]
pub mod manager;
#[cfg(feature = "server")]
pub mod server;

In the example below, a block is used to “artificially” scope an entire piece of code under a feature:

#[cfg(feature = "local-tunnel")]
{
    app = app.arg(
        Arg::new("FORWARD_ADDR")
            .short('f')
            .long("forward-addr")
            .num_args(1)
            .action(ArgAction::Set)
            .requires("LOCAL_ADDR")
            .value_parser(vparser::parse_address)
            .required_if_eq("PROTOCOL", "tunnel")
            .help("Forwarding data directly to this address (for tunnel)"),
    );
}

In this example, an empty implementation is marked #[inline]: why pay the price of a function call when the body does nothing but return Ok(())?

#[cfg(all(not(windows), not(unix)))]
#[inline]
fn set_common_sockopt_after_connect_sys(_: &tokio::net::TcpStream, _: &ConnectOpts) -> io::Result<()> {
    Ok(())
}

Last but Not Least: What’s the Tradeoff?

If features are so powerful and do away with much of C/C++’s primitive approach to conditional compilation, why not use them everywhere, always? Here are a few things you should consider.

  • Using too many features is a real problem. In the extreme case, imagine you had a feature on every module and function: your consumers would face a very hard puzzle of figuring out how to compose your library from its discrete features. This is the danger of features. Be modest with the number of features you offer to keep cognitive load low, and make sure each feature is something people actually care about adding or removing.
  • Testing is another big deal with features. You never know which combination of features your users will select, and every combination selects a different set of code. Those pieces of code have to interoperate smoothly both in compilation (successfully compile) and in logic (not introduce bugs), so to be thorough you need to test every feature in combination with every other feature: the powerset of features!
  • You can automate that with xtaskops::powerset; see more here: https://github.com/jondot/xtaskops.