Tips & tricks

Generics

Resources shared between two or more tasks implement the Mutex trait in all contexts, even in those where a critical section is not required to access the data. This lets you easily write generic code that operates on resources and can be called from different tasks. Here's one such example:


//! examples/generics.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;
use rtfm::Mutex;

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    static mut SHARED: u32 = 0;

    #[init]
    fn init(_: init::Context) {
        rtfm::pend(Interrupt::UART0);
        rtfm::pend(Interrupt::UART1);
    }

    #[interrupt(resources = [SHARED])]
    fn UART0(c: UART0::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART0(STATE = {})", *STATE).unwrap();

        advance(STATE, c.resources.SHARED);

        rtfm::pend(Interrupt::UART1);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[interrupt(priority = 2, resources = [SHARED])]
    fn UART1(mut c: UART1::Context) {
        static mut STATE: u32 = 0;

        hprintln!("UART1(STATE = {})", *STATE).unwrap();

        // just to show that `SHARED` can be accessed directly and ..
        *c.resources.SHARED += 0;
        // .. also through a (no-op) `lock`
        c.resources.SHARED.lock(|shared| *shared += 0);

        advance(STATE, c.resources.SHARED);
    }
};

fn advance(state: &mut u32, mut shared: impl Mutex<T = u32>) {
    *state += 1;

    let (old, new) = shared.lock(|shared| {
        let old = *shared;
        *shared += *state;
        (old, *shared)
    });

    hprintln!("SHARED: {} -> {}", old, new).unwrap();
}

$ cargo run --example generics
UART1(STATE = 0)
SHARED: 0 -> 1
UART0(STATE = 0)
SHARED: 1 -> 2
UART1(STATE = 1)
SHARED: 2 -> 4

This also lets you change the static priorities of tasks without having to rewrite code. If you consistently use locks to access the data behind shared resources then your code will continue to compile when you change the priority of tasks.
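
For instance, raising UART0 to the same priority as UART1 is a hypothetical tweak (not a separate example file) under which UART0 no longer needs a critical section to access SHARED, yet neither its body nor advance has to change: both rely only on the Mutex interface, and the lock inside advance simply degenerates into direct access.

    #[interrupt(priority = 2, resources = [SHARED])]
    fn UART0(c: UART0::Context) {
        static mut STATE: u32 = 0;

        // same body as before: `c.resources.SHARED` still implements `Mutex`,
        // so this call compiles unchanged -- its inner `lock` is now a no-op
        advance(STATE, c.resources.SHARED);

        // ..
    }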

Conditional compilation

You can use conditional compilation (#[cfg]) on resources (static [mut] items) and tasks (fn items). The effect of using #[cfg] attributes is that the resource / task will not be available through the corresponding Context struct if the condition doesn't hold.

The example below logs a message whenever the foo task is spawned, but only if the program has been compiled using the dev profile.


//! examples/cfg.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

#[cfg(debug_assertions)]
use cortex_m_semihosting::hprintln;

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    #[cfg(debug_assertions)] // <- `true` when using the `dev` profile
    static mut COUNT: u32 = 0;

    #[init]
    fn init(_: init::Context) {
        // ..
    }

    #[task(priority = 3, resources = [COUNT], spawn = [log])]
    fn foo(c: foo::Context) {
        #[cfg(debug_assertions)]
        {
            *c.resources.COUNT += 1;

            c.spawn.log(*c.resources.COUNT).ok();
        }

        // this wouldn't compile in `release` mode
        // *c.resources.COUNT += 1;

        // ..
    }

    #[cfg(debug_assertions)]
    #[task]
    fn log(_: log::Context, n: u32) {
        hprintln!(
            "foo has been called {} time{}",
            n,
            if n == 1 { "" } else { "s" }
        )
        .ok();
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};

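To see the effect of the #[cfg] attributes you can build the example with each profile (the commands below are plain Cargo invocations; their output is not shown here): under the dev profile debug_assertions holds, so COUNT and the log task are compiled in, whereas under the release profile they are compiled out.

$ cargo build --example cfg
$ cargo build --example cfg --release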

Running tasks from RAM

The main goal of moving the specification of RTFM applications to attributes in RTFM v0.4.0 was to allow inter-operation with other attributes. For example, the link_section attribute can be applied to tasks to place them in RAM; this can improve performance in some cases.

IMPORTANT: In general, the link_section, export_name and no_mangle attributes are very powerful but also easy to misuse. Incorrectly using any of these attributes can cause undefined behavior; you should always prefer to use safe, higher level attributes around them like cortex-m-rt's interrupt and exception attributes.

In the particular case of RAM functions there's no safe abstraction for it in cortex-m-rt v0.6.5 but there's an RFC for adding a ramfunc attribute in a future release.

The example below shows how to place the higher priority task, bar, in RAM. The .data.bar section name is what makes this work: cortex-m-rt's default linker script places .data.* input sections in the .data output section, which lives in RAM and is initialized at startup, so the function ends up at a 0x2000_xxxx address in the cargo-nm output further below.


//! examples/ramfunc.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};

#[rtfm::app(device = lm3s6965)]
const APP: () = {
    #[init(spawn = [bar])]
    fn init(c: init::Context) {
        c.spawn.bar().unwrap();
    }

    #[inline(never)]
    #[task]
    fn foo(_: foo::Context) {
        hprintln!("foo").unwrap();

        debug::exit(debug::EXIT_SUCCESS);
    }

    // run this task from RAM
    #[inline(never)]
    #[link_section = ".data.bar"]
    #[task(priority = 2, spawn = [foo])]
    fn bar(c: bar::Context) {
        c.spawn.foo().unwrap();
    }

    extern "C" {
        fn UART0();

        // run the task dispatcher from RAM
        #[link_section = ".data.UART1"]
        fn UART1();
    }
};


Running this program produces the expected output.

$ cargo run --example ramfunc
foo

One can look at the output of cargo-nm to confirm that bar ended up in RAM (0x2000_0000), whereas foo ended up in Flash (0x0000_0000).

$ cargo nm --example ramfunc --release | grep ' foo::'
20000100 B foo::FREE_QUEUE::ujkptet2nfdw5t20
200000dc B foo::INPUTS::thvubs85b91dg365
000002c6 T foo::sidaht420cg1mcm8
$ cargo nm --example ramfunc --release | grep ' bar::'
20000100 B bar::FREE_QUEUE::lk14244m263eivix
200000dc B bar::INPUTS::mi89534s44r1mnj1
20000000 T bar::ns9009yhw2dc2y25

binds

You can give hardware tasks more task-like names using the binds argument: you name the function as you wish and specify the name of the interrupt / exception in the binds argument. Types like Spawn will be placed in a module named after the function, not the interrupt / exception. Example below:


//! examples/binds.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use lm3s6965::Interrupt;

// `examples/interrupt.rs` rewritten to use `binds`
#[rtfm::app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        rtfm::pend(Interrupt::UART0);

        hprintln!("init").unwrap();
    }

    #[idle]
    fn idle(_: idle::Context) -> ! {
        hprintln!("idle").unwrap();

        rtfm::pend(Interrupt::UART0);

        debug::exit(debug::EXIT_SUCCESS);

        loop {}
    }

    #[interrupt(binds = UART0)]
    fn foo(_: foo::Context) {
        static mut TIMES: u32 = 0;

        *TIMES += 1;

        hprintln!(
            "foo called {} time{}",
            *TIMES,
            if *TIMES > 1 { "s" } else { "" }
        )
        .unwrap();
    }
};

$ cargo run --example binds
init
foo called 1 time
idle
foo called 2 times

Indirection for faster message passing

Message passing always involves copying the payload from the sender into a static variable and then from the static variable into the receiver. Thus sending a large buffer, like a [u8; 128], as a message involves two expensive memcpys. To minimize the message passing overhead one can use indirection: instead of sending the buffer by value, one can send an owning pointer into the buffer.
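
To make the contrast concrete, here is a signatures-only sketch; the task names are hypothetical and it relies on the Box and P declarations from the full example below, so it is not meant to compile on its own.

    // by value: the sender copies all 128 bytes into a static variable and
    // the receiver copies them out again
    #[task]
    fn by_value(_: by_value::Context, _buffer: [u8; 128]) {
        // ..
    }

    // by indirection: only the pointer-sized Box<P> handle is copied; the
    // 128-byte buffer itself stays put in the pool's memory
    #[task]
    fn boxed(_: boxed::Context, _buffer: Box<P>) {
        // ..
    }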

One can use a global allocator to achieve indirection (alloc::Box, alloc::Rc, etc.), which requires using the nightly channel as of Rust v1.34.0, or one can use a statically allocated memory pool like heapless::Pool.

Here's an example where heapless::Pool is used to "box" buffers of 128 bytes.


//! examples/pool.rs

#![deny(unsafe_code)]
#![deny(warnings)]
#![no_main]
#![no_std]

extern crate panic_semihosting;

use cortex_m_semihosting::{debug, hprintln};
use heapless::{
    pool,
    pool::singleton::{Box, Pool},
};
use lm3s6965::Interrupt;
use rtfm::app;

// Declare a pool of 128-byte memory blocks
pool!(P: [u8; 128]);

#[app(device = lm3s6965)]
const APP: () = {
    #[init]
    fn init(_: init::Context) {
        static mut MEMORY: [u8; 512] = [0; 512];

        // Increase the capacity of the memory pool by ~4 blocks
        P::grow(MEMORY);

        rtfm::pend(Interrupt::I2C0);
    }

    #[interrupt(priority = 2, spawn = [foo, bar])]
    fn I2C0(c: I2C0::Context) {
        // claim a memory block, leave it uninitialized and ..
        let x = P::alloc().unwrap().freeze();

        // .. send it to the `foo` task
        c.spawn.foo(x).ok().unwrap();

        // send another block to the task `bar`
        c.spawn.bar(P::alloc().unwrap().freeze()).ok().unwrap();
    }

    #[task]
    fn foo(_: foo::Context, x: Box<P>) {
        hprintln!("foo({:?})", x.as_ptr()).unwrap();

        // explicitly return the block to the pool
        drop(x);

        debug::exit(debug::EXIT_SUCCESS);
    }

    #[task(priority = 2)]
    fn bar(_: bar::Context, x: Box<P>) {
        hprintln!("bar({:?})", x.as_ptr()).unwrap();

        // this is done automatically so we can omit the call to `drop`
        // drop(x);
    }

    extern "C" {
        fn UART0();
        fn UART1();
    }
};

$ cargo run --example pool
bar(0x2000008c)
foo(0x20000110)

Inspecting the expanded code

#[rtfm::app] is a procedural macro that produces support code. If for some reason you need to inspect the code generated by this macro you have two options:

You can inspect the file rtfm-expansion.rs inside the target directory. This file contains the expansion of the #[rtfm::app] item (not your whole program!) of the last built (via cargo build or cargo check) RTFM application. The expanded code is not pretty printed by default so you'll want to run rustfmt over it before you read it.

$ cargo build --example foo

$ rustfmt target/rtfm-expansion.rs

$ tail -n30 target/rtfm-expansion.rs
#[doc = r" Implementation details"]
const APP: () = {
    use lm3s6965 as _;
    #[no_mangle]
    unsafe fn main() -> ! {
        rtfm::export::interrupt::disable();
        let mut core = rtfm::export::Peripherals::steal();
        let late = init(
            init::Locals::new(),
            init::Context::new(rtfm::Peripherals {
                CBP: core.CBP,
                CPUID: core.CPUID,
                DCB: core.DCB,
                DWT: core.DWT,
                FPB: core.FPB,
                FPU: core.FPU,
                ITM: core.ITM,
                MPU: core.MPU,
                SCB: &mut core.SCB,
                SYST: core.SYST,
                TPIU: core.TPIU,
            }),
        );
        core.SCB.scr.modify(|r| r | 1 << 1);
        rtfm::export::interrupt::enable();
        loop {
            rtfm::export::wfi()
        }
    }
};

Or, you can use the cargo-expand subcommand. This subcommand will expand all the macros, including the #[rtfm::app] attribute, and modules in your crate and print the output to the console.

$ # produces the same output as before
$ cargo expand --example smallest | tail -n30