DannyBee 4 days ago

This author seems to expect an RTOS to be the same as the Arduino environment, or at least susceptible to just hacking around and hoping it works. Most are not.

Given a lot of Arduino boards these days have mbed or FreeRTOS under the covers, with some way to expose it, that may have been a better route for the author's style.

Zephyr is easy to use (Clion also has good support for it), but like, you can't just choose not to install the toolchain and expect it to all work.

It also definitely supports the Pi Pico - i've used it on it before, no issues.

A simpler rundown of these RTOSen would be:

1. FreeRTOS - supported by roughly everything, but drivers and such are mostly per-SOC/device, which is a pain in the ass. The APIs are not very user-friendly, but you get used to them.

To give a sense of level - if you wanted to use bluetooth with FreeRTOS, you get to find your own stack.

2. Zephyr - supports real hardware abstractions and supports most SOCs. You may have to do a little board work.

If you wanted to use bluetooth with Zephyr, it has a bluetooth stack you can use; you may have to add a little support for your HCI.

3. NuttX - not great support, but very cool if you can get it working - not really highly supported by industry yet.

I never got far enough into NuttX to try bluetooth in NuttX.
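
To make the Zephyr bluetooth point concrete: pulling in the stack is mostly Kconfig. A minimal prj.conf sketch for a BLE peripheral might look like this (option names are from memory and can drift between Zephyr releases, so treat it as illustrative):

```conf
# prj.conf - illustrative; check your Zephyr release's Kconfig docs
CONFIG_BT=y                        # enable the in-tree Bluetooth stack
CONFIG_BT_PERIPHERAL=y             # act as a BLE peripheral
CONFIG_BT_DEVICE_NAME="my-device"  # advertised device name
```

From there it's bt_enable() plus your GATT services in application code; the HCI layer is the part you might have to wire up for an unsupported radio.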

There is also mbed, but we'll skip it.

In practice, people in the RTOS world will usually go with what the SOC vendor supports. So if you are using Nordic stuff, you are probably on Zephyr. If you are using NXP stuff, probably FreeRTOS, etc.

That is how they get good support.

  • bborud 3 days ago

    Quick question: do you develop for prototyping boards or do you write firmware for OEM devices?

    I have never encountered a single project where we had to do firmware for an OEM device where developers didn't struggle with Zephyr. Not once. Nor have I actually met any of those mythical developers that do actual firmware work on shipping products who think Zephyr hardware abstractions actually help. I'm not denying they exist. I just haven't encountered any in the past 5 or so years.

    • DannyBee 3 days ago

      Prototyping boards - I can totally believe trying to do firmware on a shipping product with real constraints in zephyr would be a pain in the ass because you don't have enough control.

      I often use C++ with zephyr, which also pretty much immediately disqualifies me from that club :)

      Though i've done it plenty of times with C only and nordic boards.

      Amusingly, I got into it years ago because I was porting my dust collector controller to an RTOS, and it was very easy to port to zephyr (I had to hack up the C++ threading support, but the hardware was very easy to make work).

      But i've since used it for everything from "the keypad and receiver that is on the gate to my house" to embedded devices that transmit flow data, etc. I have had to fix bugs in the hardware abstractions for LoRa (interrupt-based packet reception did not work on all chips), and do things like add hardware acceleration for LVGL, but overall the abstractions work pretty well for me. Having had to build these myself in FreeRTOS, it's just not worth my time.

      I view it as more of a "less hacky arduino" than "serious thing i would base my life on".

      At most, i would use it for nordic boards to develop bluetooth stuff (because they have such good support for this).

      Anything that requires real serious guarantees i'll use PLC hardware, not zephyr. But i'm in the machine world, so this is like "make sure spindle has stopped moving before allowing tool release" type stuff that needs to be deterministic, but is also fairly simple. It does not require more than PLCOpen or ladder.

      • johnwalkr 3 days ago

        Glad you mentioned PLCs. I keep seeing more and more young engineers reinventing wheels with raspberry pi and arduinos, especially for things like test jigs. It’s almost like PLCs are a secret in some industries.

        • mardifoufs 3 days ago

          It's not that they are a secret. It's that they are sometimes cost-prohibitive and lock you into a proprietary, rigid ecosystem regardless of which PLC you choose.

          PLCs are awesome for what they are intended for, and they work very well with each other and with equipment made with PLCs in mind. But they are an immense pain to "integrate" into an existing non-industrial system. They are basically in their own world, with tooling that's specific to the PLC world and very little open source support.

          I agree that for some one-off things like controlling a CNC, they might be more useful than a Frankenstein esp attached to some servo controller though.

          • DannyBee 3 days ago

            I agree on both counts.

            You can easily get locked in. For anyone else who is just sort of lurking (you already know what i'll say):

            Codesys is basically the standard here, and the way to "avoid" lock-in - it has the most hardware support, and lots of vendors use it under the covers. This gives you some standardization. It supports IEC 61131-3, will happily export it as text or whatever, and i've actually moved code between implementations.

            You can use Codesys Control RTE on anything that runs windows or linux to get soft-realtime support.

            Twincat is similar (it was based on codesys at one point but no longer).

            Integration is, as you say, a pain outside of industrial land.

            But i will admit i am amazed that i have an entire CNC machine built out of ethercat servo drives, I/O, VFD, vacuum pump, pneumatic valve actuators, limit switches, etc. They are all from different manufacturers. But damned if it doesn't work perfectly, and i only have an ethernet cable running between 99% of things, where it used to require a metric ton of cables and you were just flinging bits and analog current around between things. If i bothered to update the spindle to ethercat, the only real physical I/O would be brake power relays (unavoidable) and emergency stop. I do have one modbus thing (dust control flow monitor) i use as well.

            It also is pretty good at hiding complexity - I can link a variable to an I/O input or analog value or VFD status word, know that it will deterministically update. I can set up a structure of bits for each pneumatic valve and map it to the manufacturer's single I/O word and again, get deterministic two-way updating and not worry about it.
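
            Concretely, that kind of mapping looks roughly like this in IEC 61131-3 structured text (variable names and the %I/%Q addresses here are placeholders - the real ones come from your fieldbus mapping):

            ```
            (* illustrative sketch only - addresses are placeholders *)
            VAR
                ReleaseRequested : BOOL;
                SpindleStopped AT %IX0.0 : BOOL;  (* input bit linked to the drive's status *)
                ToolRelease    AT %QX0.1 : BOOL;  (* output bit linked to the valve *)
            END_VAR

            (* only allow tool release once the spindle reports stopped *)
            ToolRelease := ReleaseRequested AND SpindleStopped;
            ```

            The runtime keeps the AT-mapped variables in sync with the fieldbus every scan cycle, which is the deterministic two-way updating i mean.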

            Now, can i get status out of this thing? Well, no - to your point, something else needs to speak modbus, ethercat, profinet, ethernet/ip, or pure digital i/o to it.

            These days i could publish status to MQTT, but something like "expose an HTTP port that outputs a bunch of JSON" is totally uncommon, and will net you strange looks. It's like you are asking about cold fusion.

            Don't even get started on controlling it from the other side through something like that.

            But yeah, otherwise paying 500 bucks for a license to ladder program a PLC is not a pleasant thing.

            • vvanders 3 days ago

              It's not Arduino cheap, but I've been a fan of Automation Direct - they've got a couple of fairly capable PLCs and they don't try to charge you for the software and/or support.

            • KANahas 2 days ago

              As someone with some EtherCAT and CNC experience, what’s your interface from gcode to your CNC? Have you written custom software or is there an EtherCAT-compatible gcode sender I’m unaware of?

              • DannyBee 2 days ago

                There are actually many - almost every serious (IE 100k+) wood router is ethercat.

                Centroid (known for the Acorn board) released Hickory, which does ethercat servo drives but not full ethercat support. It runs the same CNC software as Acorn/et al.

                Vital Systems makes an ethercat motion controller that interfaces with Mach4. They let you use any ethercat device you have an ESI file for, and map the inputs/encoders/output types to Mach4 data of various sorts (digital inputs, analog inputs, encoders, etc).

                MachMotion also has an interface to Mach4, theirs is a soft realtime motion controller based on RSI's (very well known/used for robotics) motion planning.

                Those are the more standard ones.

                Twincat also has an NC interface that supports gcode but you'd have to make your own UI.

                LinuxCNC can do ethercat, but i've never considered it :)

                There are also more standard hardware solutions. Syntec's hardware controller can do ethercat, etc.

            • mardifoufs 3 days ago

              Yeah honestly, I'm in awe of how... weirdly well PLCs work? Yes, they aren't exactly super complex usually, but they just work together. You can know exactly what your PLC will work with, you can see what it does in its vendor-supplied GUI, it can be basically plug and play.

              In a way that would be impossible without the insular ecosystem that the PLC world has, but that really doesn't matter considering how well they do the job.

              And yeah, I think the reason why it hides complexity pretty well is that they are meant to be field repairable by regular technicians who don't necessarily know a lot about the underlying systems. That also makes it super easy to use... once you set up everything that is haha.

              Thank you for the pointers. I heard about codesys but I always assumed that vendors all used their own proprietary islands of standards and only paid lip service to interop. I'm only passably familiar with Siemens PLCs so not super knowledgeable either!

      • bborud 3 days ago

        Have you tried ESP32 and ESP-IDF (FreeRTOS?). I'd be interested in your take on that after using Zephyr :-)

        • DannyBee 3 days ago

          I have tried ESP32 a bunch, ESP-IDF itself less.

          I mainly used ESP32 through platformio (which used the arduino compatibility layer of ESP-IDF), and then through the rust support.

          I did use ESP-IDF directly for one project, but only for a short time. For that application, it was just way too high-power and too finicky (it was easy to crash it by doing normal things - admittedly, it could be that there are just a lot of crappy ESP32 boards out there).

          I did find ESP-IDF a lot nicer than either platformio or the rust support, for sure. It was nice having regular old cmake and stuff that just worked for you, without worrying about it. For a simple app, i would use it over zephyr, no issue.

          But zephyr's abstractions have bought me stuff. For example, I have learned how to make zephyr's MCU management subsystem do OTA over bluetooth across any device that supports mcuboot (IE everything). So like, i don't worry about needing to use each manufacturer's OTA support. It also works with nuttx if i ever go that way. It's hard to give stuff like that up for my use cases. I can walk near my gate transmitter/receiver, or near a machine, or near anything else i've built, and using the same tools, make sure the device is working and update the device with an image, without needing to worry about or keep 75 types of OTA tools and keys around :)
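
          The Zephyr side of that is mostly Kconfig too. Roughly (the mcumgr option names were reshuffled around Zephyr 3.2, so check your tree rather than trusting these exactly):

          ```conf
          # prj.conf sketch for SMP/mcumgr OTA over BLE - illustrative only
          CONFIG_BOOTLOADER_MCUBOOT=y   # build an MCUboot-bootable image
          CONFIG_MCUMGR=y               # device management subsystem
          CONFIG_MCUMGR_TRANSPORT_BT=y  # SMP transport over Bluetooth
          CONFIG_MCUMGR_GRP_IMG=y       # image group: upload/list/confirm
          CONFIG_MCUMGR_GRP_OS=y        # os group: echo, reset
          ```

          Then any generic SMP client (the mcumgr CLI, nRF Connect, etc.) can push images, which is why one set of tools covers every board.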

          (I could make it work over wifi or lora or ... easily too, but i don't really need to).

          These devices do not all use the same SOC - the gate transmitter is a very low-power sleep device (the small battery will last about 21 years at the current rate). The gate receiver is not. The embedded sensors on machines are yet a different SOC (nordic), with other types needing a little more power being higher-power STM32s.

          There is even an NXP device around somewhere from when i was experimenting with LVGL.

          It is possible to make this kind of thing work with freertos (they have a 3 year old not-updated labs project to support mcumgr, for example). But i don't want to have to combine my own pieces, etc. This, of course, is basically the point of FreeRTOS, so it's not a surprise i don't go that way :)

          ESP-IDF obviously provides its own OTA layer, though it leaves transmitting data and management to you. I did build an OTA-over-bluetooth impl that worked pretty well (updates at about 50-200 kB/s) before i moved entirely to the zephyr one.
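
          For reference, the ESP-IDF OTA layer i'm talking about is just a handful of calls; a rough sketch of the receive path (transport and error handling omitted, so treat it as a sketch, not a drop-in):

          ```c
          // sketch of ESP-IDF's OTA primitives - error checks omitted
          #include "esp_ota_ops.h"
          #include "esp_system.h"

          void ota_receive_chunk(const uint8_t *chunk, size_t len, bool last) {
              static esp_ota_handle_t handle;
              static const esp_partition_t *part;
              if (part == NULL) {  // first chunk: open the inactive slot
                  part = esp_ota_get_next_update_partition(NULL);
                  esp_ota_begin(part, OTA_SIZE_UNKNOWN, &handle);
              }
              esp_ota_write(handle, chunk, len);  // stream each chunk to flash
              if (last) {  // final chunk: validate image and switch slots
                  esp_ota_end(handle);
                  esp_ota_set_boot_partition(part);
                  esp_restart();
              }
          }
          ```

          Getting the bytes to the device - BLE GATT, HTTP, whatever - is the part ESP-IDF leaves to you.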

          • dayjaby 3 days ago

            Do you by chance have an example repo with mcu management working? I always struggled with that in Zephyr :)

    • TickleSteve 3 days ago

      Agree...

      As an embedded developer for over 25 years, I would never use the provided hardware abstractions from one particular vendor. Typically, you use your own abstraction layer over multiple vendors BSPs/OS/libraries.

      One of the driving principles behind embedded software is minimalism because you pay real money for every byte and cycle. That leads you to creating minimal abstractions for your particular use case that still gives you the flexibility that is also often required (multiple sources for processors for example).

      • londons_explore 3 days ago

        In the VC/startup world here in HN, I suspect lots of embedded engineering won't be highly cost constrained, and picking a microcontroller with 1MByte of flash when the task could have been done with 1 Kbyte of flash will be seen as no big deal.

        In turn, that means a focus on premade software to reduce Dev time rather than optimizing every byte.

      • j-krieger 2 days ago

        Eh. We mostly went with IDF since we run all our products on ESP boards. I really like their abstraction layers.

    • 5ADBEEF 3 days ago

      anybody doing new BLE products has a 50/50 shot of using Zephyr in current year. I think the real benefit of Zephyr is that the wheel has already been invented, no need to do it yourself

  • anymouse123456 3 days ago

    "Zephyr is easy to use"

    That has not been my experience.

    From where I sit, Zephyr has very pretty marketing material.

    Behind all that snazz, Zephyr is outrageously bloated, extremely slow to compile and wildly difficult to get up and running.

    • DannyBee 3 days ago

      I dunno, maybe i got lucky. It takes me maybe 10 minutes to get clion + zephyr started and working for a new project.

      If there is board work, at that point i'm playing with making the board work.

      I started with platformio's zephyr support and then moved to clion as platformio kind of died.

      I have never, and would never, try to use west as the main build system of my project directly.

      As for bloat/compilation speed, i guess it all depends on what you are trying to use it for. As per the other thread - if you are trying to use it as "an RTOS that is a less hacky version of arduino stuff", i think it works great.

      If you are trying to use it for "i have meaningful hardware constraints and need to keep track of every byte", i doubt it would work well compared to FreeRTOS.

      I think the limit is probably "i am using nordic boards to develop bluetooth stuff".

      I do agree they try to sell it for a lot of things.

      • anymouse123456 3 days ago

        Thanks for the thoughtful response. I was really frustrated by Zephyr, but I'm glad to see it's helping some folks.

      • einsteinx2 3 days ago

        What do you mean by “platformio kind of died”? I used to use it a lot but haven’t done any microcontroller projects in a while.

        • bborud 3 days ago

          I can't speak to what he meant, but if I were to guess, Zephyr isn't necessarily that easy to wrangle into shape for PlatformIO so despite a lot of initial enthusiasm and optimism, progress has been a bit disappointing.

          As for why, it is probably because MCU makers are a bit nearsighted when it comes to the software side, so you get little or no help from there. The strategic decision makers don't understand software. (That's not my assessment, by the way, but that of people I know who either work for one of the major MCU makers or have worked there.)

          So they do their own thing, they don't do it particularly well, they don't see that they don't do it particularly well or why it is even their problem. I also think that they are a bit afraid of common tooling - possibly because they think it is giving up control or robbing them of the opportunity to differentiate themselves (which is tragically funny since most can, at best, hope to be no worse than the competition).

          You'd think that if lots of players can agree on using Zephyr it shouldn't be hard to make them agree on supporting sensible common tooling (akin to PlatformIO), but then again, these companies don't really get software.

          (I was tempted to say "modern tooling", but then I remembered that I hate it when people use the word "modern" so Dobby had to iron his hands and delete it).

        • DannyBee 3 days ago

          Most manufacturers seem to have abandoned trying to support them, their community has slowly disappeared, and you can see it.

          They seem to only care about Espressif these days.

          Zephyr support has not been updated in ages (since 2021)

          Patches submitted to platformio arduino to add Giga R1 support have gone unreviewed for years (I get poked every so often because i did some work on it, so i see the github comments).

          The only things that seem to be in okay shape these days are ESP and STM32.

          Given the other options at this point, it is nowhere near as useful as it used to be.

          • Kubuxu 3 days ago

            That is partially on PlatformIO. AFAIK, they started pushing hard on the scheme "be a partner (meaning pay us) or we won't even accept external contributions to support your platform".

            ESP had a pretty public dispute with them (payment wasn't explicitly mentioned but can be inferred).

            • DannyBee 2 days ago

              Yeah, they thought they controlled the eyeballs, and it turns out they did not.

          • einsteinx2 2 days ago

            Ah that’s unfortunate, I liked the idea of Platformio.

  • shadowpho 3 days ago

    Why skip mbed? That’s the “best” part!

bborud 4 days ago

Having toolchains installed system-wide in the traditional UNIX way is painful, and dare I say it, not the smartest of approaches. If it works for you: great, but if you work on a project with multiple developers, sometimes working on multiple projects that have different targets, you will spend a lot of time trying to figure out build and configuration problems.

It also doesn't help that people keep using Python for tooling. Why would you want to insist on using a language that brings its own versioning problems and which will behave differently on each developer's computer? I've done embedded development for about a decade now (both as a hobby and professionally) and it is puzzling that people think it is okay to spend a week trying to make everyone's setup do the same thing on a project and then not see how this is a problem.

It is a problem. It is annoying. It does waste time. It is unnecessary.

Tools should be statically linked binaries. I don't care what language people use to write tools, be it Rust, Go, C, C++. I just wish people would stop prioritizing ad-hoc development and start to make robust tools that can be trusted to work the same way regardless of what's installed on your computer. Python doesn't do that. And it doesn't help that people get angry and defensive instead of taking this a bit more seriously.

That being said, things like PlatformIO are a step in the right direction. I know it is a Python project (and occasionally that turns out to be a problem, but less often than for other tools), but they have the right idea: toolchains have to be managed, SDKs have to be managed, libraries have to be managed, project configuration has to be simple, builds have to be reproducible, and they have to be reproducible anywhere and anytime.

I wish more of the embedded industry would realize that it would be better to have some common efforts to structure things and not always invent their own stuff. I know a lot of people who work for some of the major MCU manufacturers and I am always disheartened when I talk to them because they tend to be very near-sighted: they are mostly busy trying to solve their own immediate problems and tend to not have a great deal of focus on developers' needs.

  • bschwindHN 3 days ago

    I refuse to use anything with python tooling now. I just don't. It is just so extremely rare that things work on the first try with it.

    I have a keyboard project that runs on an RP2040, and the firmware is in Rust. Here are the steps to flash it if you're starting with just the repo and no Rust toolchain:

    $ # install rustup
    $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    $ rustup target add thumbv6m-none-eabi
    $ cargo install elf2uf2-rs
    $ # put the keyboard in bootloader mode, then:
    $ cargo run --release

    This is relying on rustup and Cargo to do the heavy lifting of toolchain management and building, which they are fantastic at. Python projects don't even come close.

    • bborud 3 days ago

      That looks quite lovely. I really hope that MCU manufacturers invest in Rust and that they both manage to do so in mechanical sympathy with the Rust community and that they realize they have to cooperate on common tooling.

    • franga2000 3 days ago

      You're comparing the "traditional Linux approach" to curl-pipe-bash, not comparing the actual environments/toolchains.

      Platformio: curl https://.../get-platformio.py | python3 && pio run

      Pipx: apt install pipx && pipx [build-script.py]

      Pipenv: pip install --user pipenv pyenv && pipenv run [build-script.py]

      • bschwindHN 2 days ago

        As mentioned already, it's not about the method of installation. In my experience python-based tools are slower and buggier than their native counterparts, and that's only after going through the installation process.

        Tools written in C / C++ / Go / Rust just don't have nearly as much fuss, and they run faster and are smaller too.

        • franga2000 2 days ago

          Go & Rust maybe, but I've had nothing but trouble with C/C++. Bigger projects usually have their build process well documented, but smaller things usually give you an apt-get command for Ubuntu 18 and maybe a Makefile. Good luck running that even two years later on a different distro. The package names won't match, and when you find them, the new version won't be compatible and the old version won't be in the repos, so you need to build that from scratch, recursively...

          • bborud a day ago

            It isn't so much that it is hard to build things - it is more that it is hard to distribute software for Linux because there are so many targets to build for. It is unreasonably hard to build binaries for distribution to more than one Linux distro. So people are disinclined to even try.

            You can, of course, build static binaries (and people really should do that more often) and distribute tarballs, but then you might run into all manner of licensing issues. (Not to mention the tedious discussion about upgrading shared libraries under the feet of applications and praying it doesn't introduce hard to detect breakages).

            One direction I'd love for distro designers to explore is if Linux distributions were to abandon the traditional splatterfest style of software installs, where an application scatters lots of files across several filesystems and depends on system wide libraries, and instead puts each application in a self-contained directory. Or rather, two: one for the immutable files (binaries, libraries etc) and one for variable state. That way you could choose if you want to do static linking or dynamic. Because any shared libraries could be part of the immutable application directory.

            Oh yes, it'll result in a lot more disk use, but disk drives are cheap, and libraries rarely make up that much of your disk use anyway. But there is a fair chance you could address some of that with filesystem level deduplication.

            You could also have systems for distributing shared libraries from trusted sources so applications do not necessarily have to include the shared libraries, but that they can be fetched as part of the install system. And then manage shared libraries locally with hardlinks or some form of custom filesystem trickery. This would have a few added benefits (opportunities for supply chain security, reducing size of distributed packages etc).

            macOS does this only partially, but applications still depend on shared libraries and you have the horror that is the Library directories which are somewhat randomly structured, resulting in lots of leftovers when you try to remove applications. macOS has some interesting ideas, but the execution of them is....inconsistent at best.

            The reason package managers don't work well is that they are trying to solve a lot of problems we shouldn't be having in the first place. And because nobody wants to simplify the problem they have to solve, distribution maintainers will keep writing package management systems that are a pain in the ass.

          • bschwindHN 2 days ago

            Building C projects from scratch usually isn't bad. I never trust debian-based distros to have anything up to date, so that's usually a lost cause anyway. C++ on the other hand...yeah those projects aren't often fun to build.

      • bborud 2 days ago

        The key isn't so much how you install the tools, since that's hopefully something you only do very occasionally, but the quality of life those tools provide once installed.

  • tpudlik 5 hours ago

    I don't think people choose Python because of its virtues as a language for tooling (it's not great from that perspective; statically linked binaries would be better). They choose it because of the ecosystem of libraries. For data analysis, visualization, and scientific computing, there are no comparable offerings in any other language. And a lot of this is stuff you _really_ don't want to be reimplementing from scratch, because it's very easy to introduce bugs (around numerical stability, say) that cause your software to produce correct results 99% of the time, and plausible-looking but totally incorrect ones 1% of the time.

  • LtdJorge 3 days ago

    Wouldn't Docker (or other container tools) be a good candidate? Instead of installing the toolchains locally, do the build on a container based on an image with the exact toolchain you need.
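
    Something like this, pinning an exact toolchain release in the image (versions and URLs here are illustrative - substitute whatever your project actually needs):

    ```dockerfile
    # illustrative build-container sketch - pin everything
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            cmake ninja-build git python3 wget xz-utils && \
        rm -rf /var/lib/apt/lists/*
    # pin an exact toolchain version so every developer and CI run matches
    ARG SDK=0.16.8
    RUN wget -q https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${SDK}/zephyr-sdk-${SDK}_linux-x86_64_minimal.tar.xz \
        && tar -xf zephyr-sdk-${SDK}_linux-x86_64_minimal.tar.xz -C /opt
    WORKDIR /work
    ```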

    • bborud 3 days ago

      Yes, I used to do that for a couple of projects, but it wasn't as smooth as I had hoped. For instance I had a bunch of challenges getting USB stuff to work properly. And there were some tools that still required me to lug around a windows laptop for parts of the process.

      I'd much rather have proper tooling to begin with rather than have to pull out everyone's favorite roll of duct tape (Docker) to hide the mess.

    • amluto 3 days ago

      It is indeed possible to make a nice container that contains a program, offers a clean interface to that program, and doesn’t package a whole pile of largely irrelevant, guaranteed-to-be-quickly-outdated crap with it.

      But being possible doesn’t mean that anyone does it. Most containers I’ve ever seen are big messes. Possibly GPL-violating messes if the payload isn’t appropriately licensed. Certainly maintenance and provenance disasters. Unquestionably not reproducibly buildable and possibly not buildable at all.

      • yjftsjthsd-h 3 days ago

        > and doesn’t package a whole pile of largely irrelevant, guaranteed-to-be-quickly-outdated crap with it.

        > Possibly GPL-violating messes if the payload isn’t appropriately licensed. Certainly maintenance and provenance disasters. Unquestionably not reproducibly buildable and possibly not buildable at all.

        If static binaries are on the table then none of these things can possibly be disqualifiers.

    • vvanders 3 days ago

      Docker is great if you don't care about your build performance across a wide range of hosts; it's probably fine for small projects, but I've seen incremental build times climb quickly where docker support wasn't the best (i.e. OSX).

      • nine_k 3 days ago

        You mean, where native Docker support is completely absent, and the whole thing has to run in a VM?

      • petre 17 hours ago

        Well, OSX and Windows run it in a VM, so there you have it.

    • buescher 3 days ago

      For professional embedded development, where reproducible-in-the-future builds are frequently valued, it's common, when possible, to set up your whole toolchain in a vm that's kept under version control.

  • naitgacem 3 days ago

    Working with Android app development has got me spoiled in this area. One just needs to run a command and everything is set up and works the exact same way, whether locally or on CI, with the same version of both the build tools and the libraries. It still pains me when I go back to dealing with CMake and Make and trying to get libraries installed, as opposed to, say, compile 'library-name-here'.

  • anymouse123456 3 days ago

    Just want to second this entire post. It feels like I wrote it myself.

  • Gibbon1 2 days ago

    I have a command line tool I wrote in C# back in 2011 and it just still works.

drrotmos 4 days ago

Personally, I've begun switching over my RP2040 projects to Rust using Embassy. Rust did take some getting used to, but I quite like it. Not an RTOS, but it satisfies a lot of the same requirements that'd lead you to an RTOS.
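
For a taste, a minimal Embassy blinky on the Pico looks roughly like this (pin and API names from memory - double-check against the embassy-rp examples before trusting it):

```rust
// sketch of an Embassy app for the RP2040 - not a drop-in file
#![no_std]
#![no_main]

use embassy_executor::Spawner;
use embassy_rp::gpio::{Level, Output};
use embassy_time::Timer;
use {defmt_rtt as _, panic_probe as _}; // logging + panic handler

#[embassy_executor::main]
async fn main(_spawner: Spawner) {
    let p = embassy_rp::init(Default::default());
    let mut led = Output::new(p.PIN_25, Level::Low); // onboard LED

    loop {
        led.toggle();
        Timer::after_millis(500).await; // async delay, no RTOS tick juggling
    }
}
```

Tasks are just async fns you hand to the executor, which covers most of what I'd otherwise reach for RTOS tasks and queues to do.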

  • bschwindHN 4 days ago

    If you go this route, I would recommend starting off with the rp2040-hal crate and once you hit a pain point of managing multiple tasks, look into embassy or RTIC.

    Rust and Cargo take the pain out of building and flashing to the RP2040 (or STM32, for that matter) - it's the most pleasant embedded setup I've ever used.

  • lulf 3 days ago

    That is what I've observed too. My impression is that people go to RTOS for libraries and dependency management, which you kinda just get out of the box with Rust, cargo and crates.io.

    A lot of applications simply don't use the MPU. And then consider what you get from Rust memory safety and the reduction in overall complexity of the firmware without the RTOS.

  • stockhorn 4 days ago

    I can totally second this. Embedded rust in general has been an excellent experience for me. And async with embassy-executor works really well and takes a lot of pain off the whole rtos design process.

  • eloycoto 4 days ago

    100%, Embassy is great and I'm in love with it. If you add the PIO interface for the RP2040, the setup makes code super simple and beautiful, and difficult to achieve with another processor.

  • chappi42 4 days ago

    Wow that looks cool. Thanks for the mention!

  • lloydatkinson 4 days ago

    I believe this has come up before but what are some of the differences between RTIC and Embassy?

    • Katzenmann 4 days ago

      RTIC is really simple and doesn't use its own HALs. Also, its macro system makes it hard to modularize your code since all tasks need to be in one module. I've played around with it a bit and it seems like it could be great in the future, but currently not really.

      Embassy has its own HALs, which makes it better at async, and it also has nicer ergonomics IMO

      • liamkinne 4 days ago

        Importantly for RP2040 users, RTIC 2.0.0 is currently single-core only.

        I’m using RTIC for the firmware on my STM32H7 based product (https://umi.engineering) and it has been a joy.

      • jamesmunns 3 days ago

        > Embassy has it's own HALs which makes it better at async and has also nicer ergonomic IMO

        Worth noting, you don't HAVE to use the Embassy HALs with the Embassy executor. However, AFAIK, the only mainstream non-Embassy HALs that support async are the ESP32 HALs.

        There's no technical lock in, it's just that everyone I've seen implementing async support (outside ESP32) tends to do it within the Embassy project today. That may change over time.

        • Katzenmann 2 days ago

          The atsamd-rs dev has been working on async support without Embassy, as far as I know, but I don't know if it's usable yet

    • lulf 3 days ago

      One of the things not immediately apparent for people coming to Embassy is that you can mix and match RTIC with the Embassy HALs. So the more appropriate comparison is RTIC vs the Embassy async executors.

5ADBEEF 4 days ago

the pi pico is 100% supported in Zephyr. https://github.com/zephyrproject-rtos/zephyr/tree/main/board... Did the author not check the docs? https://docs.zephyrproject.org/latest/boards/raspberrypi/rpi...

Additionally, you aren't intended (for many situations) to use a single "main" Zephyr install, but to include what external modules you need in your project's west.yml. If you have a number of projects sharing the same Zephyr install that's a separate discussion but installing every possible toolchain/HAL is not the only way to do things.

  • introiboad 3 days ago

    Also it should be trivial to build using the GNU Arm Embedded toolchain if the author did not want to install the Zephyr SDK, not sure why this did not work for them.

RossBencina 4 days ago

No mention of ThreadX, which is open source these days

https://github.com/eclipse-threadx/threadx/

  • chrsw 3 days ago

    I don't know why ThreadX isn't mentioned more frequently in these lists. It's relatively simple to use and understand.

    • bangaladore 3 days ago

      By and far my favorite RTOS from a source code and API perspective. It's just elegant.

      And it has a POSIX API layer if you really need that.

      • joezydeco 3 days ago

        I'm looking to William Lamie's next project after ThreadX, the PX5 RTOS. The POSIX API won't be an afterthought.

        https://px5rtos.com/

        • bangaladore 3 days ago

          Yes, I have seen this. Looks interesting.

          It's certainly targeting a more enterprise audience (much like ThreadX was originally)

          You have to pay to use it (price unknown, other than maybe >= $5k per year?), and it looks like it will be prohibitively expensive (for my use cases at least) to have source access.

          To me, those make it a nonstarter.

GianFabien 4 days ago

Great comparison of RTOS choices.

Personally I find MicroPython to be an easier path. async/await based cooperative multitasking works well for me. My latest project drives 6 stepper motors and a variety of LEDs and scans buttons - all in what appears to be real time to the user.
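
    The cooperative async/await style described above can be sketched in host Python; MicroPython's `asyncio` module implements a compatible subset of CPython's asyncio used here. The task names and the `events` list are made up for illustration - on a board the tasks would toggle `machine.Pin` objects instead.

    ```python
    import asyncio

    # Stand-in for hardware side effects; purely illustrative.
    events = []

    async def blink(period_s):
        # Toggle an LED at a fixed period (bounded here so the demo ends).
        for _ in range(3):
            events.append("led")
            await asyncio.sleep(period_s)

    async def scan_buttons(interval_s):
        # Poll button state; a real handler would debounce and dispatch.
        for _ in range(5):
            events.append("scan")
            await asyncio.sleep(interval_s)

    async def main():
        # Both "peripherals" share one event loop: cooperative
        # multitasking with no RTOS threads involved.
        await asyncio.gather(blink(0.02), scan_buttons(0.01))

    asyncio.run(main())
    print(len(events))  # 8 events total (3 blinks + 5 scans)
    ```

    Each `await asyncio.sleep(...)` is a yield point, which is what makes several "simultaneous" activities appear real-time to the user without preemption.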

  • pjmlp 4 days ago

    These mini computers are much more powerful than the 8 and 16 bit home computers, microPython is more than capable to replace BASIC use cases from those early systems.

    It still surprises me how many don't grasp how little we had available to us, and yet we still managed to use high-level languages.

  • nick__m 4 days ago

    For a PTZ controller (no hard real-time requirements, minimal application memory usage), MicroPython looks like a productive choice. When I tried MicroPython (on the ESP32, but I assume the RP2040 is equally well supported) I was pleased by its Pythonic support for interrupt handlers!

  • dekhn 3 days ago

    I have to assume in this case the stepper motor signals are generated by a hardware peripheral (like RMT on the ESP32), rather than using python code to generate the stepper signal? In my microscope, I run a FluidNC board that's doing near real-time stepper motor control, and control it with a lightweight serial protocol, but I'm looking at https://pypi.org/project/micropython-stepper/ which seems to be using python code with hardware timers.

    • GianFabien 3 days ago

      I was using 28BYJ-48 steppers driven through a ULN2003A. Since the drive rate is 100-600 Hz, it was simple to use sub-ms timing to control signal edges. Microcontrollers run at 10s-100s of MHz, so microsecond timing is well within their capabilities. I haven't yet learnt to use PIOs.
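
      The timing arithmetic above can be made concrete with a small sketch. The half-step table is the standard 8-state sequence for a unipolar motor like the 28BYJ-48; the pin writes are stubbed out, and the function names are invented for illustration.

      ```python
      # Standard half-step coil sequence for a 4-wire unipolar stepper
      # driven through a ULN2003A; each tuple is one (IN1..IN4) state.
      HALF_STEP = [
          (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
          (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
      ]

      def step_interval_us(rate_hz):
          # At 100-600 Hz drive rates this is roughly 1.6-10 ms per step,
          # comfortably above what a 100 MHz-class MCU can time.
          return 1_000_000 // rate_hz

      def run_steps(n, direction=1):
          # Yield successive coil patterns; a real driver would write the
          # four pins and sleep step_interval_us() between iterations.
          idx = 0
          for _ in range(n):
              yield HALF_STEP[idx % 8]
              idx += direction

      print(step_interval_us(600))  # 1666 microseconds per step
      ```

      Note that adjacent entries in the table differ by a single coil, which is what gives half-stepping its smoother motion compared to full-stepping.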

      I'm old-school. Start by reading datasheets and then try out the simplest thing that could work and iterate from there. In general, I find third-party libraries more complex than my needs require.

      Took me a fair bit of reading the MicroPython documentation and experimentation to fully grok the capabilities of the tasking model and event loops. But once I got over that, it led to clean, single-responsibility class implementations.

    • dragontamer 3 days ago

      I'd have to imagine that stepper motor control is a job for the RP2040 PIO?

      I can't say that I'm too familiar with PIO but a tiny bit of state (literally a few bits) and a few bytes of code should be enough to make a good driver and offload the whole task from the CPUs.

      I know RP2040 has dual CPU but motor control is often the job of a timer on other uCs.

      • dekhn 3 days ago

        Hmm. Looking at https://github.com/redoxcode/micropython-stepper/blob/main/s... it looks like they basically run a fairly tight loop in Python using a Timer (which uses the hardware timer on ESP32 and RP2040, likely), poking the step pin in Python in the Timer callback. There is no specific timing for the step pulse, they just turn it on and then immediately off. I guess this works because the micropython implementation does fast attribute access and function calls.

        On a microcontroller, you could use PIO or I think the RMT peripheral or even just interrupt-driven pin poking.

        That's good for many use cases, although I strongly suspect it would fall over once you tried a very high step rate with microstepping (I run at 800-6400 step/sec) but I will try this out both for my lightweight test steppers (which use AccelStepper) as well as my microscope (which uses FluidNC). The scope will make it quite clear if we lose steps as I have a visual reference (the microscope slide has a calibrated fiducial marker).

  • matt_trentini 3 days ago

    Absolutely, this would be a straightforward MicroPython project :)

rkangel 3 days ago

I really want to give Hubris a try on a proper project (https://hubris.oxide.computer/reference/).

As an architectural approach it aligns closely with what I am going for in embedded land, which I currently do in C with more pain. And it's not dissimilar to what you do in Erlang/Elixir in hosted land.

Embassy looks to be a good choice in more memory constrained situations where you can't afford multiple stacks.

boffinAudio 3 days ago

Always, always, always start your new embedded project in a Virtual Machine. NEVER MIX TOOLS ON THE SAME SYSTEM.

This has been the #1 cause of quality issues for my (commercial) projects.

If you start a project with a new chipset, with a new vendor - build a new VM, install the vendors tools (and only the vendors tools) in that VM, and do your builds from there.

Do your hacking on your own (non-VM) machine, sure. That's perfectly fine.

But ALWAYS use the VM to do your releases. And, for the love of FNORD, KEEP YOUR VM IN SYNC WITH YOUR DEV WORKSTATION.

Disclaimer: currently going through the immense pains of dealing with a firmware build that absolutely has to be fixed while the original dev is on vacation - nobody can get into their workstation, the VM prepared for the task is 6 months out of date, and the customer is wondering why they should pay for the whole team to do something that one very special programmer is the only one on the planet can do .. grr ..

  • matt_trentini 3 days ago

    Or, better still, use containers. Haven't used virtual machines in years - and I don't miss them one bit!

  • ptman 3 days ago

    Do nix and flakes help with a reproducible build environment?

  • jononor 3 days ago

    I feel your pain. Releases should be built by CI system, imo. Tag a release in git, and binaries plop out (after tests have passed).

sgt 4 days ago

Can't go wrong with FreeRTOS. It's basically an industry standard at this point.

  • alextingle 4 days ago

    What's the solution to his problem getting printf() to work?

    That does sound like a PIA.

    • retSava 3 days ago

      If I understood it correctly, the compiler suite shipped with newlib, which may or may not have a printf that supports re-entrant use (several threads calling printf concurrently), as opposed to the typical synchronous usage that blocks across I/O. The re-entrant variants use malloc/free for, iiuc, a per-thread buffer.

      In many cases on smaller systems you actually don't want your OS to do dynamic memory allocation, since it opens the door to all kinds of bugs: double-free, use-after-free, running out of heap, or a too-fragmented heap.

      It's also easier to calculate memory usage (and thus be fairly certain it will be enough) if you allocate as much of the memory as possible statically. Hence why some coding standards, such as MISRA-C, disallow the use of dynamic memory allocation.

      Exactly what caused the lock-up here isn't explored further, but it might be that there was not enough heap, or no working malloc implementation linked into the compiled binary (it could be a stub), or a non-re-entrant printf linked in while they called printf from several threads.

    • klrbz 3 days ago

      The Pico SDK has plugin-based stdio; it ships two implementations, UART and USB CDC, but you can add new ones by implementing a few callback functions. The UART plugin provided with the SDK can be disabled and replaced; the default implementation is very simplistic and will block.

      The USB CDC version is much faster but may not always be appropriate for your project.

      I implemented one that uses a buffer and an interrupt to feed the UART in the background, and then I disabled the plugin the Pico SDK provided. I'm not using an RTOS.

      The SDK example [uart_advanced](https://github.com/raspberrypi/pico-examples/blob/master/uar...) [github.com/raspberrypi] shows how to set up UART interrupts using the Pico SDK
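
      The buffered, interrupt-fed approach described above boils down to a ring buffer between the application and the UART. Here is an illustrative host-Python sketch of that pattern (not the actual Pico SDK API; the class and method names are invented):

      ```python
      # Sketch of buffered stdio: writes go into a ring buffer and an
      # "interrupt" drains it to the UART in the background, so the
      # application-side write never blocks on the wire.
      class TxRingBuffer:
          def __init__(self, size):
              self.buf = bytearray(size)
              self.head = 0  # next write position (application side)
              self.tail = 0  # next read position (drained by the TX IRQ)
              self.size = size

          def write(self, data):
              # Called from application code; this policy drops bytes
              # when the buffer is full rather than blocking.
              written = 0
              for b in data:
                  nxt = (self.head + 1) % self.size
                  if nxt == self.tail:
                      break  # buffer full (one slot kept empty)
                  self.buf[self.head] = b
                  self.head = nxt
                  written += 1
              return written

          def drain_one(self):
              # Called from the UART TX interrupt when the FIFO has room.
              if self.tail == self.head:
                  return None  # empty
              b = self.buf[self.tail]
              self.tail = (self.tail + 1) % self.size
              return b

      rb = TxRingBuffer(8)
      print(rb.write(b"hello"))  # 5 bytes queued
      print(bytes(rb.drain_one() for _ in range(5)))  # b'hello'
      ```

      On real hardware the drop-when-full policy is a design choice; blocking or overwriting the oldest data are the other common options.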

    • not_the_fda 3 days ago

      Embedded is a PIA; it's not web development and shouldn't be treated as such.

      Printf isn't re-entrant, and they are calling it from multiple threads.

      There are solutions with trade offs: https://interrupt.memfault.com/blog/printf-on-embedded

      Everything in embedded is a tradeoff of space, performance, or cost.

      If you come at it as web development with IO you are going to have a very bad time.

      • whiw 3 days ago

        > Printf isn't re-entrant, and they are calling it from multiple threads.

        This! Simple schedulers generally only allow system calls (such as printf) from the main thread. If you really want to 'print' from a child thread, send a message to the main thread asking that it print the message contents on behalf of the child thread.
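
        The message-passing pattern can be shown with host-Python threads and a queue; on FreeRTOS the queue would be an `xQueue` and the printer a dedicated task, but the shape is the same. All names here are illustrative.

        ```python
        import queue
        import threading

        # Workers never touch the non-re-entrant output path directly;
        # they enqueue messages and a single consumer does the I/O.
        log_q = queue.Queue()

        def worker(name, count):
            for i in range(count):
                log_q.put(f"{name}: {i}")

        def printer(expected):
            # The one thread allowed to "print"; here it just collects
            # the messages so the result is easy to inspect.
            lines = []
            for _ in range(expected):
                lines.append(log_q.get())
            return lines

        threads = [threading.Thread(target=worker, args=(f"t{n}", 3))
                   for n in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        lines = printer(6)
        print(len(lines))  # 6 messages, consumed by a single printer
        ```

        The trade-off is latency and queue memory in exchange for never serializing two threads inside printf itself.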

    • TickleSteve 4 days ago

      FreeRTOS has nothing to do with printf; that's a toolchain/standard-library thing for his particular board.

      • rcxdude 4 days ago

        You will need your printf to be written with FreeRTOS in mind: it's often not re-entrant at all, let alone actually blocking on IO instead of busy-waiting.

        • TickleSteve 3 days ago

          true, my point being that "printf not working" is nothing to do with FreeRTOS.

    • roland35 3 days ago

      My best luck with printf has been to use a Segger debugger with RTT. It streams data over JTAG or SWD and is very fast.

    • constantcrying 4 days ago

      Unlikely to be a software problem. If I had to guess the serial communication bus is misconfigured.

anymouse123456 3 days ago

I've had similar experiences to the OP.

I went ahead and rolled a simple green thread timer.

It doesn't support actual process management like a real kernel, and doesn't make any guarantees about anything, but it has gotten me further than bare metal scheduling and helped me avoid the shit show that is RTOSes.

Think JavaScript timer callbacks with an optional context struct (in C)

It has allowed me to interrogate a variety of sensors, process inbound signals, make control decisions and emit commands, all at various frequencies.
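
A minimal version of that green-thread timer idea - callbacks registered with a period and an optional context, fired from one loop with no preemption - can be sketched like this. The class and method names are made up, and virtual time stands in for a hardware timer:

```python
import heapq

class TimerLoop:
    # Cooperative timer scheduler: a min-heap of (deadline, seq, ...)
    # entries; seq breaks ties so callbacks are never compared.
    def __init__(self):
        self.now = 0
        self.heap = []
        self.seq = 0

    def every(self, period, callback, ctx=None):
        # Register a periodic callback with an optional context object
        # (the "context struct" of the C version).
        heapq.heappush(self.heap, (self.now + period, self.seq,
                                   period, callback, ctx))
        self.seq += 1

    def run_until(self, t_end):
        # Advance virtual time, firing whichever callback is due next
        # and re-arming it one period later.
        while self.heap and self.heap[0][0] <= t_end:
            deadline, _, period, cb, ctx = heapq.heappop(self.heap)
            self.now = deadline
            cb(ctx)
            heapq.heappush(self.heap, (deadline + period, self.seq,
                                       period, cb, ctx))
            self.seq += 1

loop = TimerLoop()
samples = []
loop.every(10, lambda ctx: samples.append(ctx), ctx="sensor")
loop.every(25, lambda ctx: samples.append(ctx), ctx="control")
loop.run_until(50)
print(samples.count("sensor"), samples.count("control"))  # 5 2
```

As the comment says, this makes no guarantees - a long-running callback delays everything behind it - but for polling sensors and emitting commands at a few different rates it is often all you need.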

I highly recommend folks try something like this before ruining your life with these slow, abstract architectures.

roland35 3 days ago

Zephyr is indeed a pain in the neck to learn. I have lots of experience with FreeRTOS, setting up toolchains, etc., but just getting a basic project going with Zephyr is confusing.

bangaladore 3 days ago

I think Eclipse ThreadX is a great option (https://github.com/eclipse-threadx/threadx)

It might be the most used RTOS on the market in the professional space (due to its history), but it's quite frankly stupid simple to use.

It was initially developed by Express Logic, then partnered tightly with Renesas, then sold to Microsoft, and then transferred to the Eclipse Foundation for good.

They also provide NetX(duo), FileX, LevelX, UsbX and GuiX with full source and the same licensing afaik. Personally I don't care for UsbX or GuiX.

huppeldepup 4 days ago

No mention of ChibiOS, unfortunately. I checked and the port still has problems installing on flash, so it runs from RAM only.

I’d also like to suggest uC/OS. It doesn’t take much more than setting a timer in asm to port that rtos but I haven’t had the time myself to try it.

  • stephen_g 3 days ago

    I really like ChibiOS; its HAL and drivers are really nice. I used it in a project back when it was GPL with a linking exception (basically LGPL, except you can't really do shared libraries on microcontrollers).

  • dTal 3 days ago

    I've heard of uC/OS and always wondered - is it pronounced "mucous"?

    • huppeldepup 3 days ago

      The author mentioned this somewhere in his book: competitors pronounced it mucous as a joke/jab. I believe it’s pronounced micro-cos as in microcosmos.

mordae 3 days ago

Rolling your own swapcontext() is not that hard on Cortex-M0, and the Pico SDK lets you override a couple of macros to hook into its locking code.

demondemidi 3 days ago

Gave up on FreeRTOS because printf doesn't work? Someone didn't have the patience to read the documentation.

a-dub 4 days ago

hm. haven't seen camelCase hard real-time task definitions since the vxworks days!

snvzz 4 days ago

AIUI seL4 runs on that chip.

It would likely not be a bad option.

  • rmu09 4 days ago

    Do you have more information? seL4 seems to run on raspberry pi 3/4, but I didn't find any mention of cortex-m class CPUs being supported.

    • snvzz 4 days ago

      Nevermind, I did not realize rp2040 is M0+ thus ARMv6.

      ARMv7 is the minimum, and MMU is required. RISC-V is recommended.

  • TickleSteve 4 days ago

    This is a microcontroller (Cortex-M) not an application processor (Cortex-A).