Embedded Wednesdays
Week 5 - I have a couple of tasks for you.
This week we will be playing around with real time operating systems.
In Embedded Wednesdays, we will be working with the FreeRTOS operating system, and the CMSIS code provided by ARM. All of this comes in our openSTM32.org installation, so nothing new to install.
What is a real time operating system (RTOS)? Let’s break it down. First, an operating system is a program that manages the allocation of resources on a computer. In our system, the resources are CPU time, memory, access to shared resources, and the delivery of messages. You might expect an operating system to provide things like network access, mail systems, and text editors; well no, on these little computers those are applications. Granted, the system we are using is quite feature poor compared to other RTOSes, but the cost to get started is extremely low. Plus, some real products use this RTOS, so its capabilities are evidently good enough for many purposes.
Next, real time. This refers to processing signals as they are being measured. Instead of gathering requests to crunch overnight, a real time system deals with data continually. In addition, the processing can be scheduled to happen at an exact time in the future, not “sometime” after the request is received. Real time systems work to predictable schedules.
There are two types of real time systems: hard and soft. A hard real time system fails when a deadline is missed. An example would be deployment of the landing system on a Mars rover; if the deadline for deployment is missed, the system fails catastrophically. A soft real time system would be a pop machine, where the product can be released immediately or one second late with no consequences.
Cool stuff
How big is that integer?
C was written at a time when computers were million dollar affairs that were ruled over by grey beards in hiking boots (in case a mountain popped up in the middle of the computer room). There were only a couple of models of computer to worry about, so grey beard decided that integers (int data type) should be “the natural size of the computer”. So an int on their machine was 18 bits (not 16). If you were on a later machine it would be 16, then 32, then 36, but it really didn’t matter because you only had one machine, and you only wrote code for that machine.
Scroll forward by 25 years and we have 8 bit Arduinos, 16 bit PIC16s and MSP430s, 32 bit real processors, and 64 bit things that would be considered supercomputers a decade ago but are now $500 desktop boxes that nobody wants.
We want to write C code for all of these processors, using our 64 bit machines, targeting our 32 bit processors, because we can. So, how big is an int? The answer is actually “it depends”. These days an int is defined by the guy who set up the C compiler. You can have 16 bit ints on an 8 bit system, whereby every for loop takes extra time since the 16 bit loop counter is incremented using synthetic 16 bit math on an 8 bit processor (ewwww).
This problem was fixed in 1999. Starting with C99, the standard includes a header file, set up by the compiler writer, that provides replacement data types telling you exactly how wide your integers are, whether they are signed or unsigned, and which ones are fastest for your machine.
The file is called stdint.h and to use it, you place:
#include <stdint.h>
at the top of your file. Among other cool things, stdint defines data types for 8, 16, 32, and sometimes 64 bit signed and unsigned integers. For instance:
int16_t mediumSizedInteger;
This would define a signed 16 bit value that can take the values -32768 through 32767.
uint32_t bigSizedInteger;
This would define an unsigned 32 bit value. The range of values is much different since it doesn’t support negative numbers: 0 through 4,294,967,295. Unsigned values are great for counting things. They are used heavily in embedded systems since we work so much with the physical world, where negative things are pretty rare.
Why do you care? If you do a calculation and come up with a result that is too big for the variables you are using, you encounter something called overflow. Take int8_t, with its range of -128 through 127: if you have a value of 125 and add 10 to it, you get -121. That is going to be a hard bug to find. The add sorta looked like this:
125 the original value
126 +1
127 +2
-128 +3, overflow to the maximum negative value
-127 +4
-126 +5
-125 +6
-124 +7
-123 +8
-122 +9
-121 +10, our final and very wrong answer.
Unsigned numbers act slightly differently; they go from the maximum number to zero and continue from there. A uint8_t at 254 goes to 255, 0, 1, 2, 3, and so on.
So, we can just use 32 bit values all over the place, right? On our processor you could, but the amount of RAM we have is small, and there is no point wasting it. Look at the calculation you are doing and allocate enough space that you don’t need to worry about overflowing.
For example:
We are calculating the pressure on a sensor on the ADC. We have a transfer function of 100 PSI per millivolt.
Our ADC is a 12 bit device (the result will be a 12 bit value) with 16 channels (16 pins are capable of attaching to signals). This converter will translate a voltage into a number. Zero volts will give a value of 0 and 3.3 volts will give a value of 4095. Of course voltages between 0 and 3.3 volts will give a value in proportion. This value is given back to your program as a number of counts.
Input voltages above 3.3 volts or below 0 volts can blow up the processor. Make sure you don’t do that.
Our pressure calculation would be (counts/4095) * 3300 * 100
We get our number of ADC counts and need to figure out what the original voltage was. We start by figuring out what proportion of “full scale” our input sample was. If our count was zero, 0/4095 is zero. If our count was 4095, 4095/4095 = 1, full scale.
But wait, if our count was 3000, 3000/4095 = 0 because we are using integer math on a stupid little computer. We could use floating point numbers, which are a bit slower, or rearrange our math.
(counts/4095) * 3300 * 100 is the same as (counts * 3300 * 100) / 4095, and we don’t get the truncation that integer division causes.
Counts can vary from 0 to 4095, so we take the biggest value. 4095 * 3300 * 100 = 1,351,350,000, which is a seriously big number, but how many bits does it take to represent that? There is a trick: take the log to the base 2 of the number and it tells you how many bits you need. Wait, my calculator does log to the base 10 and log to the base e, but not log to the base 2. No problem: log10(value)/log10(2) gives you log2(value).
The answer is 30.33 bits; rounding up, we need 31 bits, so this calculation must be done using uint32_t values, like:
uint32_t counts, result;
result = (counts * 3300U * 100U) / 4095U;
The <iso646.h> header, and its bitand and bitor operator spellings
Newlib
https://sourceware.org/newlib/
Symbol table, linker file, and map file