From: zeta_no on
Hi to all,

My research into real-time programming has left me with some unanswered
questions.

Proposition 1: The problem with a general kernel, say a genuine Linux,
is that even when stripped down, hard real time cannot be guaranteed,
because we cannot fully control the scheduling policy of the kernel.
Therefore, any loaded module, in kernel or user space, will be given
some CPU time without our consent and will, over time, behave
unpredictably. To simplify grossly, say I want to build a small drone
and I decide to put a minimal Linux install, bundled with cron, on
the embedded hardware.

Question 1: Is it possible that, during my flight, the kernel will
decide to service the cron scheduler in between two sensor readings?
Am I right?

Question 2: Does the use of Ada, with its real-time capabilities,
really help to achieve real time if we know that, ultimately, it is
the kernel that decides what to service? Are the real-time promises
based on the assumption that the designers are brilliant enough to
disable any OS loadable module not required by the embedded
application? If so, this would be a problem for me, because I am far
from being kernel-literate enough to know everything that goes on
inside a particular OS. Maybe a stripped OS is not resource-hungry
enough to be worried about?

Question 3: Does the Real-Time Annex help with that? I mean, with
trying to do real-time programming on a general-purpose OS.

Question 4: Now, if the best option is to use a real-time kernel, what
about the need for a USB camera and a specialized library like OpenCV
to do some computer vision on a multi-core architecture? Support of
this kind on real-time kernels looks almost nonexistent, at least in
open-source options like RTEMS or MarteOS.

Question 5: Would RTLinux be my best bet, or should I try to use a
GigE camera and port OpenCV myself onto RTEMS? I just want insight
into the global issues here, with examples of typical decisions made
in the design of a general robotic embedded system.

That's it for now.

Thank you,

Olivier Henley

From: Dmitry A. Kazakov on
On Sun, 16 May 2010 21:19:00 -0700 (PDT), zeta_no wrote:

> Question 1: Is it possible that, during my flight, the kernel will
> decide to service the cron scheduler in between two sensor readings?
> Am I right?

Depends on the effective priorities.

> Question 2: Does the use of Ada, with its real-time capabilities,
> really help to achieve real time if we know that, ultimately, it is
> the kernel that decides what to service?

No, in general.

However, some Ada vendors provide task scheduling independent of the OS.
That is, the Ada program has just one OS thread, which internally runs all
the tasks. If that thread is never preempted while it is doing something,
that will be real-time.

Then there exist bare-board Ada distributions, i.e. in effect you throw
your kernel away and replace it with another mini OS, e.g. INTEGRITY.

> Are the real-time promises based on the assumption that the
> designers are brilliant enough to disable any OS loadable module not
> required by the embedded application?

That is what a real-time OS is about. You have to be able to plan all OS
activities, in particular by assigning them the right priorities, etc.

> Maybe a stripped OS is not resource-hungry enough to be worried
> about?

I think you could start by giving your Ada program the highest possible
priority.

> Question 3: Does the Real-Time Annex help with that? I mean, with
> trying to do real-time programming on a general-purpose OS.

Yes, if faithfully implemented by the compiler vendor. But it cannot do
things the OS cannot do or forbids.

> Question 4: Now, if the best option is to use a real-time kernel, what
> about the need for a USB camera and a specialized library like OpenCV
> to do some computer vision on a multi-core architecture? Support of
> this kind on real-time kernels looks almost nonexistent, at least in
> open-source options like RTEMS or MarteOS.

Hmm, I doubt USB can be considered real-time. But I see no obvious reason
why a computer vision application could not tolerate some jitter (say
100µs). I guess you just do not need "very hard" real-time.

> Question 5: Would RTLinux be my best bet, or should I try to use a
> GigE camera and port OpenCV myself onto RTEMS? I just want insight
> into the global issues here, with examples of typical decisions made
> in the design of a general robotic embedded system.

I haven't used RTLinux; I used VxWorks. Certainly you can implement the
communication with the USB camera in Ada. Then you will be in full
control of what is going on.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Colin Paul Gloster on
On Mon, 17 May 2010, Dmitry A. Kazakov sent:

> On Sun, 16 May 2010 21:19:00 -0700 (PDT), zeta_no wrote:
>
> [..]
>
>> Question 3: Does the Real-Time Annex help with that? I mean, with
>> trying to do real-time programming on a general-purpose OS.

A general operating system is not real-time, regardless of language.

> [..]
>
>> Question 4: Now, if the best option is to use a real-time kernel, what
>> about the need for a USB camera and a specialized library like OpenCV
>> to do some computer vision on a multi-core architecture? Support of
>> this kind on real-time kernels looks almost nonexistent, at least in
>> open-source options like RTEMS or MarteOS.
>
> Hmm, I doubt USB can be considered real-time. But I see no obvious reason
> why a computer vision application could not tolerate some jitter (say
> 100µs). I guess you just do not need "very hard" real-time.
>
> [..]

USB is not real-time.
From: sjw on
On May 17, 9:30 am, "Dmitry A. Kazakov" <mail...(a)dmitry-kazakov.de>
wrote:

> Hmm, I doubt USB can be considered real-time. But I see no obvious reason
> why a computer vision application could not tolerate some jitter (say
> 100µs). I guess you just do not need "very hard" real-time.

Clearly you wouldn't want to plug just any random USB device into a
system which expected known, bounded response times. Not that anyone
is going to be plugging any random USB device into a drone, especially
in flight ...

But is there anything about USB which makes the latencies etc for a
particular set of devices unbounded? If not, why shouldn't USB be used
in an appropriate application?
From: Dmitry A. Kazakov on
On Mon, 17 May 2010 03:29:02 -0700 (PDT), sjw wrote:

> On May 17, 9:30 am, "Dmitry A. Kazakov" <mail...(a)dmitry-kazakov.de>
> wrote:
>
>> Hmm, I doubt USB can be considered real-time. But I see no obvious reason
>> why a computer vision application could not tolerate some jitter (say
>> 100µs). I guess you just do not need "very hard" real-time.
>
> But is there anything about USB which makes the latencies etc for a
> particular set of devices unbounded? If not, why shouldn't USB be used
> in an appropriate application?

The word "bus", probably. In a comparable case, when industrial Ethernet
is made RT, they usually have some master controlling the frames on the
wire, or else make them time-triggered. One certainly could do something
similar for USB, if anybody cared. But no standard device would work with
that.

On the other hand, I think that the major contributor to the jitter is
not the hardware but the application software. One certainly could have
reasonably short, stable latencies over USB.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de