From: Dmitry Torokhov on
On Mon, Dec 07, 2009 at 01:34:10PM -0200, Mauro Carvalho Chehab wrote:
>
> > Scancodes in the input system have never been real scancodes. Even if
> > you look at atkbd, it uses synthetic data composed out of the real
> > scancodes sent to the keyboard, and no one cares. If you are unsatisfied
> > with the mapping, you fire up evtest, press the key, take whatever the
> > driver [mis]represents as a scancode and use it to load the new
> > definition. And you don't care at all whether the thing the driver calls
> > a scancode makes any sense to the hardware device.
>
> We used a mis-represented scancode, but this proved to be a broken design
> over time.
>
> For users, whatever the scancode "cookie" means, the same IR device should
> provide the same "cookie" no matter what IR receiver is used, since the same
> IR may be found on different devices, or the user can simply buy a new card
> and opt to use their old IR (there are very good reasons for that, since
> several new devices now come with small IRs that have half the keys
> of the ones shipped with older models).

OK, this is a fair point. We need to keep the "scancodes" stable across
receivers.

However I am not sure the "index" approach is the best - it will not
work well if a driver decides to implement the keymap using a data
structure other than an array, say a linked list or a hash table. Lists
by their nature do not have a stable index, and even if we were to
generate one "on the fly" we could not rely on it for subsequent
EVIOCSKEYCODE calls: some other program may cause insertion or deletion
of an element, making the artificial index refer to another entry in the map.

While extending the scancode size is pretty straightforward (well, almost
;) ) I am not sure what is the best way to enumerate the keymap for a
given device.

--
Dmitry
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Dmitry Torokhov on
On Sun, Dec 06, 2009 at 09:34:26PM +0100, Krzysztof Halasa wrote:
> Jon Smirl <jonsmirl@gmail.com> writes:
>
> >> Once again: how about agreement about the LIRC interface
> >> (kernel-userspace) and merging the actual LIRC code first? In-kernel
> >> decoding can wait a bit, it doesn't change any kernel-user interface.
> >
> > I'd like to see a semi-complete design for an in-kernel IR system
> > before anything is merged from any source.
>
> This is a way to nowhere, there is no logical dependency between LIRC
> and input layer IR.
>
> There is only one thing which needs attention before/when merging LIRC:
> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> actually, making a correct IR core design without the LIRC merged can be
> only harder.

This sounds like "merge first, think later"...

The question is why we need to merge the lirc interface right now,
before we have agreed on the subsystem architecture. No one _in kernel_
uses lirc-dev yet and, looking at the way things are shaping up, no
drivers will be using it _directly_ after it is complete. So, even if
we merge it right away, the code will have to be restructured and
reworked. Unfortunately, just merging what Jarod posted will introduce
a sysfs hierarchy, which is a userspace interface as well (although we
are not as good at maintaining it at times), and will add more
constraints on us.

That is why I think we should go the other way around - introduce the
core that receivers can plug into, together with a decoder framework,
and once that is ready register lirc-dev as one of the available
decoders.

--
Dmitry
From: Dmitry Torokhov on
On Mon, Dec 07, 2009 at 09:08:57PM +0100, Krzysztof Halasa wrote:
> Dmitry Torokhov <dmitry.torokhov@gmail.com> writes:
>
> >> There is only one thing which needs attention before/when merging LIRC:
> >> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> >> actually, making a correct IR core design without the LIRC merged can be
> >> only harder.
> >
> > This sounds like "merge first, think later"...
>
> I'd say "merge the sane agreed and stable things first and think later
> about improvements".
>
> > The question is why we need to merge the lirc interface right now,
> > before we have agreed on the subsystem architecture?
>
> Because we need the features and we can't improve something which is
> outside the kernel. What "subsystem architecture" do you want to
> discuss? Unrelated (input layer) interface?
>

No, the IR core responsible for registering receivers and decoders.

> Those are simple things. The only part which needs to be stable is the
> (in this case LIRC) kernel-user interface.

For which some questions are still open. I believe Jon just outlined
some of them.

>
> > No one _in kernel_ uses lirc-dev
> > yet and, looking at the way things are shaping up, no drivers will be
> > using it _directly_ after it is complete. So, even if we merge it right
> > away, the code will have to be restructured and reworked.
>
> Sure. We do this constantly to every part of the kernel.

No, we do not. We do not merge something that we expect to rework almost
completely (no, not the lirc-style device userspace interface, although
even that is not completely finalized, I believe, but the rest of the
subsystem).

>
> > Unfortunately,
> > just merging what Jarod posted will introduce a sysfs hierarchy, which
> > is a userspace interface as well (although we are not as good at
> > maintaining it at times), and will add more constraints on us.
>
> Then perhaps it should be skipped, leaving only the things udev needs to
> create /dev/ entries. They don't have to be particularly stable.
> Perhaps it should go into staging first. We can't work with code
> outside the kernel; staging has no such limitation.

OK, say we add this to staging as is. What is next? Who will be using
this code that is now in staging? Do we encourage driver writers to
hook into it (given that we intend to redo it soon)? Do we do something
else?

>
> > That is why I think we should go the other way around - introduce the
> > core which receivers could plug into and decoder framework and once it
> > is ready register lirc-dev as one of the available decoders.
>
> That means all the work has to be kept and then merged "atomically";
> it seems there is a lack of manpower for this.

No, not at all. You merge the core subsystem code, then start adding
decoders... In the meantime driver writers could start preparing their
drivers to plug into it.

Meanwhile, the out-of-tree LIRC can be used by consumers undisturbed.

--
Dmitry
From: Dmitry Torokhov on
On Mon, Dec 07, 2009 at 09:44:14PM -0200, Mauro Carvalho Chehab wrote:
> Let me add my view for those questions.
>
> Jon Smirl wrote:
> > On Sun, Dec 6, 2009 at 3:34 PM, Krzysztof Halasa <khc@pm.waw.pl> wrote:
> >> Jon Smirl <jonsmirl@gmail.com> writes:
> >>
> >>>> Once again: how about agreement about the LIRC interface
> >>>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
> >>>> decoding can wait a bit, it doesn't change any kernel-user interface.
> >>> I'd like to see a semi-complete design for an in-kernel IR system
> >>> before anything is merged from any source.
> >> This is a way to nowhere, there is no logical dependency between LIRC
> >> and input layer IR.
> >>
> >> There is only one thing which needs attention before/when merging LIRC:
> >> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> >> actually, making a correct IR core design without the LIRC merged can be
> >> only harder.
> >
> > Here's a few design review questions on the LIRC drivers that were posted....
> >
> > How is the pulse data going to be communicated to user space?
>
> lirc_dev will implement a revised version of the lirc API. I'm assuming that
> Jarod and Christoph will do this review, in order to be sure that it is stable
> enough for kernel inclusion (as proposed by Gerd).
>
> > Can the pulse data be reported via an existing interface without
> > creating a new one?
>
> Raw pulse data should be reported only via lirc_dev, but it can be converted
> into a keycode and reported via evdev as well, via an existing interface.
>
> > Where is the documentation for the protocol?
>
> I'm not sure what you mean here. I've started a doc about IR in the media
> docbook. This is currently inside the kernel's Documentation/DocBook. If you
> want to browse, it is also available at:
>
> http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
>
> For sure we need to better document the IRs and explain the APIs there.
>
> > Is it a device interface or something else?
>
> lirc_dev should create a device interface.
>
> > What about capabilities of the receiver, what frequencies?
> > If a receiver has multiple frequencies, how do you report what
> > frequency the data came in on?
>
> IMO, via sysfs.

We probably need to think about what exactly we report through sysfs
since it is an ABI of sorts.

>
> > What about multiple apps simultaneously using the pulse data?
>
> IMO, the better is to limit the raw interface to just one open.
>

Why would we want to do this? Quite often there is a need for an
"observer" that maybe does not act on the data but allows capturing it.
Single-user interfaces are a PITA.

> > How big is the receive queue?
>
> It should be big enough to receive at least one keycode event. Considering
> that the driver will use a kfifo (IMO, a good strategy, especially since you
> won't need any locking if just one open is allowed), it will require a
> power-of-two size.
>

Wouldn't it be either driver- or protocol-specific?

> > How does access work, root only or any user?
>
> IMO, it should be the same requirement as used by an input interface.
>
> > How are capabilities exposed, sysfs, etc?
>
> IMO, sysfs.
>
> > What is the interface for attaching an in-kernel decoder?
>
> IMO, it should use the kfifo for that. However, if we allow both raw-data
> readers and in-kernel decoders to read data there, we'll need a spinlock
> to protect the kfifo.
>

I think Jon meant the userspace interface for attaching a particular decoder.

> > If there is an in-kernel decoder should the pulse data stop being
> > reported, partially stopped, something else?
>
> I don't have a strong opinion here, but, from the previous discussions, it
> seems that people want it to be double-reported by default. If so, I think
> we need to implement a command in the raw interface to allow disabling the
> in-kernel decoder while the raw interface is kept open.

Why don't you simply let consumers decide where they will get their data?

>
> > What is the mechanism to make sure both system don't process the same pulses?
>
> I don't see a good way to avoid it.
>
> > Does it work with poll, epoll, etc?
> > What is the time standard for the data, where does it come from?
> > How do you define the start and stop of sequences?
> > Is receiving synchronous or queued?
> > What about transmit, how do you get pulse data into the device?
> > Transmitter frequencies?
> > Multiple transmitters?
> > Is transmitting synchronous or queued?
> > How big is the transmit queue?
>
> I don't have a clear answer for those. I'll leave those for the LIRC developers to answer.
>

--
Dmitry
From: Dmitry Torokhov on
On Tue, Dec 08, 2009 at 09:17:42AM -0200, Mauro Carvalho Chehab wrote:
> Jon Smirl wrote:
> > On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
> > <mchehab@redhat.com> wrote:
>
> >>> Where is the documentation for the protocol?
> >> I'm not sure what you're meaning here. I've started a doc about IR at the media
> >
> > What is the format of the pulse stream data coming out of the lirc device?
>
> AFAIK, it is at:
> http://www.lirc.org/html/index.html
>
> It would be nice to add it to the DocBook after integrating the API into the kernel.
>
> >> docbook. This is currently inside the kernel Documents/DocBook. If you want
> >> to browse, it is also available as:
> >>
> >> http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
> >>
> >> For sure we need to better document the IR's, and explain the API's there.
> >>
> >>> Is it a device interface or something else?
> >> lirc_dev should create a device interface.
> >>
> >>> What about capabilities of the receiver, what frequencies?
> >>> If a receiver has multiple frequencies, how do you report what
> >>> frequency the data came in on?
> >> IMO, via sysfs.
> >
> > Say you have a hardware device with two IR diodes, one at 38 kHz and one
> > at 56 kHz. Both of these receivers can get pulses. How do we tell the
> > user space app which frequency the pulses were received on? It seems to
> > me there has to be a header on the pulse data indicating the received
> > carrier frequency. There is also baseband signaling. sysfs won't work
> > for this because of the queuing latency.
>
> Simply create two interfaces, one for each IR receiver. In sysfs, you'll
> have /sys/class/irrcv/irrcv0 for the first one and /sys/class/irrcv/irrcv1
> for the second.

Yes, please. Distinct hardware - distinct representation in the kernel.
This is the most sane way.

....
> >>
> >>> What is the interface for attaching an in-kernel decoder?
> >> IMO, it should use the kfifo for it. However, if we allow both raw data and
> >> in-kernel decoders to read data there, we'll need a spinlock to protect the
> >> kfifo.

Probably we should do what the input layer does - the data is pushed to
all handlers that have signed up for it, and they can deal with it at
their leisure.

> >>
> >>> If there is an in-kernel decoder should the pulse data stop being
> >>> reported, partially stopped, something else?
> >> I don't have a strong opinion here, but, from the previous discussions, it
> >> seems that people want it to be double-reported by default. If so, I think
> >> we need to implement a command at the raw interface to allow disabling the
> >> in-kernel decoder, while the raw interface is kept open.
> >
> > Data could be sent to the in-kernel decoders first and then if they
> > don't handle it, send it to user space.

You do not know what userspace wants to do with the data. It may want
to simply observe it, store it, or do something else. Since we do
provide an interface for such raw[ish] data, we just need to transmit
it to userspace as long as there are users (i.e. the interface is open).

>
> Hmm... like adding a delay if the raw userspace interface is open and,
> if the raw userspace doesn't read all the pulse data, sending it via the
> in-kernel decoder instead? This can work, but I'm not sure it is the
> better way, and it will require some logic to synchronize the lirc_dev
> and IR core modules. Also, doing it key by key will introduce some delay.
>
> If you're afraid of the userspace app hanging and leaving no IR output,
> it would be simpler to just close the raw interface if available data is
> not read within a longer timeout (3 seconds? 5 seconds?).

We cannot foresee all use cases. Just let all parties signed up for the
data get it and process it; do not burden the core with heuristics.

--
Dmitry