From: Florian Mickler on
On Thu, 03 Jun 2010 10:29:52 -0500
James Bottomley <James.Bottomley(a)suse.de> wrote:


> > So no reinvention. Just using a common scheme.
>
> By reinvention I meant open coding a common pattern for which the kernel
> already has an API. (Whether we go with hash buckets or plists).
>
> James
>

Ah, plists.h! Thanks for the pointer.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Kevin Hilman on
"Gross, Mark" <mark.gross(a)intel.com> writes:

>>-----Original Message-----
>>From: Kevin Hilman [mailto:khilman(a)deeprootsystems.com]
>>Sent: Thursday, June 03, 2010 7:43 AM
>>To: Peter Zijlstra
>>Cc: Alan Cox; Gross, Mark; Florian Mickler; James Bottomley; Arve
>>Hjønnevåg; Neil Brown; tytso(a)mit.edu; LKML; Thomas Gleixner; Linux OMAP
>>Mailing List; Linux PM; felipe.balbi(a)nokia.com
>>Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)
>>
>>Peter Zijlstra <peterz(a)infradead.org> writes:
>>
>>> On Thu, 2010-06-03 at 11:03 +0100, Alan Cox wrote:
>>>> > [mtg: ] This has been a pain point for the PM_QOS implementation.
>>>> They change the constraint back and forth at the transaction level of
>>>> the i2c driver. The pm_qos code really wasn't made to deal with such
>>>> hot path use, as each such change triggers a re-computation of what
>>>> the aggregate qos request is.
>>>>
>>>> That should be trivial in the usual case because 99% of the time you
>>>> can take a hot path:
>>>>
>>>> - the QoS entry changing is the latest one
>>>> - there have been no other changes
>>>> - if it is valid I can use the cached previous aggregate I cunningly
>>>>   saved in the top QoS entry when I computed the new one
>>>>
>>>> (ie most of the time from the kernel side you have a QoS stack)
>>>
>>> Why would the kernel change the QoS state of a task? Why not have two
>>> interacting QoS variables, one for the task, one for the subsystem in
>>> question, and the action depends on their relative value?
>>
>>Yes, having a QoS parameter per-subsystem (or even per-device) is very
>>important for SoCs that have independently controlled powerdomains.
>>If all devices/subsystems in a particular powerdomain have QoS
>>parameters that permit, the power state of that powerdomain can be
>>lowered independently from system-wide power state and power states of
>>other power domains.
>>
> This seems similar to the pm_qos generalization into bus drivers we were
> waving our hands at during the collab summit in April? We never did get
> into meaningful detail at that time.

The hand-waving was around how to generalize it into the driver model
or PM QoS.  We're already doing this for OMAP, but in an OMAP-specific
way; it's become clear that this is something worth generalizing.

Kevin
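Alan's hot-path idea quoted above — cache the aggregate of everything below the newest QoS entry, so that changing that entry needs no list walk — might look roughly like this. This is an illustrative sketch only; none of these names come from the actual pm_qos code, and the constraint semantics (smallest value wins, as for a latency bound) are assumed for the example:

```python
# Toy model of a "QoS stack" with a cached aggregate (hypothetical names).
# Each stack entry carries [own value, aggregate of all entries below it],
# so updating the top entry is O(1); updating a deeper entry falls back to
# recomputing the cached aggregates above it.

class QosStack:
    def __init__(self, no_constraint=float("inf")):
        self.no_constraint = no_constraint
        self.stack = []  # each entry: [value, aggregate_below]

    def add_request(self, value):
        below = self.stack[-1] if self.stack else None
        agg_below = min(below[0], below[1]) if below else self.no_constraint
        self.stack.append([value, agg_below])
        return len(self.stack) - 1  # handle into the stack

    def update_request(self, handle, value):
        self.stack[handle][0] = value
        if handle == len(self.stack) - 1:
            return  # hot path: the cached aggregate below is still valid
        # slow path: refresh the cached aggregates above the changed entry
        for i in range(handle + 1, len(self.stack)):
            prev = self.stack[i - 1]
            self.stack[i][1] = min(prev[0], prev[1])

    def aggregate(self):
        if not self.stack:
            return self.no_constraint
        top = self.stack[-1]
        return min(top[0], top[1])
```

Only an update to the most recent request takes the fast path; that matches Alan's "most of the time you have a QoS stack" observation.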
From: James Bottomley on
On Thu, 2010-06-03 at 09:58 -0700, Kevin Hilman wrote:
> "Gross, Mark" <mark.gross(a)intel.com> writes:
>
> >>-----Original Message-----
> >>From: Kevin Hilman [mailto:khilman(a)deeprootsystems.com]
> >>Sent: Thursday, June 03, 2010 7:43 AM
> >>To: Peter Zijlstra
> >>Cc: Alan Cox; Gross, Mark; Florian Mickler; James Bottomley; Arve
> >>Hjønnevåg; Neil Brown; tytso(a)mit.edu; LKML; Thomas Gleixner; Linux OMAP
> >>Mailing List; Linux PM; felipe.balbi(a)nokia.com
> >>Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)
> >>
> >>Peter Zijlstra <peterz(a)infradead.org> writes:
> >>
> >>> On Thu, 2010-06-03 at 11:03 +0100, Alan Cox wrote:
> >>>> > [mtg: ] This has been a pain point for the PM_QOS implementation.
> >>>> They change the constraint back and forth at the transaction level of
> >>>> the i2c driver. The pm_qos code really wasn't made to deal with such
> >>>> hot path use, as each such change triggers a re-computation of what
> >>>> the aggregate qos request is.
> >>>>
> >>>> That should be trivial in the usual case because 99% of the time you
> >>>> can take a hot path:
> >>>>
> >>>> - the QoS entry changing is the latest one
> >>>> - there have been no other changes
> >>>> - if it is valid I can use the cached previous aggregate I cunningly
> >>>>   saved in the top QoS entry when I computed the new one
> >>>>
> >>>> (ie most of the time from the kernel side you have a QoS stack)
> >>>
> >>> Why would the kernel change the QoS state of a task? Why not have two
> >>> interacting QoS variables, one for the task, one for the subsystem in
> >>> question, and the action depends on their relative value?
> >>
> >>Yes, having a QoS parameter per-subsystem (or even per-device) is very
> >>important for SoCs that have independently controlled powerdomains.
> >>If all devices/subsystems in a particular powerdomain have QoS
> >>parameters that permit, the power state of that powerdomain can be
> >>lowered independently from system-wide power state and power states of
> >>other power domains.
> >>
> > This seems similar to the pm_qos generalization into bus drivers we were
> > waving our hands at during the collab summit in April? We never did get
> > into meaningful detail at that time.
>
> The hand-waving was around how to generalize it into the driver model
> or PM QoS.  We're already doing this for OMAP, but in an OMAP-specific
> way; it's become clear that this is something worth generalizing.

Do you have a pointer to the source and description? It might be useful
to look at to do a reality check on what we're talking about.

James


From: Muralidhar, Rajeev D on
Hi Kevin, Mark, all,

Yes, from our brief discussions at ELC, and all the discussion that has followed in the last few weeks, it certainly seems like a good time to think about:

- What is a good model for tying device idleness, latencies and constraints to the cpuidle infrastructure? Extensions to PM_QOS are part of what is being discussed, especially Kevin's earlier mail about a QoS parameter per subsystem/device that may have independent clock/power-domain control.

- What is a good infrastructure for subsequently allowing a platform-specific low-power state? Extensions to cpuidle could allow a platform-wide low-power state, where the exact conditions for entry/exit (latency, wakeup, etc.) are platform specific.

Would it make sense to discuss a model that could apply to other SoCs/platforms as well?

Thanks
Rajeev
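The per-subsystem/per-device QoS parameter Kevin describes (and Rajeev picks up above) can be modeled in a few lines: each device in a powerdomain holds its own wakeup-latency constraint, and the domain may only enter states whose wakeup latency fits the tightest constraint. This is a toy sketch; the device names, domain states and latency numbers are invented for illustration and are not OMAP or PM_QOS code:

```python
# Hypothetical powerdomain states, ordered shallow -> deep:
# (name, wakeup latency in microseconds)
DOMAIN_STATES = [("on", 0), ("retention", 100), ("off", 1000)]

class PowerDomain:
    def __init__(self):
        # device name -> maximum tolerated wakeup latency (us)
        self.constraints = {}

    def set_constraint(self, dev, max_latency_us):
        self.constraints[dev] = max_latency_us

    def clear_constraint(self, dev):
        self.constraints.pop(dev, None)

    def deepest_allowed_state(self):
        # the domain must wake fast enough for its most demanding device
        budget = min(self.constraints.values(), default=float("inf"))
        allowed = [name for name, lat in DOMAIN_STATES if lat <= budget]
        return allowed[-1]  # deepest state whose latency fits the budget
```

The point of the per-domain aggregation is exactly what Kevin notes: a domain whose devices all permit it can be lowered independently of the system-wide power state.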


-----Original Message-----
From: linux-pm-bounces(a)lists.linux-foundation.org [mailto:linux-pm-bounces(a)lists.linux-foundation.org] On Behalf Of Kevin Hilman
Sent: Thursday, June 03, 2010 10:28 PM
To: Gross, Mark
Cc: Neil Brown; tytso(a)mit.edu; Peter Zijlstra; felipe.balbi(a)nokia.com; LKML; Florian Mickler; James Bottomley; Thomas Gleixner; Linux OMAP Mailing List; Linux PM; Alan Cox
Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)

"Gross, Mark" <mark.gross(a)intel.com> writes:

>>-----Original Message-----
>>From: Kevin Hilman [mailto:khilman(a)deeprootsystems.com]
>>Sent: Thursday, June 03, 2010 7:43 AM
>>To: Peter Zijlstra
>>Cc: Alan Cox; Gross, Mark; Florian Mickler; James Bottomley; Arve
>>Hjønnevåg; Neil Brown; tytso(a)mit.edu; LKML; Thomas Gleixner; Linux OMAP
>>Mailing List; Linux PM; felipe.balbi(a)nokia.com
>>Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)
>>
>>Peter Zijlstra <peterz(a)infradead.org> writes:
>>
>>> On Thu, 2010-06-03 at 11:03 +0100, Alan Cox wrote:
>>>> > [mtg: ] This has been a pain point for the PM_QOS implementation.
>>>> They change the constraint back and forth at the transaction level of
>>>> the i2c driver. The pm_qos code really wasn't made to deal with such
>>>> hot path use, as each such change triggers a re-computation of what
>>>> the aggregate qos request is.
>>>>
>>>> That should be trivial in the usual case because 99% of the time you
>>>> can take a hot path:
>>>>
>>>> - the QoS entry changing is the latest one
>>>> - there have been no other changes
>>>> - if it is valid I can use the cached previous aggregate I cunningly
>>>>   saved in the top QoS entry when I computed the new one
>>>>
>>>> (ie most of the time from the kernel side you have a QoS stack)
>>>
>>> Why would the kernel change the QoS state of a task? Why not have two
>>> interacting QoS variables, one for the task, one for the subsystem in
>>> question, and the action depends on their relative value?
>>
>>Yes, having a QoS parameter per-subsystem (or even per-device) is very
>>important for SoCs that have independently controlled powerdomains.
>>If all devices/subsystems in a particular powerdomain have QoS
>>parameters that permit, the power state of that powerdomain can be
>>lowered independently from system-wide power state and power states of
>>other power domains.
>>
> This seems similar to the pm_qos generalization into bus drivers we were
> waving our hands at during the collab summit in April? We never did get
> into meaningful detail at that time.

The hand-waving was around how to generalize it into the driver model
or PM QoS.  We're already doing this for OMAP, but in an OMAP-specific
way; it's become clear that this is something worth generalizing.

Kevin
_______________________________________________
linux-pm mailing list
linux-pm(a)lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/linux-pm
From: Rafael J. Wysocki on
On Thursday 03 June 2010, James Bottomley wrote:
> On Thu, 2010-06-03 at 00:10 -0700, Arve Hjønnevåg wrote:
> > On Wed, Jun 2, 2010 at 10:40 PM, mark gross <640e9920(a)gmail.com> wrote:
> > > On Wed, Jun 02, 2010 at 09:54:15PM -0700, Brian Swetland wrote:
> > >> On Wed, Jun 2, 2010 at 8:18 PM, mark gross <640e9920(a)gmail.com> wrote:
> > >> > On Wed, Jun 02, 2010 at 02:58:30PM -0700, Arve Hjønnevåg wrote:
> > >> >>
> > >> >> The list is not short. You have all the inactive and active
> > >> >> constraints on the same list. If you change it to a two-level list
> > >> >> though, the list of unique values (which is the list you have to walk)
> > >> >> may be short enough for a tree to be overkill.
> > >> >
> > >> > what have you seen in practice from the wake-lock stats?
> > >> >
> > >> > I'm having a hard time seeing where you could get more than just a
> > >> > handful.  However, one could go to a dual list (like the scheduler) and
> > >> > move inactive nodes from an active to an inactive list, or we could simply
> > >> > remove them from the list upon inactivity, which would work well
> > >> > after I change the API to have the client allocate the memory for the
> > >> > nodes... BUT, if you're moving things in and out of a list a lot, I'm not
> > >> > sure where the break-even point is at which changing the structure helps.
> > >> >
> > >> > We'll need to try it.
> > >> >
> > >> > I think we will almost never see more than 10 list elements.
> > >> >
> > >> > --mgross
> > >> >
> > >> >
> > >>
> > >> I see about 80 (based on the batteryinfo dump) on my Nexus One
> > >> (QSD8250, Android Froyo):
> > >
> > > shucks.
> > >
> > > well, I think for a pm_qos class that has a boolean dynamic range we can
> > > get away with not walking the list on every request update: we can use
> > > a counter, and the list will be mostly for stats.
> > >
> >
> > Did you give any thought to my suggestion to use only one entry per
> > unique value on the first-level list and then use secondary lists of
> > identical values?  That way, if you only have two constraint values, the
> > list you have to walk when updating a request will never have more
> > than two entries regardless of how many total requests you have.
> >
> > A request update then becomes something like this:
> > if on primary list {
> > unlink from primary list
> > if secondary list is not empty
> > get next secondary entry and add in same spot on primary list
> > }
> > unlink from secondary list
> > find new spot on primary list
> > if already there
> > add to secondary list
> > else
> > add to primary list
>
> This is just reinventing hash-bucketed lists.  To get the benefits, all
> we do is implement an N-state constraint as backed by an N-bucket hash
> list, for which the kernel already has all the internal mechanics.

Agreed.

Rafael
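For reference, the two-level scheme Arve's pseudocode describes — a primary list with one node per unique constraint value, each carrying a secondary list of the requests sharing that value — can be modeled compactly. A rough Python illustration, not kernel code (the kernel would use its own bucketed/plist machinery, as James points out):

```python
import bisect

class TwoLevelConstraints:
    """Primary list of sorted unique values; secondary lists of duplicates."""

    def __init__(self):
        self.values = []   # sorted unique constraint values (primary list)
        self.buckets = {}  # value -> list of request ids (secondary lists)

    def add(self, req, value):
        if value not in self.buckets:
            # first request with this value: create a primary node
            bisect.insort(self.values, value)
            self.buckets[value] = []
        self.buckets[value].append(req)

    def remove(self, req, value):
        bucket = self.buckets[value]
        bucket.remove(req)
        if not bucket:  # last holder of this value: drop the primary node
            del self.buckets[value]
            self.values.remove(value)

    def update(self, req, old, new):
        # a request update only ever touches the (short) primary list
        self.remove(req, old)
        self.add(req, new)

    def aggregate(self):
        # for a latency-style class the strictest (smallest) value wins
        return self.values[0] if self.values else None
```

With only two distinct constraint values in play, the primary list never exceeds two entries no matter how many requests exist, which is exactly the property Arve was after.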