From: Igor Stoppa on
ext Felipe Contreras wrote:

> I think this information can be obtained dynamically while the
> application is running,

yes, that was the idea

> and perhaps the limits can be stored. It would
> be pretty difficult for the applications to give this kind of
> information because there are so many variables.
>
> For example, a media player can tell you: this clip has 24 fps, but
> if the user is moving the time slider, the fps would increase and drop
> very rapidly, and how much depends at least on the container format
> and type of seek.
>

I doubt that belongs in typical QoS. Maybe the target could be to be
able to decode a sequence of I-frames?

> A game or a telephony app could tell you "I need real-time priority"
> but so much as giving the details of latency and bandwidth? I find
> that very unlikely.
>

From my gaming days, games were still evaluated in fps ... maybe I
made the wrong assumption?

A telephony app should still be able to tell if it's dropping audio frames.

In all cases there should be some device-independent limit, like: what
is the sort of degradation that is considered acceptable by the typical
user?

Tuning might be offered, but at least this should establish a sane set
of defaults.

igor
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Arve Hjønnevåg on
2010/5/30 Rafael J. Wysocki <rjw(a)sisk.pl>:
> On Saturday 29 May 2010, Arve Hjønnevåg wrote:
>> 2010/5/29 Rafael J. Wysocki <rjw(a)sisk.pl>:
>> > On Saturday 29 May 2010, Arve Hjønnevåg wrote:
>> >> 2010/5/28 Rafael J. Wysocki <rjw(a)sisk.pl>:
>> >> > On Friday 28 May 2010, Arve Hjønnevåg wrote:
>> >> >> On Fri, May 28, 2010 at 1:44 AM, Florian Mickler <florian(a)mickler.org> wrote:
>> >> >> > On Thu, 27 May 2010 20:05:39 +0200 (CEST)
>> >> >> > Thomas Gleixner <tglx(a)linutronix.de> wrote:
>> >> > ...
>> >> >> > To integrate this with the current way of doing things, i gathered it
>> >> >> > needs to be implemented as an idle-state that does the suspend()-call?
>> >> >> >
>> >> >>
>> >> >> I think it is better to not confuse this with idle. Since initiating
>> >> >> suspend will cause the system to become not-idle, I don't think it is
>> >> >> beneficial to initiate suspend from idle.
>> >> >
>> >> > It is, if the following two conditions hold simultaneously:
>> >> >
>> >> > (a) Doing full system suspend is ultimately going to bring you more energy
>> >> >     savings than the (presumably lowest) idle state you're currently in.
>> >> >
>> >> > (b) You anticipate that the system will stay idle for a considerably long time
>> >> >     such that it's worth suspending.
>> >> >
>> >>
>> >> I still don't think this matters. If you are waiting for an interrupt
>> >> that cannot wake you up from suspend, then idle is not an indicator
>> >> that it is safe to enter suspend. I also don't think you can avoid any
>> >> user-space suspend blockers by delaying suspend until the system goes
>> >> idle since any page fault could cause it to go idle. Therefore I don't
>> >> see a benefit in delaying suspend until idle when the last suspend
>> >> blocker is released (it would only mask possible race conditions).
>> >
>> > I wasn't referring to suspend blockers, but to the idea of initiating full
>> > system suspend from idle, which I still think makes sense. If you are
>> > waiting for an interrupt that cannot wake you from suspend, then
>> > _obviously_ suspend should not be started. However, if you're not waiting for
>> > such an interrupt and (a) and (b) above hold, it makes sense to start
>> > suspend from idle.
>> >
>>
>> What about timers? When you suspend timers stop (otherwise it is just
>> a deep-idle mode), and this could cause problems. Some drivers rely on
>> timers if the hardware does not have a completion interrupt. It is not
>> uncommon to see "send command x, then wait 200 ms" in some hardware
>> specs.
>
> QoS should be used in such cases.
>

I think it makes more sense to block suspend while wakeup events are
pending than blocking it everywhere timers are used by code that could
be called while handling wakeup events or other critical work. Also,
even if you did block suspend everywhere timers were used, you still
have the race where a wakeup interrupt happens right after you decided
to suspend. In other words, you still need to block suspend in all the
same places as with the current opportunistic suspend code, so what is
the benefit of delaying suspend until idle?

--
Arve Hjønnevåg
From: Arve Hjønnevåg on
2010/5/29 Alan Stern <stern(a)rowland.harvard.edu>:
> On Sat, 29 May 2010, Arve Hjønnevåg wrote:
>
>> > In place of in-kernel suspend blockers, there will be a new type of QoS
>> > constraint -- call it QOS_EVENTUALLY. It's a very weak constraint,
>> > compatible with all cpuidle modes in which runnable threads are allowed
>> > to run (which is all of them), but not compatible with suspend.
>> >
>> This sounds just like another API rename. It will work, but given that
>> suspend blockers was the name least objectionable last time around,
>> I'm not sure what this would solve.
>
> It's not just a rename. By changing this into a QoS constraint, we
> make it more generally useful. Instead of standing on its own, it
> becomes part of the PM-QOS framework.
>

We cannot use the existing pm-qos framework. It is not safe to call
from atomic context. Also, it does not have any state constraints, so
it iterates over every registered constraint each time one of them
changes. Nor does it currently provide any stats for debugging.

The original wakelock patchset supported a wakelock type so it could
be used to block more than suspend, but I had to remove this because
it "overlapped" with pm-qos. So, yes I do consider this just another
rename.

> There is no /sys/power/policy file. In place of opportunistic suspend,
> we have "QoS-based suspend". This is initiated by userspace writing
> "qos" to /sys/power/state, and it is very much like suspend-to-RAM.
>>
>> Why do you want to tie it to a specific state?
>
> I don't. I suggested making it a variant of suspend-to-RAM merely
> because that's what you were using. But Nigel's suggestion of having
> "qos" variants of all the different suspend states makes sense.
>
>> > However a QoS-based suspend fails immediately if there are any active
>>
>> Fail or block? Your next paragraph said that it blocks for
>> QOS_EVENTUALLY, but if normal constraints fail, you are still stuck in
>> a retry loop.
>
> Normal (i.e., non QOS_EVENTUALLY) constraints aren't part of the
> Android use case, so it wasn't clear how they should be treated. On
> further thought, it probably makes more sense to block for them too
> instead of failing immediately.
>
>> > normal QoS constraints incompatible with system suspend, in other
>> > words, any constraints requiring a throughput > 0 or an interrupt
>> > latency shorter than the time required for a suspend-to-RAM/resume
>> > cycle.
>> >
>> > If no such constraints are active, the QoS-based suspend blocks in an
>> > interruptible wait until the number of active QOS_EVENTUALLY
>>
>> How do you implement this?
>
> I'm not sure what you mean. The same way you implement any
> interruptible wait.
>

I mean what should it wait on so that it gets interrupted by a
userspace IPC call. I guess you want to send a signal in addition to
the IPC. I still don't know why you want to do it this way, though. It
seems much simpler to just return immediately and allow the same
thread to cancel the request with another write.

>> >        for (;;) {
>> >                while (any IPC requests remain)
>> >                        handle them;
>> >                if (any processes need to prevent suspend)
>> >                        sleep;
>> >                else
>> >                        write "qos" to /sys/power/state;
>> >        }
>> >
>> > The idea is that receipt of a new IPC request will cause a signal to be
>> > sent, interrupting the sleep or the "qos" write.
>>
>> What happens if the signal arrives right before (or even right after)
>> calling the "qos" write? How does the signal handler stop the write?
>
> You're right, this is a serious problem. The process would have to
> give the kernel a signal mask to be used during the wait, as in ppoll
> or pselect. There ought to be a way to do this or something
> equivalent.
>
> Alan Stern
>
>



--
Arve Hjønnevåg
From: Alan Stern on
On Mon, 31 May 2010, Arve Hjønnevåg wrote:

> >> This sounds just like another API rename. It will work, but given that
> >> suspend blockers was the name least objectionable last time around,
> >> I'm not sure what this would solve.
> >
> > It's not just a rename. By changing this into a QoS constraint, we
> > make it more generally useful. Instead of standing on its own, it
> > becomes part of the PM-QOS framework.
> >
>
> We cannot use the existing pm-qos framework. It is not safe to call
> from atomic context. Also, it does not have any state constraints, so
> it iterates over every registered constraint each time one of them
> changes. Nor does it currently provide any stats for debugging.
>
> The original wakelock patchset supported a wakelock type so it could
> be used to block more than suspend, but I had to remove this because
> it "overlapped" with pm-qos. So, yes I do consider this just another
> rename.

You're missing the point. The fact that wakelocks "overlapped" with
pm-qos is _good_. It means that you can implement what you need within
the pm-qos framework, if you expand the framework's capabilities (add
the ability to do things in atomic context, add the ability to collect
stats for debugging, etc.).

Maybe this would require redesigning a large part of pm-qos. I'm not
very familiar with it, so I don't know what would be involved. Still,
it seems like a reasonable approach, given what you need to accomplish.

> >> > If no such constraints are active, the QoS-based suspend blocks in an
> >> > interruptible wait until the number of active QOS_EVENTUALLY
> >>
> >> How do you implement this?
> >
> > I'm not sure what you mean. The same way you implement any
> > interruptible wait.
> >
>
> I mean what should it wait on so that it gets interrupted by a
> userspace IPC call. I guess you want to send a signal in addition to
> the IPC.

If the IPC is carried out over a Unix socket, you can get SIGIO
signals for free. But yes, if necessary the client could send a signal
along with its request.

> I still don't know why you want to do it this way though. It
> seems much simpler to just return immediately and allow the same
> thread to cancel the request with another write.

I suggested doing it this way because it is as close as possible to the
existing API. A two-step submit/cancel approach would be a larger
change -- but it certainly would work. I have no objection to it.

The main idea behind this part of the proposal was to get rid of the
new userspace-suspend-blocker API (along with /sys/power/policy, which
Pavel objects to). Equivalent functionality can be achieved by making
only small changes to the existing /sys/power/state interface (and
perhaps somewhat larger changes to the userspace daemon); the exact
details of the changes aren't critical.

Alan Stern


From: Peter Zijlstra on
On Sat, 2010-05-29 at 11:10 -0500, James Bottomley wrote:
> > Correct, I strongly oppose using suspend. Not running runnable tasks is
> > not a sane solution.
>
> Look, this is getting into the realms of a pointless semantic quibble.
> The problem is that untrusted tasks need to be forcibly suspended when
> they have no legitimate work to do and the user hasn't authorised them
> to continue even if the scheduler sees them as runnable. Whether that's
> achieved by suspending the entire system or forcibly idling the tasks
> (using blocking states or freezers or something) so the scheduler can
> suspend from idle is something to be discussed,

So what happens if your task is CPU-bound and gets suspended while
holding a resource (a lock, whatever) that is required by someone else
who didn't get suspended?

That's the classic inversion problem, and is caused by not running
runnable tasks.

> but the net result is
> that we have to stop a certain set of tasks in such a way that they can
> still receive certain external events ... semantically, this is
> equivalent to not running runnable tasks in my book.

Why would we care about external events? Clearly these apps are ill
behaved, otherwise they would have listened to the environment telling
them to idle.

Why would you try to let buggy apps work as intended instead of breaking
them as hard as possible? Such a policy promotes crappy code, since people
get away with it.

> (Perhaps this whole
> thing is because the word runnable means different things ... I'm
> thinking a task that would consume power ... are you thinking in the
> scheduler R state?)

Clearly I mean TASK_RUNNABLE; if it's not that, the scheduler doesn't care.

> Realistically, the main thing we need to do is stop timers posted
> against the task (which is likely polling in a main loop, that being the
> usual form of easy to write but power crazy app behaviour) from waking
> the task and bringing the system out of suspend (whether from idle or
> forced).

Sure, that same main loop will probably receive a message along the
lines of, 'hey, the screen is off, we ought to go to sleep'. If after
that it doesn't listen, and more serious messages go unanswered, simply
kill the thing.

Again, there is no reason whatsoever to tolerate broken apps; it only
promotes crappy apps.


