From: David Marcus on
Tony Orlow wrote:
> David Marcus wrote:
> > Tony Orlow wrote:
> >> David Marcus wrote:
> >>> Tony Orlow wrote:
> >>>> David Marcus wrote:
> >>>>> Tony Orlow wrote:
> >>>>>> David Marcus wrote:
> >>>>>>> Tony Orlow wrote:
> >>>>>>>> I am beginning to realize just how much trouble the axiom of
> >>>>>>>> extensionality is causing here. That is what you're using, here, no? The
> >>>>>>>> sets are "equal" because they contain the same elements. That gives no
> >>>>>>>> measure of how the sets compare at any given point in their production.
> >>>>>>>> Sets as sets are considered static and complete. However, when talking
> >>>>>>>> about processes of adding and removing elements, the sets are not
> >>>>>>>> static, but changing with each event. When speaking about what is in the
> >>>>>>>> set at time t, use a function for that sum on t, assume t is continuous,
> >>>>>>>> and check the limit as t->0. Then you won't run into silly paradoxes and
> >>>>>>>> unicorns.
> >>>>>>> There is a lot of stuff in there. Let's go one step at a time. I believe
> >>>>>>> that one thing you are saying is this:
> >>>>>>>
> >>>>>>> |IN\OUT| = 0, but defining IN and OUT and looking at |IN\OUT| is not the
> >>>>>>> correct translation of the balls and vase problem into Mathematics.
> >>>>>>>
> >>>>>>> Do you agree with this statement?
> >>>>>> Yes.
> >>>>> OK. Since you don't like the |IN\OUT| translation, let's see if we can
> >>>>> take what you wrote, translate it into Mathematics, and get a
> >>>>> translation that you like.
> >>>>>
> >>>>> You say, "When speaking about what is in the set at time t, use a
> >>>>> function for that sum on t, assume t is continuous, and check the limit
> >>>>> as t->0."
> >>>>>
> >>>>> Taking this one step at a time, first we have "use a function for that
> >>>>> sum on t". How about we use the function V defined as follows?
> >>>>>
> >>>>> For n = 1,2,..., let
> >>>>>
> >>>>> A_n = -1/floor((n+9)/10),
> >>>>> R_n = -1/n.
> >>>>>
> >>>>> For n = 1,2,..., define a function B_n by
> >>>>>
> >>>>> B_n(t) = 1 if A_n <= t < R_n,
> >>>>> 0 if t < A_n or t >= R_n.
> >>>>>
> >>>>> Let V(t) = sum_n B_n(t).
> >>>>>
> >>>>> Next you say, "assume t is continuous". Not sure what you mean. Maybe
> >>>>> you mean assume the function is continuous? However, it seems that
> >>>>> either the function we defined (e.g., V) is continuous or it isn't,
> >>>>> i.e., it should be something we deduce, not assume. Let's skip this for
> >>>>> now. I don't think we actually need it.
> >>>>>
> >>>>> Finally, you write, "check the limit as t->0". I would interpret this as
> >>>>> saying that we should evaluate the limit of V(t) as t approaches zero
> >>>>> from the left, i.e.,
> >>>>>
> >>>>> lim_{t -> 0-} V(t).
> >>>>>
> >>>>> Do you agree that you are saying that the number of balls in the vase at
> >>>>> noon is lim_{t -> 0-} V(t)?
> >>>>>
> >>>> Find limits of formulas on numbers, not limits of sets.
> >>> I have no clue what you mean. There are no "limits of sets" in what I
> >>> wrote.
> >>>
> >>>> Here's what I said to Stephen:
> >>>>
> >>>> out(n) is the number of balls removed upon completion of iteration n,
> >>>> and is equal to n.
> >>>>
> >>>> in(n) is the number of balls inserted upon completion of iteration n,
> >>>> and is equal to 10n.
> >>>>
> >>>> contains(n) is the number of balls in the vase upon completion of
> >>>> iteration n, and is equal to in(n)-out(n)=9n.
> >>>>
> >>>> n(t) is the number of iterations completed at time t, equal to floor(-1/t).
> >>>>
> >>>> contains(t) is the number of balls in the vase at time t, and is equal
> >>>> to contains(n(t))=contains(floor(-1/t))=9*floor(-1/t).
> >>>>
> >>>> Lim(t->0-: 9*floor(-1/t)) = oo. The sum diverges in the limit.
> >>> You seem to be agreeing with what I wrote, i.e., that you say that the
> >>> number of balls in the vase at noon is lim_{t -> 0-} V(t). Care to
> >>> confirm this?
> >> No that's a bad formulation. I gave you the correct formulation, which
> >> states the number of balls in the vase as a function of t.
> >
> > Let's try some numbers.
> >
> > t = -1, 9*floor(-1/t) = 9, V(t) = 9.
> > t = -1/2, 9*floor(-1/t) = 18, V(t) = 18.
> >
> > Looks to me like V(t) = 9*floor(-1/t) for t < 0. So,
> >
> > lim_{t->0-} 9*floor(-1/t) = lim_{t->0-} V(t).
> >
> > So, it does seem that what I said you are saying is what you are saying.
>
> Oh. If you express V(t) that way, that looks correct. I thought it was
> different before.

You can see the definition of V above:

V(t) := sum_n B_n(t).

The fact that V(t) = 9*floor(-1/t) for t < 0 is a theorem.
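If you want to check the theorem numerically, the definitions above transcribe directly into Python. This is just a sanity check, not a proof: exact rationals avoid floating-point trouble near zero, and the truncation bound n_max (my choice) works because for fixed t < 0 only finitely many B_n(t) are nonzero.

```python
from fractions import Fraction as F
import math

def A(n):
    # A_n = -1/floor((n+9)/10): time at which ball n is inserted
    return F(-1, (n + 9) // 10)

def R(n):
    # R_n = -1/n: time at which ball n is removed
    return F(-1, n)

def V(t, n_max=10**4):
    # V(t) = sum_n B_n(t); for fixed t < 0 only finitely many
    # B_n(t) equal 1, so a truncated sum suffices here.
    return sum(1 for n in range(1, n_max + 1) if A(n) <= t < R(n))

# The theorem: V(t) = 9*floor(-1/t) for t < 0.
for t in (F(-1), F(-1, 2), F(-1, 3), F(-1, 50)):
    assert V(t) == 9 * math.floor(-1 / t)
```

At t = -1 this gives 9, at t = -1/2 it gives 18, matching the values computed earlier in the thread.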

Do you now agree that you are saying that the number of balls in the
vase at noon is lim_{t -> 0-} V(t)?

If yes, my next question is which of the following are you saying?

1. Definition. "Number of balls in the vase at noon" means
lim_{t -> 0-} V(t).

2. Theorem. Number of balls in the vase at noon equals
lim_{t -> 0-} V(t).

--
David Marcus
From: Lester Zick on
On Sat, 4 Nov 2006 12:35:26 -0500, David Marcus
<DavidMarcus(a)alumdotmit.edu> wrote:

>imaginatorium(a)despammed.com wrote:
>> Tony Orlow wrote:
>> > imaginatorium(a)despammed.com wrote:
>
>> > > "Bigulosity" has never been sufficiently clearly defined to tell, but
>> > > since you get very steamed up about subsets, and since the only known
>> > > coherent claim is that A proper subset of B -> b(A) < b(B), it's
>> > > extremely unlikely Bigulosity could be extended to become a total
>> > > ordering.
>> >
>> > Bigulosity is not based on the subset relation, but on formulaic
>> > mappings and infinite-case induction.
>>
>> Let me just point you to one of your problems here: do you understand
>> the normal set-theoretic definition of a "function" (or mapping)? (Go
>> look it up; I'm not going to type it here)
>>
>> The problem is that at school you learnt about "functions" - first
>> things like x+1, x^2+4x-7, which you later learnt are called
>> polynomials, then things like sin(x), and possibly sinh(x), Bessel
>> functions, and various other exotic varieties. I imagine these all fall
>> into the category of what you call "formulaic". But the general notion
>> of a mapping is unimaginably bigger than these very specific examples,
>> which were chosen (of course) because they are manageable. Any scheme
>> based on "formulaic" anything is not going to apply to almost all cases
>> (in some reasonable sense).
>
>Another aspect of this problem is that over the last couple of
>centuries, mathematicians realized that the best way to view functions
>is as mappings, not as formulas.

Is the best way the true way?

> This led to the modern definition of
>function. And, the fact that the function is sin, not sin(x) (the latter
>being a number). It seems Tony thinks the old way of viewing formulas as
>fundamental is better. I suppose it is possible that Tony has discovered
>something that a couple of centuries of mathematicians have missed, but
>until I can see a fleshed-out theory, I remain skeptical.
>
>> A more mundane problem is that in practice your approach to a problem
>> is to intuit the desirable answer, wave your hands, produce a
>> "formula", then start arguing. But anyway...
>
>This would seem to imply that it will be a while until we see a fleshed-
>out theory from Tony.

~v~~
From: imaginatorium on

David Marcus wrote:
> imaginatorium(a)despammed.com wrote:
> > Tony Orlow wrote:
> > > imaginatorium(a)despammed.com wrote:
>
> > > > "Bigulosity" has never been sufficiently clearly defined to tell, but
> > > > since you get very steamed up about subsets, and since the only known
> > > > coherent claim is that A proper subset of B -> b(A) < b(B), it's
> > > > extremely unlikely Bigulosity could be extended to become a total
> > > > ordering.
> > >
> > > Bigulosity is not based on the subset relation, but on formulaic
> > > mappings and infinite-case induction.
> >
> > Let me just point you to one of your problems here: do you understand
> > the normal set-theoretic definition of a "function" (or mapping)? (Go
> > look it up; I'm not going to type it here)
> >
> > The problem is that at school you learnt about "functions" - first
> > things like x+1, x^2+4x-7, which you later learnt are called
> > polynomials, then things like sin(x), and possibly sinh(x), Bessel
> > functions, and various other exotic varieties. I imagine these all fall
> > into the category of what you call "formulaic". But the general notion
> > of a mapping is unimaginably bigger than these very specific examples,
> > which were chosen (of course) because they are manageable. Any scheme
> > based on "formulaic" anything is not going to apply to almost all cases
> > (in some reasonable sense).
>
> Another aspect of this problem is that over the last couple of
> centuries, mathematicians realized that the best way to view functions
> is as mappings, not as formulas. This led to the modern definition of
> function. And, the fact that the function is sin, not sin(x) (the latter
> being a number).

I understand the point you are trying to make, but I think this, um,
tetchiness about notation is counterproductive. For a start, if sin(x)
is a number, please tell me its third decimal digit. If x is a number
then so is sin(x), but that's not much stronger than saying that IF
Big'un is a specific declared infinity, so is Big'un^2/7.

Of course there are many different ways of writing functions - I seem
to remember Dr Roseblade in Algebra I using postfix notation, which
might on reflection have been intended to help students like me see
that the notion of mapping in algebra is much more general than the
agglutination of manageable examples in school maths. And in a printed
book, fancy typography helps a lot in keeping things clear, whereas on
usenet, with my level of typos and whatnot, it seems to me that
anything reducing the possibility of confusion is a good thing. If I
write sin() to refer to the function whose name is sin, the only people
likely to be confused are Computation Literalists who assume that sin()
refers to the formal value returned by the sin function when it wasn't
given any arguments. Hmm.

> It seems Tony thinks the old way of viewing formulas as
> fundamental is better. I suppose it is possible that Tony has discovered
> something that a couple of centuries of mathematicians have missed, but
> until I can see a fleshed-out theory, I remain skeptical.

So do I. Here's a bit from elsewhere in this forum, where JSH is
whingeing about "artificial" functions:

>------- JSH --------
Well, made up "functions" are one thing but when you stick a variable
like x in with REAL functions you get functional behavior, like
continuity. Yeah, you can get creative with wacky functions, but
having actual variables in there with some kind of mathematical
expression, like with

a(x) = (7x - 1 + sqrt((7x-1)^2 -4*(49x^2 - 14x)))/2

means they have to BEHAVE like mathematical expressions. Notice in
arguments with posters they'll do weird things like say, well the
function equals 6 at this value and x+1 or something else at some other
value, as they sit there and deliberately make up something that can't
behave like

[da capo ad libitum]
>------- END JSH --------

Explicitly, according to JSH, "functional behavior" includes
continuity. Seems remarkably like Tony's argument.
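Incidentally, the "equals 6 at this value and x+1 at some other value" construction that JSH finds objectionable is a perfectly good function by the set-theoretic definition. A throwaway Python sketch (the specific values are from his complaint, otherwise mine):

```python
def f(x):
    # A piecewise rule: one output for each input is all the
    # set-theoretic definition of "function" requires.
    # Continuity is a further property some functions happen
    # to have, not part of the definition.
    return 6 if x == 0 else x + 1

assert f(0) == 6   # the "weird" special case
assert f(2) == 3   # elsewhere, x + 1
```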

Brian Chandler
http://imaginatorium.org

From: David Marcus on
imaginatorium(a)despammed.com wrote:
> David Marcus wrote:
> > imaginatorium(a)despammed.com wrote:
> > > Tony Orlow wrote:

> > > > Bigulosity is not based on the subset relation, but on formulaic
> > > > mappings and infinite-case induction.
> > >
> > > Let me just point you to one of your problems here: do you understand
> > > the normal set-theoretic definition of a "function" (or mapping)? (Go
> > > look it up; I'm not going to type it here)
> > >
> > > The problem is that at school you learnt about "functions" - first
> > > things like x+1, x^2+4x-7, which you later learnt are called
> > > polynomials, then things like sin(x), and possibly sinh(x), Bessel
> > > functions, and various other exotic varieties. I imagine these all fall
> > > into the category of what you call "formulaic". But the general notion
> > > of a mapping is unimaginably bigger than these very specific examples,
> > > which were chosen (of course) because they are manageable. Any scheme
> > > based on "formulaic" anything is not going to apply to almost all cases
> > > (in some reasonable sense).
> >
> > Another aspect of this problem is that over the last couple of
> > centuries, mathematicians realized that the best way to view functions
> > is as mappings, not as formulas. This led to the modern definition of
> > function. And, the fact that the function is sin, not sin(x) (the latter
> > being a number).
>
> I understand the point you are trying to make, but I think this, um,
> tetchiness about notation is counterproductive.

I will let my betters argue for my position: Have you read "How to write
mathematics" by Paul Halmos, in particular Section 14? Spivak's Calculus
is also quite scrupulous on this point, as are many other well-written
textbooks.

> For a start, if sin(x) is a number, please tell me its third decimal digit.

Be happy to. Just as soon as you tell me what x is.

> If x is a number
> then so is sin(x), but that's not much stronger than saying that IF
> Big'un is a specific declared infinity, so is Big'un^2/7.

I don't get the analogy. x is a number. What else could it be?

> Of course there are many different ways of writing functions - I seem
> to remember Dr Roseblade in Algebra I using postfix notation,

Algebraists sometimes use notation that makes the algebraic aspects of
functions clearer, e.g., making composition look notationally like
multiplication.

> which
> might on reflection have been intended to help students like me see
> that the notion of mapping in algebra is much more general than the
> agglutination of manageable examples in school maths. And in a printed
> book, fancy typography helps a lot in keeping things clear, whereas on
> usenet, with my level of typos and whatnot, it seems to me that
> anything reducing the possibility of confusion is a good thing. If I
> write sin() to refer to the function whose name is sin, the only people
> likely to be confused are Computation Literalists who assume that sin()
> refers to the formal value returned by the sin function when it wasn't
> given any arguments. Hmm.

Seems to me you'll have fewer typos if you type fewer characters!

If "sin()" is the name of the function, then its value at pi should be
written "sin()(pi)".

In advanced mathematics, often functions have values that are functions,
so the distinction between f and f(x) can be crucial, and using one when
you mean the other can change the meaning and/or seriously confuse the
reader. In all of mathematics, being precise and accurate fosters clear
thinking.
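A small illustration of that last point (the names are mine, not from the thread): in a language where functions are values, confusing f with f(x) is a type error, not a style quibble.

```python
def power(n):
    # power is a function whose value power(n) is itself a
    # function; power(n)(x) is a number. Three different kinds
    # of object, and the notation keeps them apart.
    def f(x):
        return x ** n
    return f

square = power(2)        # a function, analogous to "sin"
assert square(5) == 25   # a number, analogous to "sin(x)"
assert power(3)(2) == 8
```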

--
David Marcus
From: Ross A. Finlayson on
imaginatorium(a)despammed.com wrote:
....
>
> The minimum number of faces is obviously 4; not less than 3 faces must
> meet each vertex, and there must be more than one vertex. I make a
> little list of the number of possibilities:
>
> 4: 1 (tetrahedron)
> 5: 2 (square pyramid, triangular prism)
> 6: (pentagonal prism, cube, etc. at this point, cheat
> http://www.research.att.com/~njas/sequences/table?a=944&fmt=4 )
> 7: and so on
>
> Now I would notice that by "topologically distinct polyhedron" I am
> referring to a bounded geometrical object; not only bounded, but also
> discrete, in the sense that if I have one in my hand I know I can count
> the vertices. A bit of minor handwaving, and I would see that for any
> number of vertices there will be a limited number of possibilities for
> arranging that number of vertices into a polyhedron. So I know that I
> can pick an ordering scheme, and put all of the polyhedra in it. So I
> can "count" them, in the sense that I know that with my counting scheme
> there will not be a polyhedron that escapes it. I also notice that this
> counting sequence will never end, because there is no maximum to the
> number of vertices. After all, if there were, then given a polyhedron
> with that number of vertices I could simply pick any face, construct a
> pyramid on that face, and get a polyhedron with more than the supposed
> maximum number of vertices, which proves (by contradiction) that my
> sequence of polyhedra never ends. (This is all incredibly obvious, but
> I'm afraid I never know which incredibly obvious bits you still haven't
> grokked.)
>
> So with the constraints of the present audience, I would express this
> by saying that I can see I can "count" the polyhedra, a process which
> will account for every one of them, given enough time, but I can also
> see that the process of counting will never end.
>
> It's much messier, but I can do a similar thing for the polygons with
> vertices having integral x-y coordinates.
>
> How could I compare the two sets? I don't know - in both cases it's
> true that I can find a way of counting them, and that the counting will
> never end. Since it never ends, it's hard to see how I might think that
> the counting of the polygons was going to be over before the counting
> of the polyhedra, or vice versa. It's not even possible to say of a
> process that never ends that after so many counts the process is "an
> appreciable way to completion", because there _is_ no completion.
>
> So my answer is rather limited: both sets are (ok, dammit, in normal
> words) countably infinite; there's no way obvious to me that I could
> regard either as "less" endlessly endless than the other.
>
> What's your answer? You appear to be the one claiming to have "numbers"
> for counting things when the counting never ends: do you have any here?
> Or are you happy to accept that these two sets, and the set of pofnats
> can all be put in 1-1 correspondence, or (in normal words again) have
> the same cardinality? Perhaps this particular fact raises no objections
> from your intuition module?
>
> Notice that of course there are other things one can say about these
> sets. For example, the number of polyhedrons with n edges [E(n)] is
> always less than the number with n vertices [V(n)] (= the number with n
> faces); I can see that E(n) < V(n-2), but "on average" these are
> close(?). Does the ratio V(n)/E(n) approach a particular value? I don't
> know.
>
> Brian Chandler
> http://imaginatorium.org

Maybe n-gonometry would help, towards n-k-hedrometry.

Re "countable uncountable", does it not seem that nested intervals must
apply to irrationals else it wouldn't?

Ross
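For what it's worth, the counting scheme Brian describes, exhausting each finite class of polyhedra in turn, can be sketched mechanically. The per-class counts for 4 and 5 faces are from his list; the later values are my reading of the OEIS table he links and should be double-checked.

```python
def enumerate_by_class(counts):
    # counts maps a size parameter (here: number of faces) to
    # the finite number of distinct polyhedra of that size.
    # Each class is exhausted in turn, so every (size, index)
    # label eventually receives a natural number, yet the
    # numbering has no end when there is no largest class.
    k = 0
    for n in sorted(counts):
        for i in range(counts[n]):
            yield k, (n, i)
            k += 1

# Illustrative counts; values past 5 faces are hypothetical
# transcriptions from the linked table.
counts = {4: 1, 5: 2, 6: 7, 7: 34}
labels = dict(enumerate_by_class(counts))
assert labels[0] == (4, 0)   # the tetrahedron comes first
assert labels[3] == (6, 0)   # after the two 5-face solids
assert len(labels) == 1 + 2 + 7 + 34
```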
