From: Hector Santos on
Joseph M. Newcomer wrote:

> See below...
> On Wed, 23 Jun 2010 19:48:26 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>
>> Joseph M. Newcomer wrote:
>>
>>> See below...
>>> On Wed, 23 Jun 2010 15:29:21 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>>>
>>>> I don't know Joe. You make me feel like everything I did was wrong. :)
>>>>
>>>> I was an APL programmer; transformation, black boxes, and stacking of
>>>> such concepts were the way I thought, and if you could do it all in one
>>>> line, the better.
>>> ****
>>> APL was not pure FP, because there was always output. As soon as you have a side effect,
>>> you move outside the original FP model.
>>
>> FP by definition is either dyadic or monadic functional programming,
>> and nothing more. After 30 years, it's news to me for anyone to
>> suggest the idea that it wasn't a "pure" functional programming
>> language, which by itself isn't well defined, or if you only want to
>> base it on a lambda adaptation or define it only in more recent history
>> or research to the evolving FP paradigm. :)
> ****
> I have not used APL since 1968. But while the core APL language was purely functional,
> there is always that nasty little part where output was produced. And that's where the FP
> "side-effect-free" paradigm breaks down. That output is a side effect. LISP, a language
> I used for many years, was in its purest form FP, but I had to print output, and write to
> disk. Also, I had to manage internal state of my workspace, and that also violated the FP
> model (LISP suffered from a number of serious failures, not the least of which was the
> inability to specify modules and interfaces, so I had to build an entire source management
> system inside my workspace to handle the hundreds and hundreds of functions and impose
> some rationale on them)


I guess you have a point, but LISP and PROLOG also fall into the same
trap when you attempt to use them for human interfacing.

The purest form of FP, IMV, is when you can go from point A to Z with
each point being a transformation, without, as you say, any human
interaction or output.

When using a stacking protocol today, we don't expect things to be
interrupted.

string b64 = s.ToLower().Replace("\n","").ToBase64();

Is there a failure point for ToLower() or Replace() or ToBase64()?

Possibly ToBase64(), but most of the time, no.

However, I did need a "Replace()" function that returns a condition
indicating whether anything was replaced or not.
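
Roughly, something like this sketch (the ToBase64() extension here is
hypothetical - .NET strings don't have one built in - and ReplaceEx() is
just the idea of a Replace() that also reports whether it did anything):

    using System;
    using System.Text;

    static class StringTransforms
    {
        // Hypothetical extension so the chained one-liner above compiles.
        public static string ToBase64(this string s)
        {
            return Convert.ToBase64String(Encoding.UTF8.GetBytes(s));
        }

        // A Replace() variant that also reports whether anything was replaced,
        // which plain string.Replace() never tells you.
        public static string ReplaceEx(this string s, string oldValue,
                                       string newValue, out bool replaced)
        {
            replaced = s.Contains(oldValue);
            return replaced ? s.Replace(oldValue, newValue) : s;
        }
    }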

But geez, it's been a while since I played with APL axioms that would
blow people away, proving the old saying "if an APL programmer could not
program the world in one line, he wouldn't program at all," or
something like that. :)

Overall, your point is well taken, though. APL was pretty useful if you
did not have to interface with the end user. The vector and matrix
transformations had to be pretty well balanced to run long stacked lines
without error, and if there was an error, a workspace exception was thrown.


>> Well sure, maybe for most laymen but not for the APL purist. Most
>> good APL programmers thought out the solution first in their mind. I
>> always said it helped me as I learned other languages; I had an APL
>> mindset when I was writing equivalent functionality.
> ****
> This did not change the fact that it was a write-only language, or that it was hard to
> read an APL program and deduce what it was trying to do. This is quite different from
> "thinking through" a program which worked for perfect data input.
> ****


Well, back then, with APL, I was still doing heavy engineering
modeling, heavy in diff EQs, calculus, finite element analysis, etc.
Today, I wouldn't survive. :) I recall having a great appreciation
for APL for finite element work - it felt very natural.

>> In all honesty, without IntelliSense in VS2010, I would be having a
>> much tougher or less productive time looking things up.
> ***
> Intellinonsense has several design defects. For example, it uses the raw API names (the A
> and W suffixed names) instead of the official API names; when I get to a bitfield, it
> gives me a UINT or DWORD instead of a list of options, and so on. I find this completely
> useless.
> ****


Well, you have something to compare against. I don't, because I never
really depended on or used it before; I turned it off. Today, I think
it's great. :)

>> I have yet to see anything that has yielded a negative impression or bad
>> rap for IntelliSense or VS2010 itself. The help is better; F1 gives
>> you a suggestion, but I use the speedy Chrome as my MSDN help. It's
>> wonderful. I'm having a blast with VS2010 and can't wait to begin
>> migrating my products over in earnest. Maybe then I may see issues.
>> But not yet. :)
> ***
> I found the VS2010 help to be among the worst products (if you can dignify anything this
> bad with the term "product") ever delivered; it clearly was designed by people who knew
> nothing about how end users actually used help. And the new UI that was released doesn't
> actually work. So I find the help system completely unusable.

The built-in help sucks; F1 takes you to a page with a recommended
link, and so far it does give you the right page, but it's slow.

But for the most part I am using Google Chrome - a super fast searcher,
and it seems to focus well on MSDN web sites. :) So I use Chrome
like I used the old MSDN CHM help, typing into the index to zoom in
on something. To me, it's fast enough, and it has excellent
bookmarking to keep me productive, so I am good with the help.

--
HLS
From: Joseph M. Newcomer on
See below...
On Fri, 25 Jun 2010 18:13:48 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Joseph M. Newcomer wrote:
>
>> See below...
>> On Wed, 23 Jun 2010 19:48:26 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>>
>>> Joseph M. Newcomer wrote:
>>>
>>>> See below...
>>>> On Wed, 23 Jun 2010 15:29:21 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>>>>
>>>>> I don't know Joe. You make me feel like everything I did was wrong. :)
>>>>>
>>>>> I was an APL programmer; transformation, black boxes, and stacking of
>>>>> such concepts were the way I thought, and if you could do it all in one
>>>>> line, the better.
>>>> ****
>>>> APL was not pure FP, because there was always output. As soon as you have a side effect,
>>>> you move outside the original FP model.
>>>
>>> FP by definition is either dyadic or monadic functional programming,
>>> and nothing more. After 30 years, it's news to me for anyone to
>>> suggest the idea that it wasn't a "pure" functional programming
>>> language, which by itself isn't well defined, or if you only want to
>>> base it on a lambda adaptation or define it only in more recent history
>>> or research to the evolving FP paradigm. :)
>> ****
>> I have not used APL since 1968. But while the core APL language was purely functional,
>> there is always that nasty little part where output was produced. And that's where the FP
>> "side-effect-free" paradigm breaks down. That output is a side effect. LISP, a language
>> I used for many years, was in its purest form FP, but I had to print output, and write to
>> disk. Also, I had to manage internal state of my workspace, and that also violated the FP
>> model (LISP suffered from a number of serious failures, not the least of which was the
>> inability to specify modules and interfaces, so I had to build an entire source management
>> system inside my workspace to handle the hundreds and hundreds of functions and impose
>> some rationale on them)
>
>
>I guess you have a point, but LISP and PROLOG also fall into the same
>trap when you attempt to use them for human interfacing.
>
>The purest form of FP, IMV, is when you can go from point A to Z with
>each point being a transformation, without, as you say, any human
>interaction or output.
>
>When using a stacking protocol today, we don't expect things to be
>interrupted.
>
> string b64 = s.ToLower().Replace("\n","").ToBase64();
>
>Is there a failure point for ToLower() or Replace() or ToBase64()?
>
>Possibly ToBase64(), but most of the time, no.
****
This is one of those great examples of pure functional models, and they work well.
Essentially, all FP works well if for all parameter values of a function, the parameters
meet the weakest precondition requirements. If they do not, the meaning of the function
is undefined, and you can do anything (stop the program, throw an exception, melt down the
CPU into a puddle of liquid silicon...). That is, the behavior is either well-defined or
completely undefined. I have no problem with that as a concept. But as a producer of
products, I have to do a little better than the compiler that says "You have a syntactic,
semantic, or pragmatic error somewhere in your source file" at the end. And that's where
things sort of break down.
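
To make that concrete (a sketch only, nobody's shipping code): the
difference between "undefined" and "diagnosable" is whether the weakest
precondition is checked and the failure reported in problem-domain terms:

    using System;
    using System.Text;

    static class Precondition
    {
        // Outside its precondition (s must be non-null text), the behavior
        // is effectively undefined from the caller's point of view.
        public static string Encode(string s)
        {
            return Convert.ToBase64String(Encoding.UTF8.GetBytes(s.Replace("\n", "")));
        }

        // Checking the precondition turns "undefined" into a specific,
        // reportable error instead of "something failed somewhere".
        public static string EncodeChecked(string s)
        {
            if (s == null)
                throw new ArgumentNullException("s", "Encode: no input text was supplied");
            return Encode(s);
        }
    }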
****
>
>However, I did need a "Replace()" function that returns a condition
>indicating whether anything was replaced or not.
>
>But geez, it's been a while since I played with APL axioms that would
>blow people away, proving the old saying "if an APL programmer could not
>program the world in one line, he wouldn't program at all," or
>something like that. :)
****
I once saw a one-liner that computed the breakage point for DNA analysis (predicting the
sub-molecules). It was (the author proudly told me) something like 130 characters long.
The problem was that if the input wasn't perfect, it failed, and he didn't know why.
****
>
>Overall, your point is well taken, though. APL was pretty useful if you
>did not have to interface with the end user. The vector and matrix
>transformations had to be pretty well balanced to run long stacked lines
>without error, and if there was an error, a workspace exception was thrown.
****
I worked in Ken Iverson's original APL. Some later versions had concepts for handling
exceptions, but that one did not.
****
>
>
>>> Well sure, maybe for most laymen but not for the APL purist. Most
>>> good APL programmers thought out the solution first in their mind. I
>>> always said it helped me as I learned other languages; I had an APL
>>> mindset when I was writing equivalent functionality.
>> ****
>> This did not change the fact that it was a write-only language, or that it was hard to
>> read an APL program and deduce what it was trying to do. This is quite different from
>> "thinking through" a program which worked for perfect data input.
>> ****
>
>
>Well, back then, with APL, I was still doing heavy engineering
>modeling, heavy in diff EQs, calculus, finite element analysis, etc.
>Today, I wouldn't survive. :) I recall having a great appreciation
>for APL for finite element work - it felt very natural.
****
Remember my characterization of APL: You hear a loud noise, you look down and see a hole
in your foot, but you can't remember enough linear algebra to figure out what happened.

When I was using APL, I was two years from a fairly intense linear algebra course. Today,
I'd have a lot more trouble making sense of it.
****
>
>>> In all honesty, without IntelliSense in VS2010, I would be having a
>>> much tougher or less productive time looking things up.
>> ***
>> Intellinonsense has several design defects. For example, it uses the raw API names (the A
>> and W suffixed names) instead of the official API names; when I get to a bitfield, it
>> gives me a UINT or DWORD instead of a list of options, and so on. I find this completely
>> useless.
>> ****
>
>
>Well, you have something to compare against. I don't, because I never
>really depended on or used it before; I turned it off. Today, I think
>it's great. :)
***
I found that they had not fixed this in VS 2010. So it is still useless to me. But I
have two monitors, and the off-center monitor holds the active MSDN documentation.
***
>
>>> I have yet to see anything that has yielded a negative impression or bad
>>> rap for IntelliSense or VS2010 itself. The help is better; F1 gives
>>> you a suggestion, but I use the speedy Chrome as my MSDN help. It's
>>> wonderful. I'm having a blast with VS2010 and can't wait to begin
>>> migrating my products over in earnest. Maybe then I may see issues.
>>> But not yet. :)
>> ***
>> I found the VS2010 help to be among the worst products (if you can dignify anything this
>> bad with the term "product") ever delivered; it clearly was designed by people who knew
>> nothing about how end users actually used help. And the new UI that was released doesn't
>> actually work. So I find the help system completely unusable.
>
>The built-in help sucks; F1 takes you to a page with a recommended
>link, and so far it does give you the right page, but it's slow.
>
>But for the most part I am using Google Chrome - a super fast searcher,
>and it seems to focus well on MSDN web sites. :) So I use Chrome
>like I used the old MSDN CHM help, typing into the index to zoom in
>on something. To me, it's fast enough, and it has excellent
>bookmarking to keep me productive, so I am good with the help.
****
Unfortunately, this doesn't work for me if I'm at 35,000 feet over Utah, 39,000 feet over
the Atlantic Ocean, or in the British equivalent of East Podunk where Internet access
costs $35/day for something less robust and slower than 56K dialup. Or in class, with no
access at all (in some places, if I were to turn on my wireless, or plug into the local
Ethernet, I would, within minutes, have security people in the room demanding to know who
I was. In some contexts, they would show up with weapons drawn. And, as warned, I would
be in for several Really Miserable Hours. Military site security is SERIOUS. Site
security at multinationals is almost as serious, except the security people wear suits and
don't carry weapons, but you can lose a contract and find they called the local
police...they call it "industrial espionage", and that's pretty messy, too. These
situations are actually pretty common for me. It's why I cannot carry a cell phone with a
camera. Last summer I taught at the British equivalent of the NSA, and I had to go
through three armed-guard stations just to get to the UNCLASSIFIED area!).

joe
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
I have been told by any number of people "Debug print statements became obsolete with the
invention of the interactive debugger". This is nonsense. As you point out, these are
important. The key here is that debug print and logging statements express the issue in
terms of the problem domain, whereas the debugger expresses the issue in terms of the
implementation details. The debug print/logging can be used (sometimes selectively) in
the release product, but the debugger cannot be shipped to end users.
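
In .NET terms, the "sometimes selectively" part can be as simple as a
TraceSwitch gating Trace output (a sketch only; PaymentProcessor and the
switch name are made up, and my own control is MFC, not this):

    using System.Diagnostics;

    class PaymentProcessor
    {
        // The switch level is set in the app's .config (or in code), so the same
        // release binary can be made chatty at a customer site without a debugger.
        static readonly TraceSwitch Log =
            new TraceSwitch("Payments", "Payment processing trace");

        public void Post(decimal amount)
        {
            Trace.WriteLineIf(Log.TraceInfo, "posting " + amount, "Payments");
            // ... real work ...
            Trace.WriteLineIf(Log.TraceVerbose, "post completed", "Payments");
        }
    }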

My Logging ListBox Control has the ability to either save the current log as a text file,
or to log-as-posted to a log file. I've been giving thought to coupling it to the Event
Log mechanism, but have done nothing yet. That control has saved us massive amounts of
effort in end-user tech support.
joe

On Fri, 25 Jun 2010 17:54:10 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Goran wrote:
>
>> On Jun 24, 5:14 pm, Hector Santos <sant9...(a)nospam.gmail.com> wrote:
>>> Goran wrote:
>>>> Now, supposing that you have several places where you might get into a
>>>> particular situation, and after seeing the error in your log, you
>>>> don't know where that place is (if you will, you know the line, but
>>>> it's important to know how you got to it). Then, you have
>>>> two possibilities:
>>>> 1. have "debug" logging, turn it on and look there to see where things
>>>> start to fall apart. Combine with error log and there you are.
>>>> 2. "enhance" original error while unwinding the stack (add "context"
>>>> if you will).
>>> Or use trace tags. :)
>>
>> What's that? Honest question, I don't know, and google-fu fails me.
>
>
>A technique to pepper your code with functional labels for your own
>help or for end-user support. I could give many examples, but the
>closest one is the .NET Debug command:
>
> Debug.WriteLine(object value [,string category]);
>
>which if the category string is passed, it will display in the debug
>console:
>
>    Category: value
>
>The category string is a trace tag for quick dissemination, but you
>can employ it in release code too - in log files, session information,
>exception/error displays, etc.
>
>I deem it an important programming and support concept. It helps :)
>
>Regarding the rest of your post, if I follow your main point, it's all
>a matter of how you wish to implement your application or protocol.
>
>For example, this internet client/server protocol is a solid framework
>used for a long time; the dispatch handler expects one result:
>
>    true to continue
>    false to force ending the connection
>
>But by tradition, most, if not ALL internet protocols ASSUME
>disconnects are client driven, i.e. QUIT.
>
>This is important, for example, in the POP3 protocol, where the standard
>says that if the client does not issue the QUIT command, then the POP3
>server SHOULD assume an aborted session and SHOULD NOT update any mail
>pointers for mail that was downloaded. The expected explicit QUIT
>command informs the server to move into the update state and record
>what the user has finished downloading. No QUIT, no update.
>
>In the same vein, the server MUST NOT abort clients unless it's a
>malicious or errant client in action, i.e., too many BAD commands.
>
>So our protocol model is based on these long established functional
>protocol requirements.
>
>Now, that said, there are models where you can THROW an exception that
>the DISPATCHER catches.
>
>But you have to be careful with this, because if any handler (or
>delegate) is doing a catch-all, then this can throw off the flow.
>
>So in his case, it needs to be very explicit, with specific exception
>traps so that others can fall through the chain.
>
>In fact, the Thunderbird SMTP mail client had this very problem back in
>2000, which I reported and helped fix after downloading the source code.
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=62836
>
>If I recall, TBIRD threw an exception to disconnect a session as a
>quick way to "jump" to the dispatcher and end the session. It
>didn't bother to send the QUIT command. But if I recall, the logic
>was there to send it; it just never got to it because of the exception
>used to end the session. SMTP also "technically" requires a QUIT
>command to signify a completed session.
>
>If I recall, it was simple to just move some code around to make sure
>a QUIT was always sent before the session was disconnected.
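
A rough sketch of the handler contract Hector describes above (the class
and command handling here are made up, not his actual framework):

    class Pop3Session
    {
        // Set only by an explicit QUIT; the server commits mail pointers
        // at disconnect only when this is true.
        public bool UpdateOnExit { get; private set; }

        // Returns true to keep the session going, false to end the connection.
        public bool Dispatch(string command)
        {
            switch (command.ToUpperInvariant())
            {
                case "QUIT":
                    UpdateOnExit = true;   // cleanly completed session
                    return false;          // end the connection normally
                case "STAT":
                case "RETR":
                case "DELE":
                    // ... handle the command, report errors in the reply ...
                    return true;
                default:
                    return true;           // bad command: reply -ERR, don't abort
            }
        }
    }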
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Hector Santos on
Joseph M. Newcomer wrote:

> I have been told by any number of people "Debug print statements became obsolete with the
> invention of the interactive debugger". This is nonsense. As you point out, these are
> important. The key here is that debug print and logging statements express the issue in
> terms of the problem domain, whereas the debugger expresses the issue in terms of the
> implementation details. The debug print/logging can be used (sometimes selectively) in
> the release product, but the debugger cannot be shipped to end users.


Right, I still have code for using OutputDebugString() and/or our own
DBGVIEW.
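
In .NET the same trace-tag idea is basically one call (MailStore and the
messages here are just made-up examples); the category shows up in the
debug console or DebugView as "MailStore: opening hector":

    using System.Diagnostics;

    class MailStore
    {
        public void Open(string mailbox)
        {
            // The category argument ("MailStore") is the trace tag.
            Debug.WriteLine("opening " + mailbox, "MailStore");
            // ... real work ...
            Debug.WriteLine("opened " + mailbox, "MailStore");
        }
    }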

> My Logging ListBox Control has the ability to either save the current log as a text file,
> or to log-as-posted to a log file. I've been giving thought to coupling it to the Event
> Log mechanism, but have done nothing yet. That control has saved us massive amounts of
> effort in end-user tech support.
> joe


Yes, I have an abstract class for this with virtual functions for
display output and storage, including optional event logging, log
rotation, macros to format line output, etc. One of those solid
tools that sticks around for a long time - that is, of course, until MS
forces you to port things. :)
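
The shape of it in .NET terms would be something like this sketch (not
my actual class; the names are illustrative):

    using System;
    using System.IO;

    abstract class LogBase
    {
        // Derived classes decide where a formatted line actually goes:
        // screen, file, event log, debug console, ...
        protected abstract void WriteLine(string line);

        public void Log(string category, string format, params object[] args)
        {
            string line = string.Format("{0:u} [{1}] {2}",
                DateTime.Now, category, string.Format(format, args));
            WriteLine(line);
        }
    }

    class FileLog : LogBase
    {
        private readonly string path;
        public FileLog(string path) { this.path = path; }

        protected override void WriteLine(string line)
        {
            File.AppendAllText(path, line + Environment.NewLine);
        }
    }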

But that is what I like about .NET: so much of what your own libraries
provided is now available, or something similar is, built into .NET.

What it doesn't have, and I need, is an ANSI terminal window. :) I think
I might do this via Interop and COM so I can reuse my C/C++ classes here.
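
If it comes to that, the .NET side of a plain P/Invoke bridge (as opposed
to full COM Interop) is not much more than declarations like these (a
sketch only - the DLL and export names are made up, assuming the existing
C/C++ terminal code gets exposed from a native DLL):

    using System;
    using System.Runtime.InteropServices;

    static class AnsiTerm
    {
        [DllImport("ansiterm.dll", CharSet = CharSet.Ansi)]
        public static extern IntPtr TermCreate(int cols, int rows);

        [DllImport("ansiterm.dll", CharSet = CharSet.Ansi)]
        public static extern void TermWrite(IntPtr term, string ansiText);

        [DllImport("ansiterm.dll")]
        public static extern void TermDestroy(IntPtr term);
    }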

--
HLS