From: David Ching on
"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:e7Ft6ZyELHA.5700(a)TK2MSFTNGP04.phx.gbl...
> I did some checking.
>
[...]
>
> It's probably safe with threads in the same EXE, but it needs to be confirmed
> with different processes appending to the same file.
>

Good detective work! :-)

-- David

From: Hector Santos on
Joseph M. Newcomer wrote:

> See below...
> On Wed, 23 Jun 2010 15:29:21 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>
>> I don't know, Joe. You make me feel like everything I did was wrong. :)
>>
>> I was an APL programmer; transformation, black boxes, stacking of such
>> concepts was the way I thought, and if you can do it all in one line,
>> the better.
> ****
> APL was not pure FP, because there was always output. As soon as you have a side effect,
> you move outside the original FP model.


FP by definition is dyadic or monadic functional programming, and
nothing more. After 30 years, it's news to me for anyone to suggest
that it wasn't a "pure" functional programming language, a notion that
by itself isn't well defined unless you base it only on a lambda
adaptation, or define it only by the more recent history and research
of the evolving FP paradigm. :)

> ***
>> The concept of errors was secondary, and a side issue really, that
>> should never be part of the overall FP framework. I mean, of course,
>> you have to design for it, but it's a side issue when it comes to FP.
> ****
> FP always made the assumption that input data was perfect, and algorithms were perfect,
> and therefore execution would always run to completion successfully.


Or, to be more exact, perfectly aligned "transformations."

> When APL hit an error, it just stopped dead. The code terminated, and told you why. Of
> course, being a write-only language, it was really hard to decipher what you were doing.


Well sure, maybe for most laymen, but not for the APL purist. Most
good APL programmers thought out the solution first in their minds. I
always said it helped me as I learned other languages; I had an APL
mindset when I was writing equivalent functionality.

> [I often have said that the reason I passed one of the PhD qualifiers was that I found a
> bug in the APL example, proving the rho operator would fail with an error. I then said
> "You probably intended to write..." and showed the corrected code, and then I answered the
> question]
> ****


You are/was/were certainly a character. :)

>> And I guess even for native images there is an RTE - the OS and its
>> Doctor Watson recordings.
>>
>> So for me, as I am learning .NET (and I have used .NET here and there
>> in the past 10 years, but never as deeply as I am now), I think the
>> overall issue for me, is learning what the possible error conditions
>> with the rich .NET library. Because until you become an expert in it,
>> all you have to save you is using exception traps.
> ****
> I've been fairly generous with try/catch blocks in environments that resembled what .NET
> now is. It really is a pain.
> ****


In C/C++ code, I only used exception traps because of the
implementation of classes, where you have no other choice.

In .NET, you really have no CHOICE :)
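
A minimal sketch of that point: once failure can happen inside a
constructor, throwing is the only way to report it, and the caller is
forced into try/catch (class and names here are invented purely for
illustration):

```cpp
#include <stdexcept>
#include <string>

// A constructor has no return value, so its only failure channel is
// an exception -- the same situation .NET puts you in everywhere.
class LogFile {
public:
    explicit LogFile(const std::string& path) {
        if (path.empty())
            throw std::invalid_argument("empty log path");
        path_ = path;
    }
private:
    std::string path_;
};

// Translate the exception back into a status code at the boundary.
bool try_open_log(const std::string& path) {
    try {
        LogFile log(path);   // can only fail by throwing
        return true;
    } catch (const std::invalid_argument&) {
        return false;
    }
}
```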

>> I like .NET, I think it really helps people in dealing with both code
>> syntax and also whats NOT possible or even if intellisense doesn't
>> tell you, its internal global catch all - will. :)
> ***
> Is Intellisense actually Intelli*sense* in .NET? Certainly in C++ it is at best
> Intelli*nonsense*. I've done some C# code, I like the language, but none of my clients
> have the slightest interest in it.
> joe


In all honesty, without IntelliSense in VS2010, I would be having a
much tougher and less productive time looking things up.

I can't compare with VS98 and VS2005 because I never depended on it
before; I turned it off because it was awfully slow for me.

But in VS2010, I am truly impressed with the IDE, and IntelliSense
helps in almost everything, including exposing only what's possible
within block contexts and namespaces. It's not slow at all for me, and
I really love how it automatically jumps to the proper selection of
constructors, variables, types, etc., that make sense for the new
instance, construct or block.

I have yet to see anything that has yielded a negative impression or a
bad rap for IntelliSense or VS2010 itself. The help is better: F1
gives you a suggestion, but I use the speedy Chrome as my MSDN help.
It's wonderful. I'm having a blast with VS2010 and can't wait to begin
migrating my products over in earnest. Maybe then I will see issues.
But not yet. :)

--
HLS
From: Hector Santos on
David Ching wrote:

> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in message
> news:8445269je2c1ekr6ukuj6g0e5gcj9iv2ar(a)4ax.com...
>> while
>> you are writing, anyone else should be free to write in non-append
>> mode. So if I am
>> adding record 703, there is no reason to prevent anyone from
>> (re)writing records 0..702.
>> That's why file locks exist. And they can lock the range of record 703.
>
> Interesting, thanks. I hadn't thought of allowing other processes to
> rewrite existing data while only you could append new data. That seems
> like it would be rarely used, perhaps that is the comment you got about
> only 5% of the people needed to do that?

I think it was a subjective comment, because he was a C/C++ programmer
who did many server applications over the years where you append to
shared log files. To me, almost all such developers would immediately
discover this issue and need it in .NET. But then again, maybe server
applications are only 5% of the market :)
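
Joe's byte-range locking point can be sketched with the POSIX
analogue, fcntl() record locks (on Win32 the counterpart is
LockFile/LockFileEx). The record size is an invented illustration:
locking only the record being written leaves every earlier record free
for other processes to rewrite.

```cpp
#include <fcntl.h>
#include <unistd.h>

constexpr long kRecordSize = 128;   // invented fixed record size

// Take an exclusive write lock on one record; returns 0 on success.
// Only bytes [record_no*kRecordSize, +kRecordSize) are locked, so
// writers of records 0..record_no-1 are not blocked.
int lock_record(int fd, long record_no) {
    struct flock fl {};
    fl.l_type   = F_WRLCK;                  // exclusive (write) lock
    fl.l_whence = SEEK_SET;
    fl.l_start  = record_no * kRecordSize;
    fl.l_len    = kRecordSize;
    return fcntl(fd, F_SETLKW, &fl);        // wait if already locked
}

// Release the lock on the same record.
int unlock_record(int fd, long record_no) {
    struct flock fl {};
    fl.l_type   = F_UNLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start  = record_no * kRecordSize;
    fl.l_len    = kRecordSize;
    return fcntl(fd, F_SETLK, &fl);
}
```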

Here's the thread:

http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/13ac128c-3988-49a5-b1e0-1da7ff7f05f4

To me, the decision was made to be "safe" as the simplest approach, if
we consider that MS focused on security issues for .NET; duplicating
the FileShare.ReadWrite idea would require emulating the same
file-locking behavior in StreamWriter, which would somewhat conflict
with that security idea. Easier to let the "legacy" developers be
explicit about the share mode to use. But is it really legacy? I did
find other threads where programmers ran across the same issues and
needs.

Who knows? :)
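
For reference, the shared-append scenario under discussion can be
sketched with the POSIX analogue: O_APPEND makes each write()
atomically seek-to-end-and-write, so several processes can append to
one log. (On Win32 the equivalent is CreateFile with FILE_APPEND_DATA
and FILE_SHARE_READ | FILE_SHARE_WRITE; in .NET you must pass
FileShare.ReadWrite explicitly. The log path below is invented.)

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Open-append-close: with O_APPEND the kernel positions each write()
// at end-of-file atomically, so records from concurrent appenders do
// not overwrite one another. Returns 0 on success, -1 on failure.
int append_line(const char* path, const char* line) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, line, std::strlen(line));
    close(fd);
    return n < 0 ? -1 : 0;
}
```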


--
HLS
From: Hector Santos on
Hector Santos wrote:

>> ***
>> Is Intellisense actually Intelli*sense* in .NET? Certainly in C++ it
>> is at best
>> Intelli*nonsense*. I've done some C# code, I like the language, but
>> none of my clients
>> have the slightest interest in it.
>> joe
>

Joe, I missed this one.

So far, all I did was C#, a little VB.NET, and a bit of C++/.NET, and
IntelliSense was wonderful there. I'm beginning to really like C#.

I don't know yet what will happen when I begin to recompile my C/C++,
MFC products with VS2010.

The plan is to first keep the main RPC application server in C/C++ and
begin moving the RPC clients to .NET, especially the GUI clients.

I think MFC projects can be recompiled into .NET by adding Windows
Forms, etc., or, as soon as you add a reference to a .NET component,
it becomes a .NET applet.

--
HLS
From: Goran on
On Jun 23, 9:58 pm, Hector Santos <sant9...(a)nospam.gmail.com> wrote:
> That is why I say the real issue is the lack of documentation, or the
> "hidden" knowledge that is wrapped into classes.
>
> When you use fopen(), you know what the possible errors are, basically
>
>     invalid file path, error 2
>     not found, for an already-exists mode, error 3
>     read/write sharing issue, error 5 or 32
>
> but regardless of the error code, the #1 idea is that the FILE *
> stream variable is NULL.
>
> Some of the things I am coming across in .NET are that some old,
> traditional and SOLID concepts no longer apply.

I disagree very much with this observation. The thing is, even fopen
has more failure modes than that (and they are probably OS-specific,
too). And fopen is a mighty simple, very low-level operation.
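
For contrast, the fopen() contract Hector describes can be sketched in
a few lines: whatever the underlying cause, failure collapses into a
NULL FILE* plus an errno value (the path in the usage is invented, and
the exact errno values are OS-specific):

```cpp
#include <cerrno>
#include <cstdio>

// The classic C failure channel: a NULL stream signals failure, and
// errno carries the OS-specific reason (e.g. ENOENT for a bad path,
// EACCES for a permission/sharing problem). Returns 0 on success,
// else the errno value.
int open_or_report(const char* path, std::FILE** out) {
    *out = std::fopen(path, "r");
    if (*out == nullptr)
        return errno;   // one simple, lossy failure channel
    return 0;
}
```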

On "big" frameworks like .NET, one function hides much, much more
functionality and, consequently, many more failure modes. So without
exceptions, you can either simplify the failure info going out of the
function to stay in a "manageable" situation (effectively, lie), or
make it effectively unmanageable by specifying all the failure modes.

Here, a case in point is a random Win32 API: it's BOOL fn(params), and
the doc says: in case of failure, call GetLastError for more info.
Seldom is it defined what GetLastError might return. Why is that?
Simply because documenting all possible failure modes is mighty
irrational.
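
That convention can be sketched with invented names (this illustrates
the pattern only; it is not the real Win32 API, whose last-error slot
is per-thread):

```cpp
// Stand-in for the per-thread "last error" slot.
static int g_last_error = 0;

int get_last_error() { return g_last_error; }

// Plays the role of "BOOL fn(params)": the return value says only
// success or failure; the reason -- rarely enumerated in the docs --
// lives in the side channel.
bool do_operation(const char* param) {
    if (param == nullptr) {
        g_last_error = 87;   // cf. ERROR_INVALID_PARAMETER
        return false;
    }
    g_last_error = 0;
    return true;
}
```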

Goran.