From: r norman on
On Wed, 06 Dec 2006 16:52:59 -0500, Joseph M. Newcomer
<newcomer(a)flounder.com> wrote:

That is (as he always does) extremely useful advice.

Many thanks.



From: Joseph M. Newcomer on
See below...
On Wed, 06 Dec 2006 17:14:05 -0500, r norman <r_s_norman@_comcast.net> wrote:

>On Wed, 6 Dec 2006 13:52:19 -0800, "Tom Serface" <tserface(a)msn.com>
>wrote:
>
>>I'm actually making all new programs Unicode. It's not really that much
>>difference once you get into it (at least it hasn't been for us). You do
>>have to remember that a "char" is now more than one byte so things like
>>strlen() won't work as they did before. This article might help you some:
>>
>>http://msdn2.microsoft.com/en-us/library/805c56f8(VS.80).aspx
>>http://www.i18nguy.com/unicode/c-unicode.html
>>
>
>I have no problem with all Unicode. That is fairly easy by avoiding
>all "old-fashioned" strxxx functions and using CString. It is my
>having to deal with protocols specifying ASCII chars that is the
>issue. One approach I am seriously considering is to put all hardware
>interface code with those needs into independent processes and write
>all user interface as a separate process purely in Unicode.
****
You don't need separate processes, and in fact they introduce a lot of complexity. (I have
a Win16 app which is now 12 years old, and the way we got IT to run on Win32 was to write
a 32-bit "co-process" which handles serial I/O, networking, and speech output. It wasn't
a lot of fun, ultimately, but that system is still running, supporting a few thousand
users.)

I just isolate the 8-bit stuff into low-level communication modules, often DLLs. The real
design issue arises as to whether you ever see 8-bit characters outside the module/DLL.
These days, I favor doing the ANSI-to-T transformation inside the module, so that outside
it I can rely on the data being in T-mode (more and more commonly Unicode these days). It's
the compelling reason I've been able to use to get my VS6 clients moved to VS.NET.
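Sketched in portable C++, the boundary conversion looks roughly like this. (Assumptions
for illustration: std::wstring stands in for the T-mode/CString type, and AnsiToT is a
hypothetical helper; a real MFC build would use CString and CA2T or MultiByteToWideChar.)

```cpp
#include <cassert>
#include <string>

#ifdef ANSI_BUILD
typedef std::string tstring;                  // T == A build: no conversion needed
inline tstring AnsiToT(const std::string& s) { return s; }
#else
typedef std::wstring tstring;                 // T == W build: Unicode
inline tstring AnsiToT(const std::string& s) {
    // ASCII-only widening; real code would call MultiByteToWideChar
    return tstring(s.begin(), s.end());
}
#endif

// The only interface the rest of the program sees: it returns T-mode
// text, so no 8-bit characters escape the communication module.
tstring ReadDeviceLine() {
    std::string raw = "OK\r\n";               // pretend this came off the serial port
    return AnsiToT(raw);
}
```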
****
>That
>would also simplify the modifications as the hardware vendors modify
>their systems and change their protocols. I do have a pretty good
>separation of modules within my code but separate processes would help
>greatly. Then all the Unicode-ASCII conversions would occur at one
>point: the interprocess communication system.
****
DLLs should work just as well, and avoid the complexities of separate processes. That's
one of the reasons I now use T-mode exclusively; if I'm receiving CStringA data, I make
sure it is CString data before I send it out, and I accept CString data and convert it to
CStringA internally before sending it to the device. A couple #ifdefs make this very
efficient when T==A.
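The outgoing direction, with the #ifdef that makes the conversion free when T==A, might
be sketched like this. (Again std::wstring stands in for CString and TToAnsi/SendToDevice
are hypothetical names, not real ATL/MFC API.)

```cpp
#include <cassert>
#include <string>

#ifdef ANSI_BUILD
typedef std::string tstring;
inline std::string TToAnsi(const tstring& s) { return s; }  // T == A: a no-op
#else
typedef std::wstring tstring;
inline std::string TToAnsi(const tstring& s) {
    std::string out;
    for (wchar_t c : s) out += static_cast<char>(c);        // ASCII-only narrowing
    return out;
}
#endif

// Accept T-mode text, convert to 8-bit at the last moment, because
// the device protocol specifies ASCII characters.
size_t SendToDevice(const tstring& cmd) {
    std::string bytes = TToAnsi(cmd);
    return bytes.size();                      // pretend we wrote them to the port
}
```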
****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: r norman on
On Wed, 06 Dec 2006 23:07:01 -0500, Joseph M. Newcomer
<newcomer(a)flounder.com> wrote:

>[... quoted text snipped ...]

(I prefer bottom posting)

Yes, yours is a much easier method. Separate processes require
ensuring that they all get started properly and then that they all get
stopped properly, especially if one of them develops a problem (not
that any of my programs ever has problems!), not to mention needing a
more intricate interprocess communication mechanism.

Thanks, again.


From: Mihai N. on
> You're right. All of those macros (and there seem to be a ton of them) are
> handy once you divine the mystery of their names :o)
You just have to learn the rules:
http://msdn2.microsoft.com/en-us/library/87zae4a3(VS.80).aspx

He wanted "LPCTSTR to char *", meaning T to A = T2A :-)
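The names follow a source-2-destination pattern (W = wide, A = ANSI, T = generic text,
with C for const), so CT2A reads "const T to ANSI". A rough sketch of what T2A reduces
to in a Unicode build (T2A_sketch is a hypothetical stand-in, not the real ATL macro,
and it assumes ASCII-only data; the real macros call WideCharToMultiByte):

```cpp
#include <cassert>
#include <string>

// Naming rule examples:  T2A = T-mode to ANSI,  A2T = ANSI to T-mode,
// W2A = wide to ANSI,    CT2A = const T-mode to ANSI, and so on.
std::string T2A_sketch(const std::wstring& t) {
    std::string a;
    for (wchar_t c : t) a += static_cast<char>(c);  // ASCII-only narrowing
    return a;
}
```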

--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
From: Mihai N. on
> One approach I am seriously considering is to put all hardware
> interface code with those needs into independent processes and write
> all user interface as a separate process purely in Unicode. That
> would also simplify the modifications as the hardware vendors modify
> their systems and change their protocols. I do have a pretty good
> separation of modules within my code but separate processes would help
> greatly. Then all the Unicode-ASCII conversions would occur at one
> point: the interprocess communication system.

Good software engineering practices often help internationalization.
It sounds like a very good solution; I would vote for that
(especially since I don't have to do the work :-)


--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email