From: Rami on
I have a project in VC 6.0 which is composed of one main executable and several DLLs.
I want to convert all of the projects to UNICODE gradually. However, since it is a
gradual process, I may need to work with ASCII as well during debugging and
integration.
Is it possible to do so?
Are there any constraints on the interfaces in this case (or any other)?

Regards
Rami


From: Ajay Kalra on
I would convert all of it or nothing. It would be futile to keep some of it in
UNICODE and the rest in ASCII; it will be painful and a wasted effort to keep
both going at the same time. You should take the plunge and do the whole thing
in one shot.

Also, why are you converting to UNICODE? Is this a new requirement?

---
Ajay



"Rami" <rami(a)giron.com> wrote in message
news:uFWvSn3gIHA.6032(a)TK2MSFTNGP03.phx.gbl...
>I have a project in VC 6.0 which composed of a one main and several dll's.
> I want to convert all the projects gradually to UNICODE. However, since
> its
> a gradual process, through the debugging and integration, I may need to
> work
> with ASCII as well.
> Is it possible to do so?
> Is there some constrain on the interface in this case(or some other)?
>
> Regards
> Rami
>
>

From: Joseph M. Newcomer on
This would be a Bad Idea. You have to convert all of it to use Unicode. Note that you
can do part of the conversion gradually while still building it as ANSI. For example,
make sure all uses of 'char' are replaced by 'TCHAR', all uses of 'char *' by 'LPTSTR',
and all uses of 'const char *' by 'LPCTSTR'. Make sure all "..." and '.' literals have
_T() around them. Replace all str... functions with their corresponding _tcs...
functions. Watch out for confusion caused by thinking strlen or .GetLength() tells you
the number of bytes, and fix those places to multiply by sizeof(TCHAR) as part of the
computation (and the inverse: byte counts disguised as character counts). Ditto for
sizeof(). When you have all the code converted like this, just build a new
configuration in VS6 for Unicode Debug and one for Unicode Release, build Unicode
Debug, and start testing. This is by far the best approach. Don't try to convert one
DLL to Unicode-only and integrate it into an ANSI app; that way madness lies.
joe
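
A minimal sketch of that kind of mechanical conversion, with invented names, assuming
only <tchar.h> and the Windows headers. Built with the current ANSI settings it behaves
exactly as before; it compiles as wide-character code once UNICODE/_UNICODE are defined:

    #include <windows.h>
    #include <tchar.h>
    #include <stdlib.h>
    #include <string.h>

    void CopyGreeting(void)
    {
        // char buf[64]        ->  TCHAR buf[64]
        // const char *msg     ->  LPCTSTR msg
        // "hello"             ->  _T("hello")
        // strcpy / strlen     ->  _tcscpy / _tcslen
        TCHAR buf[64];
        LPCTSTR msg = _T("hello");
        _tcscpy(buf, msg);

        // _tcslen() returns a CHARACTER count; where an allocation or an
        // API really wants bytes, scale by sizeof(TCHAR).
        size_t chars = _tcslen(msg);
        BYTE *raw = (BYTE *)malloc((chars + 1) * sizeof(TCHAR));
        memcpy(raw, msg, (chars + 1) * sizeof(TCHAR));
        free(raw);
    }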

Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Rami on
Thanks,
Following your detailed answer I looked again at the project settings and
saw that all of them have the "_MBCS" preprocessor definition.
Should I delete it, or just add the "UNICODE" definition?
Thanks
Rami

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in message
news:m95dt3l47s9gbigdkqj6qvv2jrlb61tgdi(a)4ax.com...
> This would be a Bad Idea. You have to convert all of it to use Unicode.
> Note that you
> can do part of the conversion gradually, but leave it as ANSI. For
> example, make sure all
> uses of 'char' are replaced by 'TCHAR', all uses of 'char *' by 'LPTSTR'
> and all uses of
> 'const char *' by LPCTSTR. Make sure all "..." and '.' literals have _T()
> around them.
> Replace all str... functions with their corresponding _tcs... functions.
> Watch out for
> confusions caused by thinking strlen or .GetLength() tell you the number
> of bytes, and fix
> them to do *sizeof(TCHAR) as part of the computation (and the inverse:
> byte counts
> disguised as character counts). Ditto for sizeof(). When you have all
> the code converted
> like this, then just build a new configuration in VS6 for Unicode Debug
> and one for
> Unicode Release, build in the Unicode Debug and start testing. This is by
> far the best
> approach. Don't try to convert one DLL to Unicode-only, and try to
> integrate it to an
> ANSI app; that way madness lies.
> joe


From: Ajay Kalra on
If you want UNICODE, you should not have MBCS or _MBCS at all in your
settings. You should use UNICODE and _UNICODE instead.

--
Ajay
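
To catch a half-converted configuration early, here is a sketch of a compile-time
guard (added for illustration, not from the original settings). UNICODE controls the
wide versions of the Windows API declarations, while _UNICODE controls the CRT's
<tchar.h> mappings, so the two should be defined together and _MBCS removed; a check
like this in a common header will fail the build if the project settings are
inconsistent:

    // UNICODE  -> selects the wide (W) Windows API declarations (windows.h)
    // _UNICODE -> selects the wide-character CRT mappings (<tchar.h>)
    // _MBCS    -> selects the multi-byte mappings and must not be mixed
    //             with the Unicode macros.
    #if defined(_UNICODE) && defined(_MBCS)
    #error "_UNICODE and _MBCS are both defined; remove _MBCS from the project settings"
    #endif

    #if defined(UNICODE) != defined(_UNICODE)
    #error "UNICODE and _UNICODE must be defined (or omitted) together"
    #endif

In VC6 the place to make the change is Project -> Settings -> C/C++ -> Preprocessor
definitions: delete _MBCS and add UNICODE and _UNICODE. For an MFC executable the
Unicode configuration also usually needs the linker entry point set to
wWinMainCRTStartup (Project -> Settings -> Link -> Output), or the link will fail
looking for WinMain.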

"Rami" <rami(a)goren.org.ir> wrote in message
news:eNaHDp5gIHA.5164(a)TK2MSFTNGP03.phx.gbl...
> Thanks,
> Following your detailed answer I looked again in the projects settings and
> saw that all of them have "_MBCS" preprocessor definition.
> Should I delete it or just add the "UNICODE" definition?
> Thanks
> Rami
>
> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in message
> news:m95dt3l47s9gbigdkqj6qvv2jrlb61tgdi(a)4ax.com...
>> This would be a Bad Idea. You have to convert all of it to use Unicode.
>> Note that you
>> can do part of the conversion gradually, but leave it as ANSI. For
>> example, make sure all
>> uses of 'char' are replaced by 'TCHAR', all uses of 'char *' by 'LPTSTR'
>> and all uses of
>> 'const char *' by LPCTSTR. Make sure all "..." and '.' literals have
>> _T() around them.
>> Replace all str... functions with their corresponding _tcs... functions.
>> Watch out for
>> confusions caused by thinking strlen or .GetLength() tell you the number
>> of bytes, and fix
>> them to do *sizeof(TCHAR) as part of the computation (and the inverse:
>> byte counts
>> disguised as character counts). Ditto for sizeof(). When you have all
>> the code converted
>> like this, then just build a new configuration in VS6 for Unicode Debug
>> and one for
>> Unicode Release, build in the Unicode Debug and start testing. This is
>> by far the best
>> approach. Don't try to convert one DLL to Unicode-only, and try to
>> integrate it to an
>> ANSI app; that way madness lies.
>> joe
>
>