From: Zeljko Vrba on
On 2009-12-23, Branimir Maksimovic <bmaxa(a)hotmail.com> wrote:
> Andrew wrote:
>> I am designing a system where an app will need to spawn a child thread
>> then the child and parent thread will need to communicate. If this was
>> in java I would use ConcurrentLinkedQueue but what to do in C++? I
>> have googled and searched boost but cannot find anything.
>>
I'm not very familiar with Java, but Boost.interprocess has message queues:

http://www.boost.org/doc/libs/1_41_0/doc/html/interprocess/synchronization_mechanisms.html#interprocess.synchronization_mechanisms.message_queue
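
A minimal sketch of how it might be used (the queue name and the sizes here
are just placeholders, and error handling is omitted):

#include <boost/interprocess/ipc/message_queue.hpp>
#include <iostream>

namespace bip = boost::interprocess;

int main()
{
    // Remove any stale queue left over from a previous run.
    bip::message_queue::remove("demo_queue");

    // At most 100 messages of up to 64 bytes each.
    bip::message_queue mq(bip::create_only, "demo_queue", 100, 64);

    const char msg[] = "hello";
    mq.send(msg, sizeof(msg), 0 /* priority */);

    char buf[64];
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    mq.receive(buf, sizeof(buf), received, priority);

    std::cout << "got: " << buf << "\n";

    bip::message_queue::remove("demo_queue");
    return 0;
}

Note that it transfers raw bytes rather than typed objects, so you would still
serialize your own message type on top of it.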

>
> I would advise against using threads. Processes and shared memory are
> much easier to maintain, no threading problems, and the performance
>
As soon as you establish shared memory between processes, you open the same
bag of problems as you do by just using threads. Except that shared memory
is in addition much clumsier to use...

>
> In Java using threads is slower than using processes because
> it is faster to have one GC per thread than one GC for
> many threads.
> So in Java using processes will always be faster than using
> threads because of the GC, which kills the performance of threads anyway.
>
Any references to recent benchmarks that can support these claims?


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Chris M. Thomasson on
"Branimir Maksimovic" <bmaxa(a)hotmail.com> wrote in message
news:hgs21l$pm0$1(a)news.albasani.net...
> Andrew wrote:
>> I am designing a system where an app will need to spawn a child thread
>> then the child and parent thread will need to communicate. If this was
>> in java I would use ConcurrentLinkedQueue but what to do in C++? I
>> have googled and searched boost but cannot find anything.
>>
>> There is a class that would serve in ACE but ACE is huge so I do not
>> want to introduce ACE to the project. The project is already using
>> boost and fighting the battle for more boost usage is hard enough.
>>
>> Does anyone know if such a facility is planned for the upcoming std?
>
>> I would advise against using threads. Processes and shared memory are
>> much easier to maintain,

I am curious as to what made you come to that conclusion? Anyway, which one
is easier: creating a dynamic unbounded queue with threads, or with shared
memory and processes? With threads you can get this done rather easily using
pointers. For instance, with threads the nodes and queue anchor might look
like:
_____________________________________________________
struct node
{
    struct node* next;
};


struct queue
{
    struct node* head;
    struct node* tail;
    intra_process_mutex mutex;
};
_____________________________________________________
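

A push under the mutex is then just pointer manipulation; a rough sketch,
assuming intra_process_mutex exposes plain lock()/unlock():
_____________________________________________________
void
queue_push(struct queue* const self, struct node* const n)
{
    n->next = NULL;

    self->mutex.lock();

    /* append to the tail, or make the node the whole queue if empty */
    if (self->tail)
    {
        self->tail->next = n;
    }
    else
    {
        self->head = n;
    }

    self->tail = n;

    self->mutex.unlock();
}
_____________________________________________________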




With processes you are probably going to need to use offsets [*] from a base
memory address; the queue might look like:
_____________________________________________________
struct node
{
    size_t next;
};


struct queue
{
    size_t head;
    size_t tail;
    inter_process_robust_mutex mutex;
    unsigned char* base;
};
_____________________________________________________




and you get at the actual nodes by adding the offsets (e.g., node::next,
queue::head/tail) to the base address (e.g., queue::base) at which the process
mapped the shared memory for the queue. Also, with processes you might
need to handle the case in which a process dies in the middle of accessing
the queue. IMO, it's normal for a process to die in general; however, it's
NOT normal for a thread to just up and die. This is why there are such
things as so-called robust mutexes for inter-process synchronization. You
have to detect and fix up possible corruption of the queue data structure
left in an intermediate state by the dead process. Therefore, IMHO, threads
are easier for me to work with than multiple processes.
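

For what it's worth, the POSIX flavor of a robust mutex looks roughly like
the sketch below (on older systems the *_setrobust calls are the _NP
variants, and repair_queue() here is purely hypothetical):
_____________________________________________________
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

/* lives at the start of the shared memory segment */
struct shm_queue
{
    pthread_mutex_t mutex;
    size_t head;
    size_t tail;
};


/* done once, by the process that creates the segment */
void
shm_queue_init_mutex(struct shm_queue* const self)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&self->mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}


void
shm_queue_lock(struct shm_queue* const self)
{
    if (pthread_mutex_lock(&self->mutex) == EOWNERDEAD)
    {
        /* the previous owner died holding the lock; the queue may be in
           an intermediate state, so fix it up before marking the mutex
           usable again. repair_queue() is a hypothetical fix-up routine. */
        /* repair_queue(self); */
        pthread_mutex_consistent(&self->mutex);
    }
}
_____________________________________________________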


[*] - This is only true if you cannot guarantee that each process maps
memory at the _exact_ same base address.




> no threading problems, and the performance
> you gain from threads does not matter because there is always
> something slow like a hard disk or a database, so there will be
> no difference between processes and threads.

Shared memory and processes should be equal in performance to threads if you
can use pointers. With the offsets you need to perform an addition in order
to get at the node data-structure; something like:
_____________________________________________________
struct node*
queue_get_head(struct queue const* const self)
{
    return (struct node*)(self->base + self->head);
}
_____________________________________________________




Ahh, but you have to set up a special offset value that stands in for NULL. Perhaps:
_____________________________________________________
#define NULL_OFFSET 0xFFFFFFFFU /* sentinel "null" offset; assumes 32-bit size_t */


struct node*
queue_get_head(struct queue const* const self)
{
    if (self->head == NULL_OFFSET) return NULL;

    return (struct node*)(self->base + self->head);
}
_____________________________________________________




Why would you think that all that is easier than using threads? What am I
missing here?


Thanks.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Andrew on
On 24 Dec, 22:31, Zeljko Vrba <mordor.nos...(a)fly.srk.fer.hr> wrote:
> On 2009-12-23, Branimir Maksimovic <bm...(a)hotmail.com> wrote:
> > Andrew wrote:
> >> I am designing a system where an app will need to spawn a child thread
> >> then the child and parent thread will need to communicate. If this was
> >> in java I would use ConcurrentLinkedQueue but what to do in C++? I
> >> have googled and searched boost but cannot find anything.
>
> I'm not very familiar with Java, but Boost.interprocess has message queues:
>
> http://www.boost.org/doc/libs/1_41_0/doc/html/interprocess/synchroniz...

The stuff that Anthony Williams provided suits me better than the
Boost message queue. Anthony's class is templated on the message type,
which is just what I want. This is the way I am used to it working
from when I used a similar facility in ACE. I am using this class,
with the later modification Anthony made to support timed waits, and
it works just fine. Thanks again, Anthony!
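
For anyone else who lands here: this is not Anthony's actual code, but the
general shape of such a templated queue, sketched with Boost.Thread
primitives, is roughly:

#include <deque>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/thread_time.hpp>

template <typename T>
class concurrent_queue
{
public:
    void push(T const& value)
    {
        boost::lock_guard<boost::mutex> lock(mutex_);
        queue_.push_back(value);
        cond_.notify_one();
    }

    // Blocks until an element is available.
    void wait_and_pop(T& value)
    {
        boost::unique_lock<boost::mutex> lock(mutex_);
        while (queue_.empty())
            cond_.wait(lock);
        value = queue_.front();
        queue_.pop_front();
    }

    // Returns false if nothing arrived before the deadline.
    bool timed_wait_and_pop(T& value, boost::system_time const& deadline)
    {
        boost::unique_lock<boost::mutex> lock(mutex_);
        while (queue_.empty())
        {
            if (!cond_.timed_wait(lock, deadline))
                return false;
        }
        value = queue_.front();
        queue_.pop_front();
        return true;
    }

private:
    std::deque<T> queue_;
    boost::mutex mutex_;
    boost::condition_variable cond_;
};

The producer just calls push() and the consumer blocks in wait_and_pop()
(or the timed variant) until something arrives.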

Regards,

Andrew Marlow

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Branimir Maksimovic on
Zeljko Vrba wrote:
> On 2009-12-23, Branimir Maksimovic <bmaxa(a)hotmail.com> wrote:
>> Andrew wrote:
>>> I am designing a system where an app will need to spawn a child thread
>>> then the child and parent thread will need to communicate. If this was
>>> in java I would use ConcurrentLinkedQueue but what to do in C++? I
>>> have googled and searched boost but cannot find anything.
>>>
> I'm not very familiar with Java, but Boost.interprocess has message queues:
>
> http://www.boost.org/doc/libs/1_41_0/doc/html/interprocess/synchronization_mechanisms.html#interprocess.synchronization_mechanisms.message_queue
>
>> I would advise against using threads. Processes and shared memory are
>> much easier to maintain, no threading problems, and the performance
>>
> As soon as you establish shared memory between processes, you open the same
> bag of problems as you do by just using threads. Except that shared memory
> is in addition much clumsier to use...

Hm, with shared memory you don't get surprised when someone links in a
library which is not thread-safe...

>
>> In Java using threads is slower than using processes because
>> it is faster to have one GC per thread than one GC for
>> many threads.
>> So in Java using processes will always be faster than using
>> threads because of the GC, which kills the performance of threads anyway.
>>
> Any references to recent benchmarks that can support these claims?

You don't need references, just common reasoning. The GC is a thread (or
threads) that has to scan memory for unreferenced heap objects. Since the
application threads are roaming through the heap, you have to stop all
threads in order to examine the heap, stack, bss, etc.,
which means that while you are scanning memory the threads are not working...
Either that, or you have to take a lock on every access to any pointer
value in the application...
So if the GC has to release memory often, performance goes down....
And I can't see a simpler and faster way to perform collection than
to stop the program, perform the collection in multiple threads, then
continue the program...
A compacting collector has no other choice, because it has
to update all references in the program...

Greets

--
http://maxa.homedns.org/

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Branimir Maksimovic on
Chris M. Thomasson wrote:
> "Branimir Maksimovic" <bmaxa(a)hotmail.com> wrote in message
> news:hgs21l$pm0$1(a)news.albasani.net...
>> Andrew wrote:
>>> I am designing a system where an app will need to spawn a child thread
>>> then the child and parent thread will need to communicate. If this was
>>> in java I would use ConcurrentLinkedQueue but what to do in C++? I
>>> have googled and searched boost but cannot find anything.
>>>
>>> There is a class that would serve in ACE but ACE is huge so I do not
>>> want to introduce ACE to the project. The project is already using
>>> boost and fighting the battle for more boost usage is hard enough.
>>>
>>> Does anyone know if such a facility is planned for the upcoming std?
>>
>> I would advise against using threads. Processes and shared memory are
>> much easier to maintain,
>
> I am curious as to what made you come to that conclusion? Anyway, which one
> is easier: Creating a dynamic unbounded queue with threads or shared memory
> and processes?

It depends. You can always use cout and a simple pipe; why a queue?
When performance is the concern, vectorized operations on memory,
parallel loops and that sort of thing make sense with threads.
There are many ways to do IPC, depending on the situation.
For example, I had a case where the server version that is PHP, which
with popen starts a C executable that returns its result with printf and
initializes its data on every request, performs
three times faster than a multithreaded Java
server as a search engine...
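
Roughly, the popen side of that looks like this (the "./search" executable
and its plain-text output are just made-up stand-ins):

#include <stdio.h>
#include <string>

// run a hypothetical external search program and collect its stdout;
// note: no shell escaping of the query, this is only a sketch
std::string run_search(const std::string& query)
{
    std::string result;
    std::string cmd = "./search '" + query + "'";

    FILE* p = popen(cmd.c_str(), "r");
    if(!p) return result;

    char buf[4096];
    while(fgets(buf, sizeof buf, p))
        result += buf;

    pclose(p);
    return result;
}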

There is the deque class in the standard library; it is good as a queue, and
I use it all the time...
The OP can lock it with whatever OS mutex he has and that's it...
A vector is also OK for this (push_back/back/pop_back), a linked list, etc...

I don't see a problem here. But since the OP asks this question,
he probably doesn't know what a mutex is...
That's why, if he uses cout/pipe or sockets or something
else, he will save himself a lot of maintenance problems...
> Why would you think that all that is easier than using threads? What am I
> missing here?

Maintenance problems. With processes, there is no problem.
For example, there are pre-forked and pre-threaded versions
of Apache. People prefer the forked version because of the libraries
they have to link in. On my machine the MT server serves
more than 60,000 simple echo requests per second
with 28,000 connections on a single CPU,
which is far more than you need; you rarely get more than 100 requests per
second...

Greets

For the OP, here is code for the queue:
static std::deque<Service*> lstSvc_;
......
Mutex::Mutex()
{
    pthread_mutex_init(&mutex_,0);
}
void Mutex::lock()
{
    int rc=pthread_mutex_lock(&mutex_);
    if(rc)throw Exception("mutex lock error:%s",strerror(rc));
}
void Mutex::unlock()
{
    int rc = pthread_mutex_unlock(&mutex_);
    if(rc)throw Exception("mutex unlock error:%s",strerror(rc));
}
Mutex::~Mutex()
{
    pthread_mutex_destroy(&mutex_);
}

template <class Lock>
class AutoLock{
public:
    explicit AutoLock(Lock& l):lock_(l)
    {
        lock_.lock();
    }
    void lock()
    {
        lock_.lock();
    }
    void unlock()
    {
        lock_.unlock();
    }
    ~AutoLock()
    {
        lock_.unlock();
    }
private:
    AutoLock(const AutoLock&);
    AutoLock& operator=(const AutoLock&);
    Lock& lock_;
};
......

{
    AutoLock<Mutex> l(lstSvcM_);
    if(!lstSvc_.empty())
    {
        s= lstSvc_.front();
        lstSvc_.pop_front();
    }
    else
    {
        more = false;
        s=0;
        continue;
    }
}
.....

case Socket::Reading:
    if(s->doneReading())
    {
        AutoLock<Mutex> l(lstSvcM_);
        lstSvc_.push_back(s);
    }
    else
        pl_.read(s);
    break;

I don't use condition variables; every thread performs both I/O and
service, so for waking the other threads I use:
Poll::Poll(nfds_t size)
    :fds_(new pollfd[size+1]),maxfds_(size),nfds_(0)
{
    if(socketpair(AF_LOCAL,SOCK_STREAM,0,wake_)<0)
        throw Exception("Poll init error: %s",strerror(errno));
    int flags = fcntl(wake_[0], F_GETFL, 0);
    fcntl(wake_[0], F_SETFL, flags | O_NONBLOCK); // set non blocking
    flags = fcntl(wake_[1], F_GETFL, 0);
    fcntl(wake_[1], F_SETFL, flags | O_NONBLOCK); // set non blocking
    AutoLock<Mutex> l(lstPollM_);
    lstPoll_.push_back(this);
}

void Poll::wake()
{
    int rc = ::write(wake_[0],"1",1);
    (void)rc; // best-effort wake-up; ignore EAGAIN on the non-blocking socket
}
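
The reading side is not shown above; roughly, the thread that gets woken has
wake_[1] in its pollfd set and just drains it when it becomes readable
(a sketch, not the original code):

    char buf[64];
    while(::read(wake_[1], buf, sizeof buf) > 0)
        ; // discard the wake-up bytes, then pick up work from lstSvc_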
Hope this helps.

--
http://maxa.homedns.org/

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]