From: Alexei A. Frounze on 12 Sep 2008 05:40

On Sep 12, 1:57 am, "Wolfgang Kern" <nowh...(a)never.at> wrote:
> Rod Pemberton posted:...
> > Look at how long and how many processors have had a carry flag.
> > It's been there since the beginning of microprocessors, yet you can't
> > easily check for integer overflow in C.
>
> Right, and I may only guess why flags didn't make it into HLLs,
> perhaps much too complicated for those who only got a CS-degree ? :)

Guys straight out of college know too little these days, unless they're true geeks and they've gone far beyond what was given in class.

By this same logic, for them we should've eliminated the complex type system in HLLs and kept only one numerical type of a fixed bit size: an unsigned integer. Anything that involves signed values or floating-point numbers is usually too complicated or too hard to get right. :) Oh, and, of course, all pointers should've been removed too. You see, indirection is too complex to understand, let alone keeping track of pointer values and array indices. Maybe arrays should've been banned too, for the sake of security. :) Now, what an easy thing it would be to learn and program in a language with only one data type (unsigned integer)!

Then again, I've seen plenty of people with lots of programming experience (5+ years, full-time positions; with and without CS/EE/ECE degrees, even PhDs) who've done pretty stupid, inexcusable things for someone with that much experience. When people get the basics right, the quality of their software greatly improves. I mean, the bugs become rarer (fewer trivial, embarrassing and irritating bugs) and more related to the actual problems that the software is designed to solve, which makes them in a way more interesting too.

Alex
From: Wolfgang Kern on 12 Sep 2008 06:20

Alexei A. Frounze remote read my brain:
> Rod Pemberton posted:...
>> Look at how long and how many processors have had a carry flag.
>> It's been there since the beginning of microprocessors, yet you can't
>> easily check for integer overflow in C.
> Right, and I may only guess why flags didn't make it into HLLs,
> perhaps much too complicated for those who only got a CS-degree ? :)

I didn't know that you are gifted with PSI :)
You posted what I think.
__
wolfgang

<q Alex>
[full quote of Alexei's post above snipped]
</q>
From: Rod Pemberton on 12 Sep 2008 19:22

"Alexei A. Frounze" <alexfrunews(a)gmail.com> wrote in message
news:401da618-c8d4-482e-911b-cb5d4d827d0f(a)i24g2000prf.googlegroups.com...
On Sep 12, 1:57 am, "Wolfgang Kern" <nowh...(a)never.at> wrote:
> Rod Pemberton posted:...

> we should've eliminated the complex type system in HLLs and only have
> kept one numerical type of a fixed bit size: an unsigned integer.

!!! Didn't know anyone else shared that belief... To me, the "entire world" seems opposed to it.

> Anything that involves signed values ... is usually too complicated or
> too hard to get right. :)

I've seen guys use signed. In fact, you provided a few situations at my request. But, many examples I've seen of C coding errors used a signed type. I.e., I always avoided it to keep my sanity...

> Anything that involves ... floating-point numbers is usually too
> complicated or too hard to get right. :)

I've never really had an important use for floating-point. I'm not saying they aren't needed. They are probably quite useful in financial, scientific, graphing, etc. applications. But, since I don't/haven't done much of that, I avoid them, since they exhibit strange issues in C.

> Oh, and, of course, all pointers should've been removed too.

If you remember our past conversations, I think you know I disagree. I understand that pointers do need to be eliminated for some valid reasons: compiler optimization, security, etc. But, IMO, pointers are the real strength of languages like C and PL/1.

> [...]

Humorous...

> Then again, I've seen plenty of people with lots of programming
> experience (5+ years, full-time positions; with and without CS/EE/ECE
> degrees, even PhDs) who've done pretty stupid inexcusable things for
> someone with that much of experience.

I mentioned to someone, perhaps you, a while ago that sometimes I can't always figure out my own code at a later date. Different mindset. Different awareness of information. Different thoughts spent solving the problem versus rereading the code at a later date. Etc. You can't be considering every potential issue all the time. I.e., there is a limited set of conditions one checks to make sure are correct at any given point in time. At another point in time, those conditions may be different. And, therefore, an error is introduced.

> When people get the basics right, the quality of their software
> greatly improves. I mean, the bugs start to become more rare (fewer
> trivial, embarrassing and irritating bugs) and more related to the
> actual problems that the software is designed to solve, which makes
> them in a way more interesting too.

Unless the entire application was written instantaneously by a single person, there can be unforeseen compound errors from work that was done at a later date. Every issue that was checked for, coded to prevent, in a prior stage of development might not be remembered or reread from comments at a later date. There have been many times I've said: "Why'd I do that? That's not how it would normally be done by me, so I clearly did that for a reason. Why didn't I leave a comment?" And then I have to go tracking down why I did so... and once found, add a comment.

At some later date, the programmer had a different mindset, a different problem to solve, a new set of conditions to check, a good night of sleep, a bad night of sleep, family problems, cold medication, etc. The code has to be reviewed at various points in time sufficiently far apart that there won't be any recollection of what the code should be doing that will influence understanding what the code actually is doing. The code has to be thoroughly tested to find unknown issues. Even then, what you decided to test, decided not to test, or didn't realize you should even attempt to test, affects the results.

In a major corporation, I'd think you'd want a separation of programmers and code reviewers. While the code reviewers may be working as programmers on some other application, you shouldn't allow the code reviewers to program on the application they are reviewing. That way, when they read the code, they can say: "I really don't understand this...?"

Sometimes, even with such testing, the issue won't present itself in an apparent manner for a very long time. This can be a major problem if there is a division of knowledge. The programmer is doing the programming, but the accountant, engineer, or stockbroker is checking the correctness of the program's actions or functions. The non-programmer will likely do a quick, but not a thorough, review, having much other "real" work to do. It's beneath their job description to test applications... I.e., they failed to notice the "bug" and the programmer is completely unaware of the "bug."

Rod Pemberton
From: Alexei A. Frounze on 13 Sep 2008 03:37

On Sep 12, 4:22 pm, "Rod Pemberton" <do_not_h...(a)nohavenot.cmm> wrote:
> "Alexei A. Frounze" <alexfrun...(a)gmail.com> wrote in message
> news:401da618-c8d4-482e-911b-cb5d4d827d0f(a)i24g2000prf.googlegroups.com...
> On Sep 12, 1:57 am, "Wolfgang Kern" <nowh...(a)never.at> wrote:
> > Rod Pemberton posted:...

> > we should've eliminated the complex type system in HLLs and only
> > have kept one numerical type of a fixed bit size: an unsigned
> > integer.
>
> !!! Didn't know anyone else shared that belief... To me, the "entire
> world" seems opposed to it.

It's not a belief, it's just an observation from practice. People don't know sh!t but pretend to, and that leads to many problems.

> > Anything that involves signed values ... is usually too complicated
> > or too hard to get right. :)
>
> I've seen guys use signed. In fact, you provided a few situations at
> my request. But, many examples I've seen of C coding errors used a
> signed type. I.e., I always avoided it to keep my sanity...

IMO (and I think I've already stated that) C has a few fundamental design flaws. I think I know the origins of some of them (C being a high-levelish, somewhat portable ASM), but not everyone shares this knowledge and comes with different semi-(un)justified assumptions, which unfortunately happen to be wrong. C's arithmetic isn't your regular school arithmetic -- that's why signed and unsigned (and floating-point) types are misused and poorly understood.

> > Anything that involves ... floating-point numbers is usually too
> > complicated or too hard to get right. :)
>
> I've never really had an important use for floating-point. I'm not
> saying they aren't needed. They are probably quite useful in
> financial, scientific, graphing, etc. applications. But, since I
> don't/haven't done much of that, I avoid them, since they exhibit
> strange issues in C.

At least you're aware of potential issues in this area. I bet the vast majority isn't. That's in part for the same reasons you state: they start doing lots of f-p calculations with little experience and knowledge, because their prior work didn't need them to deal with f-p and they didn't have formal (or at least self-directed) study of the matter -- they simply use inapplicable assumptions from their school math.

> > Oh, and, of course, all pointers should've been removed too.
>
> If you remember our past conversations, I think you know I disagree.
> I understand that pointers do need to be eliminated for some valid
> reasons: compiler optimization, security, etc. But, IMO, pointers are
> the real strength of languages like C and PL/1.

Exactly, they're a powerful thing, which is dangerous when misused. :)

> > [...]
>
> Humorous...

> > Then again, I've seen plenty of people with lots of programming
> > experience (5+ years, full-time positions; with and without
> > CS/EE/ECE degrees, even PhDs) who've done pretty stupid inexcusable
> > things for someone with that much of experience.
>
> I mentioned to someone, perhaps you, a while ago that sometimes I
> can't always figure out my own code at a later date. Different
> mindset. Different awareness of information. Different thoughts spent
> solving the problem versus rereading the code at a later date. Etc.
> You can't be considering every potential issue all the time. I.e.,
> there is a limited set of conditions one checks to make sure are
> correct at any given point in time. At another point in time, those
> conditions may be different. And, therefore, an error is introduced.

I'm particularly upset about the overflows and unchecked pointers, which don't require much knowledge of the entire system and component interaction, just localized attention.

> > When people get the basics right, the quality of their software
> > greatly improves. I mean, the bugs start to become more rare (fewer
> > trivial, embarrassing and irritating bugs) and more related to the
> > actual problems that the software is designed to solve, which makes
> > them in a way more interesting too.
>
> Unless the entire application was written instantaneously by a single
> person, there can be unforeseen compound errors from work that was
> done at a later date. Every issue that was checked for, coded to
> prevent, in a prior stage of development might not be remembered or
> reread from comments at a later date. There have been many times I've
> said: "Why'd I do that? That's not how it would normally be done by
> me, so I clearly did that for a reason. Why didn't I leave a
> comment?" And then I have to go tracking down why I did so... and
> once found, add a comment.
>
> At some later date, the programmer had a different mindset, a
> different problem to solve, a new set of conditions to check, a good
> night of sleep, a bad night of sleep, family problems, cold
> medication, etc. The code has to be reviewed at various points in
> time sufficiently far apart that there won't be any recollection of
> what the code should be doing that will influence understanding what
> the code actually is doing. The code has to be thoroughly tested to
> find unknown issues. Even then, what you decided to test, decided not
> to test, or didn't realize you should even attempt to test, affects
> the results.

It's all understandable. I'm just amazed at the amount of *trivial* bugs.

> In a major corporation, I'd think you'd want a separation of
> programmers and code reviewers. While the code reviewers may be
> working as programmers on some other application, you shouldn't allow
> the code reviewers to program on the application they are reviewing.
> That way, when they read the code, they can say: "I really don't
> understand this...?"

Sure, having several people review the code is a good thing. But the separation shouldn't be total. You don't want people completely unfamiliar with the code to review it. I mean, they will have limited understanding of the code and therefore either won't go deep enough or will need a significant amount of time. Likewise, it's pretty useless to give the code for review to somebody with poor skills. I've seen that happen.

> Sometimes, even with such testing, the issue won't present itself in
> an apparent manner for a very long time. This can be a major problem
> if there is a division of knowledge. The programmer is doing the
> programming, but the accountant, engineer, or stockbroker is checking
> the correctness of the program's actions or functions. The
> non-programmer will likely do a quick, but not a thorough, review,
> having much other "real" work to do. It's beneath their job
> description to test applications... I.e., they failed to notice the
> "bug" and the programmer is completely unaware of the "bug."

It's true that programmers often implement things they don't fully understand. :) And if they are inattentive or the specs are ambiguous, there will be bugs, which may only be spotted by those with the domain knowledge.

Alex
From: Rod Pemberton on 13 Sep 2008 16:16
"Alexei A. Frounze" <alexfrunews(a)gmail.com> wrote in message
news:5f7743d0-c325-459d-ab37-722011d28224(a)w1g2000prk.googlegroups.com...
> On Sep 12, 4:22 pm, "Rod Pemberton" <do_not_h...(a)nohavenot.cmm> wrote:
> > "Alexei A. Frounze" <alexfrun...(a)gmail.com> wrote in message
> > news:401da618-c8d4-482e-911b-cb5d4d827d0f(a)i24g2000prf.googlegroups.com...

> > > Oh, and, of course, all pointers should've been removed too.
> >
> > If you remember our past conversations, I think you know I
> > disagree. I understand that pointers do need to be eliminated for
> > some valid reasons: compiler optimization, security, etc. But, IMO,
> > pointers are the real strength of languages like C and PL/1.
>
> Exactly, they're a powerful thing, which is dangerous when misused. :)

Well, a coworker once said to me: "You're only as good as your tools." I learned that wasn't entirely true. Skills and knowledge have a large effect also. But, good tools sure do help.

> I'm particularly upset about the overflows and unchecked pointers,
> which don't require much knowledge of the entire system and component
> interaction, just localized attention.

I think that whether to do so or not should be decided upon prior to implementing a project. Then the code is being checked along the way, and all the programmers are using the same methods. On a large application, doing such checks could slow the application down to unacceptable levels. If implemented much later on, there is always a chance the code wasn't corrected properly or completely. Another individual recently indicated to me he was a firm believer that the compiler should handle this... I argued that good programming, not relying on the compiler too much, was important too. I didn't tell him I also believe an important part of programming is placing your knowledge into code.

> It's all understandable. I'm just amazed at the amount of *trivial*
> bugs.

Webpage? :) I posted this link a while back. Have you seen it?
http://www.strauss.za.com/sla/code_std.html

> > In a major corporation, I'd think you'd want a separation of
> > programmers and code reviewers. While the code reviewers may be
> > working as programmers on some other application, you shouldn't
> > allow the code reviewers to program on the application they are
> > reviewing. That way, when they read the code, they can say: "I
> > really don't understand this...?"
>
> Sure, having several people review the code is a good thing. But the
> separation shouldn't be total. You don't want people completely
> unfamiliar with the code to review it. I mean, they will have limited
> understanding of the code and therefore either won't go deep enough
> or will need a significant amount of time.

True. But, if they have a pre-existing understanding of the code, they might not recognize an error as an error. Their mindset has been biased towards correctness by their previous exposure to the code. Where is the balance point?

> > Sometimes, even with such testing, the issue won't present itself
> > in an apparent manner for a very long time. This can be a major
> > problem if there is a division of knowledge. The programmer is
> > doing the programming, but the accountant, engineer, or stockbroker
> > is checking the correctness of the program's actions or functions.
> > The non-programmer will likely do a quick, but not a thorough,
> > review, having much other "real" work to do. It's beneath their job
> > description to test applications... I.e., they failed to notice the
> > "bug" and the programmer is completely unaware of the "bug."
>
> It's true that programmers often implement things they don't fully
> understand. :) And if they are inattentive or the specs are
> ambiguous, there will be bugs, which may only be spotted by those
> with the domain knowledge.

Which is worse? A programmer programming what they don't understand, or a non-programmer domain expert attempting to program?

Rod Pemberton