From: Howard Brazee on
On Fri, 28 May 2010 12:20:35 -0700 (PDT), "robertwessel2(a)yahoo.com"
<robertwessel2(a)yahoo.com> wrote:

>But we generally *do* count the code in embedded systems, but it
>doesn't make all that much of a difference to the totals.

Which leads back to my original question of how we are totaling this.
If each chip is a program and we are counting programs, the total is
different than if we are counting lines of code.

Is "lines of code" a better measurement?

Or is it "unique programs"?

Is an object used by multiple applications to be counted once or
multiple times?

I doubt we will come to a meaningful agreement here.

--
"In no part of the constitution is more wisdom to be found,
than in the clause which confides the question of war or peace
to the legislature, and not to the executive department."

- James Madison
From: Anonymous on
In article <2k9a0616kaoqf9nm5s41fjo09h4r1kisrm(a)4ax.com>,
Howard Brazee <howard(a)brazee.net> wrote:
>On Tue, 1 Jun 2010 04:24:10 -0700 (PDT), Alistair Maclean
><alistair.j.l.maclean(a)googlemail.com> wrote:
>
>>If we can not get our definitions right, etc., then is it more a
>>wonder that our code ever works?
>
>Often times it doesn't matter that definitions can vary across the
>industry.
>
>If my user calls a file a database, and I understand what he means, I
>will work with him.

On the one hand, 'the meaning of a word is in its use', as Wittgenstein
put it... on the other hand, if calling a file a database carries the
expectation that the file can be manipulated with the same amount of
effort as a database, then some disappointment might result.
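
To make the gap concrete, here is a minimal sketch (the table, field,
and file layout are all invented for illustration):

# With a database, the engine does the work:
import sqlite3
conn = sqlite3.connect("customers.db")
rows = conn.execute(
    "SELECT name FROM customers WHERE balance > 100").fetchall()

# With a flat file, *you* do the parsing, typing, and filtering:
rows = []
with open("customers.dat") as f:
    for line in f:
        name, balance = line.rstrip("\n").split("|")
        if float(balance) > 100:
            rows.append(name)

Both end up with the same rows; only one of them makes you build the
database machinery yourself.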

DD

From: SkippyPB on
On Tue, 01 Jun 2010 09:32:09 -0600, Howard Brazee <howard(a)brazee.net>
wrote:

>On Fri, 28 May 2010 12:20:35 -0700 (PDT), "robertwessel2(a)yahoo.com"
><robertwessel2(a)yahoo.com> wrote:
>
>>But we generally *do* count the code in embedded systems, but it
>>doesn't make all that much of a difference to the totals.
>
>Which leads back to my original question of how we are totaling this.
> If each chip is a program and we are counting programs, the total is
>different than if we are counting lines of code.
>
>Is "lines of code" a better measurement?
>
>Or is it "unique programs"?
>
>Is an object used by multiple applications to be counted once or
>multiple times?
>
>I doubt we will come to a meaningful agreement here.

Since a chip, and thus the "code" that resides on it, can be mass
produced with the same code every time, counting each chip skews the
totals.  I think the better method would be to count unique programs
and the lines in each of them to reach the totals.  If it even matters
at all what these totals are.
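
Something like this toy sketch (file names invented) is what I have
in mind - identical images on mass-produced chips count once:

import hashlib

# One entry per manufactured device; two devices run the same image.
images = ["chip_a.bin", "chip_a.bin", "chip_b.bin"]

unique = set()
for path in images:
    with open(path, "rb") as f:
        unique.add(hashlib.sha256(f.read()).hexdigest())

print(len(unique), "unique programs across", len(images), "devices")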

Regards,
--

   ////
  (o o)
-oOO--(_)--OOo-


"They say that patriotism is the last refuge
To which a scoundrel clings
Steal a little and they throw you in jail,
Steal a lot and they make you a king."
-- Bob Dylan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Remove nospam to email me.

Steve
From: robertwessel2 on
On May 29, 10:39 am, SkippyPB <swieg...(a)Nospam.neo.rr.com> wrote:
> On Fri, 28 May 2010 12:20:35 -0700 (PDT), "robertwess...(a)yahoo.com"
>
> <robertwess...(a)yahoo.com> wrote:
> >On May 28, 9:14 am, Howard Brazee <how...(a)brazee.net> wrote:
> >> On Thu, 27 May 2010 15:15:46 -0700 (PDT), "robertwess...(a)yahoo.com"
>
> >> <robertwess...(a)yahoo.com> wrote:
> >> >This statistic is often told as "80% of active code" or "80 percent of
> >> >the world's data" or something like that.  One of those might have been
> >> >true in 1980, but now it's just BS.
>
> >> We would need a definition of both "code" and "programs" that we
> >> could agree upon.    But it is clear that such definitions don't
> >> really apply.
>
> >> And certainly we aren't counting microcode in radios, stoplights,
> >> phones, etc.    
>
> >Very little of that is what's properly called microcode.  Most
> >embedded code runs on fairly conventional processors (quite small
> >processors, in some cases), and is not microcode.  Certainly most
> >stoplights don't have any "real" microcode (except what might be
> >embedded in the CPU), radios (including cell phones) might well have
> >some of the signal processing side driven by microcode, but the vast
> >majority of the code they run is ordinary (the iPhone, for example, is
> >basically a thin version of MacOS, Android is a Linux port).
>
> >But we generally *do* count the code in embedded systems, but it
> >doesn't make all that much of a difference to the totals.  Small
> >embedded systems tend to have relatively small amounts of code
> >(although they're sometimes deployed on very large numbers of devices
> >- whatever code Apple wrote for the iPod version X only counts once,
> >even if they did sell 50 million of them).  Larger embedded systems
> >tend to look a lot like any other programming environment (consider
> >the 3270 emulator on your iPhone - other than being targeted at a
> >fairly small platform, it's not written any differently than your 3270
> >emulator for your PC or Mac).  And while many embedded systems have
> >unusual requirements (realtime, reliability, etc.), the larger the
> >system, the more localized those requirements are.
>
> Here is the definition of microcode as stated by PC Magazine:
>
> Definition of: microcode
>
> A set of elementary instructions in a complex instruction set computer
> (CISC). The microcode resides in a separate high-speed memory and
> functions as a translation layer between the machine instructions and
> the circuit level of the computer. Microcode enables the computer
> designer to create machine instructions without having to design
> electronic circuits. Writing microcode is called "microprogramming,"
> and the microcode for a given computer is called a "microprogram."
>
> RISC computers do not use microcode, which is the reason why RISC
> compilers generate more instructions than CISC compilers.


At best that definition is seriously obsolete, and I'd say it would
have been too much of an oversimplification at any time.

Most current x86s, for example, decode and execute most instructions
without any microcode, but can split some complex instructions into
several smaller ops (arguably microcode-ish), and execute still others
with some sort of internal sequencer. And while there is a facility
(at least in some x86s) to trap any opcode to microcode (so that a bug
can be patched in the field), it's not something that Intel (or AMD)
*wants* to use.

On the flip side, many RISC processors can do exactly the same thing -
POWER7, for example, can both split certain instructions into a
sequence of several primitives (again, whether or not that's what we
traditionally called microcode is a point open to debate), and
*combine* certain (presumably common) sequences into a single "macro"
operation.
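
Very roughly, the idea is something like this toy decoder (not any
real ISA - the opcode names and splits are invented):

# A "complex" op is cracked into primitive internal ops...
def crack(insn):
    if insn[0] == "ADD_MEM":                  # add [addr], reg
        _, addr, reg = insn
        return [("LOAD", "tmp", addr),
                ("ADD", "tmp", reg),
                ("STORE", addr, "tmp")]
    return [insn]

# ...while a common compare-then-branch pair is fused into one
# "macro" op, POWER7-style.
def fuse(ops):
    out, i = [], 0
    while i < len(ops):
        if i + 1 < len(ops) and ops[i][0] == "CMP" and ops[i + 1][0] == "BR":
            out.append(("CMP_BR",) + ops[i][1:] + ops[i + 1][1:])
            i += 2
        else:
            out.append(ops[i])
            i += 1
    return out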

Nor does microcode really "enable the computer designer to create
machine instructions without having to design electronic circuits."
While that sort of thing has been attempted once or twice (notably by
Burroughs), almost all real microcode implementations are very heavily
designed to support a specific ISA.

While it's certainly true that microcode, in the traditional sense of
directly controlling busses and gates inside the CPU, was more often
implemented in CISC devices, and avoidance of microcode was an early
justification for RISC, there were plenty of CISC processors
implemented without it.

Nor is most current microcode much like what we traditionally called
microcode. Consider the "millicode" of several recent Z systems. On
the z10, for example, some three-quarters of the nearly 900
instructions are implemented in hardware (without microcode). The
other 25% are emulated in a special machine mode where most of the
"hardware" instructions are available (along with some special one for
that mode), as well as some additional registers, and the millicode
looks mostly like ordinary zSeries assembler. As a general concept,
traditional microcode doesn't really work in the context of
superscalar execution, or very deep pipelines. What is present on
(many of) those sorts of processors are ways to generate sequences of
internal-use-only instructions for a given ISA instruction, but those
logically (and physically) operate more like "real" instructions than
traditional microcode does.
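
In spirit it works something like this (grossly simplified, and the
opcode assignments are invented for the sketch):

def hw_load_register(regs, dst, src):    # a "hardwired" instruction
    regs[dst] = regs[src]

def mc_multiply(regs, dst, src):
    # An "emulated" instruction, composed from the same ordinary
    # operations available in the special millicode mode.
    regs[dst] = regs[dst] * regs[src]

HARDWIRED = {"LR": hw_load_register}     # roughly 75% of the opcodes
MILLICODE = {"MR": mc_multiply}          # the rest trap to millicode

def execute(opcode, regs, dst, src):
    handler = HARDWIRED.get(opcode) or MILLICODE[opcode]
    handler(regs, dst, src)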

But there is considerable fuzz in the definition of microcode. IBM
*used* to call almost any internal code in a device "microcode,"
whether or not it was microcode-like or instruction-like, until
licensing issues caused them to change the name to "Licensed Internal
Code." (The problem being that "microcode" is clearly part of a
device, and IBM pretty much had to supply the microcode to anyone
owning a bit of their hardware, whereas with LIC, IBM could separately
license the code from the hardware). As another example of the
general confusion, consider the code that implements LPARs on zSeries
machines - the vast majority is plain S/370 assembler (or the same
generated from a higher level language).  Sure there's a lot of
hardware associated with LPAR support, but there's a whole bunch of
"LIC" as well.

And for a related discussion, see the innumerable threads about
whether or not PALcode on Alpha was, or was not, microcode.
From: robertwessel2 on
On Jun 1, 10:32 am, Howard Brazee <how...(a)brazee.net> wrote:
> On Fri, 28 May 2010 12:20:35 -0700 (PDT), "robertwess...(a)yahoo.com"
>
> <robertwess...(a)yahoo.com> wrote:
> >But we generally *do* count the code in embedded systems, but it
> >doesn't make all that much of a difference to the totals.
>
> Which leads back to my original question of how we are totaling this.
>  If each chip is a program and we are counting programs, the total is
> different than if we are counting lines of code.
>
> Is "lines of code" a better measurement?
>
> Or is it "unique programs"?
>
> Is an object used by multiple applications to be counted once or
> multiple times?


That's pretty silly - nobody counts code like that. That would lead
to absurdities like counting the code implementing CICS 10,000 times,
once for each mainframe CICS is running on. If Windows (or zOS) is 50
million lines of code, it's universally counted as 50 million lines,
regardless of whether it's running on a billion systems, ten thousand systems,
or on one system.