From: Brian Austin on
I understand that Microsoft bought FoxPro because they wanted to get
their hands on the Rushmore optimizer.

- Brian

On 3 Aug 2010 18:32:25 GMT, "David W. Fenton"
<NoEmail(a)SeeSignature.invalid> wrote:

>"Albert D. Kallal" <PleaseNOOOsPAMmkallal(a)msn.com> wrote in
>news:PfE5o.56064$YX3.29616(a)newsfe18.iad:
>
>[]
>
>> So, FoxPro + SQL was great for reports, as back then the handling of
>> relational data in dBase-like products was poor. You had
>> nothing to pull data from multiple tables together. Adding
>> SQL to FoxPro really did the trick, and I instantly loved it.
>
>But that's really the point -- SQL was added on top of the existing
>database engine, whereas with Jet, I'm pretty sure it's baked into
>the low-level interfaces to the database engine.
>
>[]
>
>> This design decision is also why Access as a development product
>> was able to adopt ADO many years later. FoxPro was forevermore
>> tied to the data engine, and code such as browse or "while
>> each record" loops was really from a file-based era.
>
>I don't really see your point on ADO, because that's at a level
>higher than the actual database engine itself, i.e., an abstraction
>layer. And one of the reasons ADO doesn't work so well is precisely
>because it's not designed around Jet. Access uses Jet in every way,
>and DAO is built around Jet, so it's an ideal fit. ADPs didn't work very
>well because you had to add more layers in between Access and SQL
>Server than were there using Jet (and there are certainly more
>layers than are involved with Access's native database engine).
>
>I think the design of Jet was the real success, in that it was
>created with ISAMs and ODBC support and a whole host of tools for
>accessing data in other formats and presenting it within Access in
>the same fashion so that all was accessible via SQL. This was as
>much because VB needed a database layer as because of the needs of
>Access users/developers, it seems to me, and Access benefited from the
>fact that MS's flagship development languages needed an easy data
>access layer that could deal with just about any kind of data you
>could think of.
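
The "everything accessible via SQL" design described above is visible
directly from VBA: Jet's installable ISAMs let a plain SQL statement
read an external file as if it were a table. A minimal sketch -- the
file path, file name, and column name here are hypothetical:

```vba
' Query a hypothetical CSV file through Jet's Text ISAM.
' Note the connect string inside the FROM clause, and that the
' file extension's dot becomes "#" in Jet SQL.
Dim db As DAO.Database
Dim rs As DAO.Recordset
Set db = CurrentDb()
Set rs = db.OpenRecordset( _
    "SELECT * FROM [Text;HDR=Yes;DATABASE=C:\Data].[Customers#csv]")
Do While Not rs.EOF
    Debug.Print rs!CustomerName   ' assumed column name
    rs.MoveNext
Loop
rs.Close
```

The same pattern works for the other ISAMs (dBase, Excel, etc.), which
is the "host of tools for accessing data in other formats" in action.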
>
>ADO is just an abstraction layer like ODBC, it seems to me, and really
>there is nothing special about Access being able to use it, except
>that they wired it into some places like form/report recordsets.
>But I also note that this is one of the least reliable parts of
>the ADO integration, as it's hard to predict exactly how things will
>behave.
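
That form-recordset wiring is exposed in VBA through the form's
Recordset property. A minimal sketch, assuming a hypothetical local
table named tblOrders (whether the result is updatable depends on the
provider and cursor settings):

```vba
' In an Access form's class module: bind the form to an ADO recordset
' instead of a RecordSource string.
Private Sub Form_Open(Cancel As Integer)
    Dim rs As New ADODB.Recordset
    rs.CursorLocation = adUseClient
    rs.Open "SELECT * FROM tblOrders", CurrentProject.Connection, _
            adOpenStatic, adLockOptimistic
    Set Me.Recordset = rs
End Sub
```

The unpredictability David mentions shows up exactly here: the same
assignment can yield an updatable or read-only form depending on the
provider and cursor combination.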
>
>[]
>
>> And, today we see Access + JET with disconnected table support
>> (SharePoint). However, the local data store is still high
>> performance JET tables.
>
>I am very glad that they have made this decision instead of going
>with some other kind of local data store because it means we get the
>best of all possible worlds. That is, Jet gets extended for
>compatibility with SharePoint (table-level data macros!!! Huge!),
>but at the same time remains a usable desktop standalone database
>engine.
>
>As long as MS needs it for the local data store, it will continue to
>be developed, which means we are going to continue to be able to use
>it for standalone development (the meat and potatoes of my client
>base, for instance).
>
>What this means is that Microsoft's enterprise agenda this time
>around *benefits* Access in ways that also benefit me and my client
>base, something that did not happen at all the previous time MS made
>huge changes to Access to reflect its enterprise agenda (i.e.,
>A2000, with ADPs and ADO).
>
>[]
>
>> For Access 2010 we have a new version of VBA, and it does now have
>> a pointer data type (and also a new LongLong data type). If we
>> had had a pointer data type from day one in Access, then the
>> transition to 64-bit Windows + API calls would have been far
>> easier. As it is now, the best advice I have is to avoid API code
>> in Access when possible.
>
>Well, it seems to me they've made it as easy as possible to adjust
>so it will run under 64-bit, don't you think?




From: Access Developer on
"Brian Austin" <bbaustin1(a)att.net> wrote

> I understand that Microsoft bought FoxPro
> because they wanted to get their hands on
> the Rushmore optimizer.

They did take advantage of owning the Rushmore technology by including it in
the Jet database engine that Access used/uses by default. That doesn't mean
it was the only reason they bought FoxPro, and I don't think Microsoft
normally elaborates on all facets of its business decisions.

They don't even tell us _why_ they make many of the design decisions that
are in every release of every product.

I can understand that there would be many reasons for not doing so.

--
Larry Linson, Microsoft Office Access MVP
Co-author: "Microsoft Access Small Business Solutions", published by Wiley
Access newsgroup support is alive and well in USENET
comp.databases.ms-access


From: James A. Fortune on
On Aug 1, 4:20 pm, "Albert D. Kallal" <PleaseNOOOsPAMmkal...(a)msn.com>
wrote:

> You'll know the design bits and parts are future-proof, since Microsoft and
> the rest of the computing industry is pushing cloud computing in a big way.
>
> I could write on ad nauseam here and put out another 200 issues as to why
> this one statement is significant, but I'll let you draw the conclusions as
> to what this means for the future of the product (it spells good news for
> Access).

Albert,

I am convinced that Microsoft has an overall plan. I am also
convinced that, as the spokesman for that plan, you have the requisite
loquacity to communicate that plan in detail. The disconnected
recordset is similar to the Groove technology acquired by Microsoft in
2005. Maybe it's more like binding to a temporary local table that
gets synch'ed back. I can't say I haven't used that technique. Is
SQL Server really the value proposition that Microsoft is trying to
make in all this? I think not. Microsoft has made its money by
relying on companies to take the easy way out most of the time. The
cloud will make it so that you don't have to have much of a network
anymore if you don't want to support one. You can have Microsoft
handle the software, the hardware, and everything in between. It's a
good strategy by Microsoft, and I don't blame them for pursuing it. I
don't mind so much because those aren't my kind of customers. But as
these services continue to be bundled up in attractive packages, even
the most self-reliant companies will consider some of them. In terms
of price, options, flexibility, etc., I made my money by
providing the glue between what companies needed and what Microsoft
provided. Once I'm familiar enough with the new Access strategy, I
will concentrate on what needs to be done to make it flexible. Open
standards give me hope that the hooks Microsoft has built into Access
web will allow me to do that. But without a customer specifically
asking for Access web, I have to weigh my options carefully.
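
The temporary-local-table technique mentioned above might be sketched
like this -- table and field names are hypothetical, with lnkOrders a
linked (remote) table and tmpOrders a local copy keyed on OrderID:

```vba
' Pull remote rows into a local scratch table, let the user work
' against the fast local copy, then push changes back.
Sub PullToLocal()
    CurrentDb.Execute "DELETE FROM tmpOrders", dbFailOnError
    CurrentDb.Execute _
        "INSERT INTO tmpOrders SELECT * FROM lnkOrders", dbFailOnError
End Sub

Sub PushChangesBack()
    ' Naive sync: overwrite remote values from the local copy.
    ' A real implementation would track which rows actually changed
    ' and handle conflicts.
    CurrentDb.Execute _
        "UPDATE lnkOrders INNER JOIN tmpOrders " & _
        "ON lnkOrders.OrderID = tmpOrders.OrderID " & _
        "SET lnkOrders.Qty = tmpOrders.Qty", dbFailOnError
End Sub
```

This is essentially a hand-rolled version of what the SharePoint-backed
disconnected tables do automatically.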

> Well, Access web is all running on the latest servers and .NET stuff,
> so Access web services will most certainly take advantage of multiple cores.

I hope so.

> And on the Access client applications, I can imagine current processor
> speeds are not enough. For the most part, Access tends to be I/O bound
> and subject to other issues that slow things down anyway. I don't think
> we've been CPU bound in Access for probably about 10 years now.

I/O bound issues will become even greater with cloud computing. The
amount of data allowed "on the wire" will continue to be an
increasingly onerous constraint. But there is still much to be gained
in certain situations by utilizing all that client-side parallel
computational power.

James A. Fortune
CDMAPoster(a)FortuneJames.com