From: Albert D. Kallal on
"James A. Fortune" <CDMAPoster(a)FortuneJames.com> wrote in message
news:d0d8611c-ae7d-4be1-a9eb-441cee7e15ff(a)x21g2000yqa.googlegroups.com...


> RDL seems reminiscent of web servers requiring IIS, web
> applications that require Internet Explorer, and HTML created by Word
> that was beyond hideous.

RDL is just the Report Definition Language for SQL Server Reporting
Services.

I mean, what do you suggest for web-based reporting systems right now, then?

For Access, since we're using SQL Server Reporting Services, the
results render just fine in Firefox or Safari.

No ActiveX, no Silverlight, nothing except a standard web browser
is required here.

I am quite hard pressed to find something with more utility than SQL
Server Reporting Services.

Perhaps you have some other suggestion as to what you use for your reporting
platform. Using SQL Server's web Reporting Services seemed like a rather
great choice by the Access team to utilize here.

SQL Server Reporting Services is in widespread use. It will convert your
report into Word with PERFECT rendering, including all graphics, and the
same goes for Excel and PDF. You can even do some very decent report
printing from inside most browsers. If you have done any web reporting, you
know that building and setting up reports for the web can be painful, but
with SQL Server Reporting Services it's a pretty slick setup.

So, SQL Server Reporting Services can pretty much be consumed by standard
browsers, from my iPad to even Ubuntu on a Linux box.
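As a sketch of how little plumbing that consumption needs: a report can even be requested in a given export format straight from its URL. The server and report names below are hypothetical placeholders; rs:Format is the standard Reporting Services URL-access rendering parameter.

```vba
' Sketch: open an SSRS report rendered as PDF directly via URL access.
' "myserver" and the "/Sales/Invoice" report path are hypothetical.
Public Sub OpenInvoiceAsPdf()
    ' rs:Format selects the renderer (PDF here; WORD and EXCEL also exist)
    Application.FollowHyperlink _
        "http://myserver/ReportServer?/Sales/Invoice&rs:Format=PDF"
End Sub
```

The same URL with rs:Format=WORD or rs:Format=EXCEL drives the Word and Excel renderers mentioned above.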

> Right now, to me "Cloud Computing" is
> equivalent to "Slow Computing."

Well, you can run your own private cloud if you want. However, the instant I
state cloud computing, you know several things:

You'll know that the application is going to be massively scalable
horizontally (large numbers of users).

You'll know the design bits and parts are future proof, since Microsoft and
the rest of the computing industry are pushing cloud computing in a big way.

I could go on ad nauseam here and list another 200 reasons why this one
statement is significant, but I'll let you draw your own conclusions as to
what this means for the future of the product (it spells good news for
Access).

I mean, when Access first came out, it separated the coding system and the
database engine (DAO) into two parts; 10 years later that design decision
had a significant impact on the future of Access as a product. That
architecture meant that we were not dealing with record numbers at a
conceptual level in application programming (contrast that with something
like FoxPro or other products of the day, where record numbers were very
much used and you dealt with data very much like punched cards, in order -
the order of the data did matter back then!).

It was a strategic long-term decision that would bear fruit many years down
the road.

Today, we are just witnessing the 10th anniversary of .NET. The design
decisions they made 10 years ago are REALLY starting to look good today.
The issues of different processors, multiple processors, the transition to
64-bit computing, web-based technologies, security, etc. are all seamless
due to .NET having been adopted.

So, I am just explaining that there are architectural decisions being made
that will help Access 5 or 10 years down the road from now, and that is
important, just like the decision to split out the database engine and
adopt an architecture that would be suitable for client-server database
applications when Access first came out. Back then we did not have many
client-server options to speak of.

A good number of the database products from the era when Access came out
don't exist today, and the reason is that many struggled to work correctly
with client-server architectures. The fact that Access from day one adopted
SQL as its main database language behind the scenes was another strategic
long-term advantage.

So when I say that Access web is built around architectures that will
support cloud computing, I'm talking about 10 years down the road, when
it'll become clear to most people what these advantages are. I can see them
now, just as I saw the advantages of how JET worked with Access: it was
clear those choices would bear fruit 10 years down the road. So, there are
significant architectural decisions being made now that will change how you
view the product and what you do with it five or 10 years from now.

> Even with
> SQL Server's capabilities, bound data outside a LAN doesn't seem to be
> a good idea.

Right, and that's why the Access data model is a disconnected, replicated
model when it uses those web services from the client.

So when you run bound forms in the Access client now, it's a disconnected
model when using web services.

You can pull the network plug while you're running the application and it
will flip into offline mode automatically; your application will continue
to run. When your connection is restored, the data starts to synchronize
and flow again. This also means the whole thing works seamlessly over
Wi-Fi, or even in those cases where your connection is intermittent.

> Using Access to interface
> with SAP is considered to be too much of a security risk.

Unfortunately, the ability to do the above with Access web services was cut
on the web side during beta testing.

As mentioned, a good number of us are asking the Access team to reintroduce
this feature, and not to wait till the next version of Access to give us
the ability to use the web side to connect to these external data sources.
It's a big feature we lost, and we all wanted it badly. To me this is the
number one feature we lost that would really have made a huge impact.

> I'd
> still love to find a way to use .NET to have Access take advantage of
> multiple client CPU cores

Well, Access web runs on the latest servers and .NET stack, so Access web
services will most certainly take advantage of multiple cores.

And for the Access client applications, current processor speeds are more
than enough. For the most part, Access tends to be I/O bound, with other
issues slowing things down anyway. I don't think we have been CPU bound in
Access for probably about 10 years now.


Albert K.

From: David W. Fenton on
"Albert D. Kallal" <PleaseNOOOsPAMmkallal(a)msn.com> wrote in
news:dGk5o.50882$0A5.1849(a)newsfe22.iad:

> I mean, when Access first came out, it separated the coding system
> and the database engine (DAO) into two parts; 10 years later that
> design decision had a significant impact on the future of Access as
> a product. That architecture meant that we were not dealing with
> record numbers at a conceptual level in application programming
> (contrast that with something like FoxPro or other products of the
> day, where record numbers were very much used and you dealt with
> data very much like punched cards, in order - the order of the data
> did matter back then!).

I think you've misidentified the reason why Access/Jet lacked row
numbers. It's not because of the decoupling of the database engine and
the programming environment, but because database interaction is via
SQL, and by definition SQL has no row numbers, because it is set-based
retrieval of data; also by definition, the actual storage of the data
is irrelevant to the set-based retrieval of it.
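To make that concrete, here is a minimal sketch of set-based retrieval through Jet/DAO. The table and field names are made up for illustration; the point is that nowhere does a record number appear.

```vba
' Sketch of set-based retrieval via DAO: you describe the set you want,
' never "record number N". "Customers", "City" and "CompanyName" are
' hypothetical names.
Public Sub ListLondonCustomers()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb()
    ' The WHERE clause defines the set; physical storage order is irrelevant.
    Set rs = db.OpenRecordset( _
        "SELECT CompanyName FROM Customers WHERE City = 'London'")
    Do While Not rs.EOF
        Debug.Print rs!CompanyName
        rs.MoveNext
    Loop
    rs.Close
End Sub
```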

I'm glad they made the choice to go with SQL, as it was a fairly new
thing to do at that point in the desktop database world. SQL may be
ubiquitous now, but it was only found in big-iron databases back
then, so far as I'm aware.

--
David W. Fenton http://www.dfenton.com/
contact via website only http://www.dfenton.com/DFA/
From: Albert D. Kallal on
"David W. Fenton" <NoEmail(a)SeeSignature.invalid> wrote in message
news:Xns9DC7D44C8C847f99a49ed1d0c49c5bbb2(a)74.209.136.88...

> I think you've misidentified the reason why Access/Jet lacked row
> numbers. It's not because of the decoupling of the database engine
> and the programming environment, but because database interaction
> is via SQL, and by definition SQL has no row numbers, because it is
> set-based retrieval of data; also by definition, the actual storage
> of the data is irrelevant to the set-based retrieval of it.

Sure, I can go with the above. At the end of the day, that decision
would affect the product years down the road.

> I'm glad they made the choice to go with SQL, as it was a fairly new
> thing to do at that point in the desktop database world. SQL may be
> ubiquitous now, but it was only found in big-iron databases back
> then, so far as I'm aware.

Well, in fact my first exposure to SQL was with FoxPro for DOS
(I learned SQL on FoxPro). However, I was doing most of my
work on Pick systems back then, and that stuff was all query
based (just not the SQL query language, though something very
similar). So, SQL on a PC was, for me, like a fish taking to water.

In other words, even FoxPro (1990) had started adopting SQL. And yes,
the "big" push for SQL back then was coming from IBM and the big-iron
builders.

However, while FoxPro allowed SQL for data retrieval, it still
was hampered by its original design.

So, FoxPro + SQL was great for reports, as back then the handling of
relational data in dBase-like products was poor. You had nothing to
pull data together from multiple tables. Adding SQL to FoxPro really
did the trick, and I instantly loved it.

FoxPro was still saddled with that previous-generation architecture in
which record numbers and file-based systems were how PC desktop
systems worked.

While Access was file based also, its architecture was advanced. The
big news back then was that Access had variable-length records. With
variable-length records, the concept of fixed blocks of data in a file
(and thus data order) did not matter (in fact, could not matter). And
when you go multi-user, if someone deletes a record, physical order
again can't matter. All of these issues can be handled without having
adopted SQL.

So, sure, having to use SQL to get data out is much of the reason the
data abstraction layer existed, but it was still the correct
underlying design anyway.

So, while FoxPro did in fact adopt and integrate SQL, the general
design of software for dBase-compatible systems during that period
would not have supported moving the back end to a SQL server, since
much of the code would still have been using record numbers and
seek()-type code.

This design decision is also why Access as a development product was able to
adopt ADO many years later. FoxPro was forever tied to its data engine, and
code such as BROWSE or "while each record" loops was really from a
file-based era.

To be fair, MSFT did a great job extending FoxPro over the years to work
with client/server despite its roots. Another strategic change they made to
FoxPro was the adoption of a full OO development platform; these two
together really did turn FoxPro into a great development product. However,
during some of the transitions to new technologies, one of them being the
first Windows edition of FoxPro, the product did stumble a bit at the very
time Access arrived on the scene. Sometimes historic baggage can be a
handicap.

And today we see Access + JET with disconnected table support (SharePoint).
However, the local data store is still high-performance JET tables.

And if you are a .NET developer, the transition to 64 bits is seamless. You
can also use the same .NET IDE to write code for a smartphone, or
write code for a web site.

So, the fruits and advantages of .NET code, such as the seamless transition
to 64 bits or writing code for a smartphone, could not be seen by most
people 10 years ago.

The critical concept in the above is the investment in future-proofing
technologies and architectures that are going to solve the new business
problems that will clearly be the norm in the future (and cloud computing,
whether we like it or not, is going to be one of those scenarios).

Unfortunately for Access, VBA + Win API code is a pain to move to 64
bits. This issue is "mostly" due to us not having had a true pointer data
type.

For Access 2010 we have a new version of VBA, and it now has a pointer
data type (and also a new LongLong data type). If we had had a pointer
data type from day one in Access, the transition to 64-bit Windows API
code would have been far easier. As it is now, the best advice I have is to
avoid API code in Access when possible.
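For those who cannot avoid API code, the usual mitigation is the real VBA 7 conditional-compilation constant. A minimal sketch (GetTickCount is an actual kernel32 function, used here purely as an example):

```vba
' Sketch: one Declare that compiles under both VBA 7 (Access 2010,
' 32- or 64-bit) and older VBA hosts.
#If VBA7 Then
    ' PtrSafe asserts this declaration has been reviewed for 64-bit use.
    Private Declare PtrSafe Function GetTickCount Lib "kernel32" () As Long
#Else
    Private Declare Function GetTickCount Lib "kernel32" () As Long
#End If
```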

Albert K.


From: David W. Fenton on
"Albert D. Kallal" <PleaseNOOOsPAMmkallal(a)msn.com> wrote in
news:PfE5o.56064$YX3.29616(a)newsfe18.iad:

[]

> So, FoxPro + SQL was great for reports, as back then the handling of
> relational data in dBase-like products was poor. You had nothing to
> pull data together from multiple tables. Adding SQL to FoxPro really
> did the trick, and I instantly loved it.

But that's really the point -- SQL was added on top of the existing
database engine, whereas with Jet, I'm pretty sure it's baked into
the low-level interfaces to the database engine.

[]

> This design decision is also why Access as a development product
> was able to adopt ADO many years later. FoxPro was forever tied
> to its data engine, and code such as BROWSE or "while each record"
> loops was really from a file-based era.

I don't really see your point on ADO, because that's at a level
higher than the actual database engine itself, i.e., an abstraction
layer. And one of the reasons ADO doesn't work so well is precisely
because it's not designed around Jet. Access uses Jet in every way,
and DAO is built around Jet, so it's an ideal fit. ADPs didn't work very
well because you had to add more layers in between Access and SQL
Server than were there using Jet (and there are certainly more
layers than are involved with Access's native database engine).

I think the design of Jet was the real success, in that it was
created with ISAMs and ODBC support and a whole host of tools for
accessing data in other formats and presenting it within Access in
the same fashion so that all was accessible via SQL. This was as
much because VB needed a database layer as it was the needs of
Access users/developers, seems to me, and Access benefited from the
fact that MS's flagship development languages needed an easy data
access layer that could deal with just about any kind of data you
could think of.

ADO is just an abstraction layer like ODBC, seems to me, and really
there is nothing special about Access being able to use it, except
in that they wired it into some places like form/report recordsets.
But I also note that this is also one of the least reliable parts of
the ADO integration, as it's hard to predict exactly how things will
behave.
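For what it's worth, that form-recordset wiring looks roughly like this sketch; the connection string, database, and table names are hypothetical, and the client-side cursor is the detail that most often makes or breaks it:

```vba
' Sketch: binding an Access form to an ADO recordset (possible since
' Access 2002). Server, catalog and table names are hypothetical.
Private Sub Form_Load()
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=myserver;" & _
            "Initial Catalog=Sales;Integrated Security=SSPI"
    Set rs = New ADODB.Recordset
    rs.CursorLocation = adUseClient   ' client cursor needed for form binding
    rs.Open "SELECT * FROM Orders", cn, adOpenStatic, adLockOptimistic
    Set Me.Recordset = rs             ' the form/report recordset wiring
End Sub
```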

[]

> And, today we see Access + JET with disconnected table support
> (SharePoint). However, the local data store is still high
> performance JET tables.

I am very glad that they have made this decision instead of going
with some other kind of local data store because it means we get the
best of all possible worlds. That is, Jet gets extended for
compatibility with Sharepoint (table-level data macros!!! Huge!),
but at the same time remains as a usable desktop standalone database
engine.

As long as MS needs it for the local data store, it will continue to
be developed, which means we are going to continue to be able to use
it for standalone development (the meat and potatoes of my client
base, for instance).

What this means is that Microsoft's enterprise agenda this time
around *benefits* Access in ways that also benefit me and my client
base, something that did not happen at all the previous time MS made
huge changes to Access to reflect its enterprise agenda (i.e.,
A2000, with ADPs and ADO).

[]

> For Access 2010 we have a new version of VBA, and it now has a
> pointer data type (and also a new LongLong data type). If we had
> had a pointer data type from day one in Access, the transition to
> 64-bit Windows API code would have been far easier. As it is now,
> the best advice I have is to avoid API code in Access when
> possible.

Well, it seems to me they've made it as easy as possible to adjust
so it will run under 64-bit, don't you think?

--
David W. Fenton http://www.dfenton.com/
contact via website only http://www.dfenton.com/DFA/
From: Albert D. Kallal on

"David W. Fenton" <NoEmail(a)SeeSignature.invalid> wrote in message
news:Xns9DC993EAB78E6f99a49ed1d0c49c5bbb2(a)74.209.136.90...

>> For Access 2010 we have a new version of VBA, and it now has a
>> pointer data type (and also a new LongLong data type). If we had
>> had a pointer data type from day one in Access, the transition to
>> 64-bit Windows API code would have been far easier. As it is now,
>> the best advice I have is to avoid API code in Access when
>> possible.
>
> Well, it seems to me they've made it as easy as possible to adjust
> so it will run under 64-bit, don't you think?
>

Yes, we are very lucky here.

They added about 7 new functions, two or three new types, and some
conditional compilation options to deal with this.

I still think the C++ developers will have an easier transition to 64
bits, because they used a true pointer type variable with the Win APIs.

In Access, we used the same data type (Long) for both 32-bit pointers and
long values. For the C++ developer, most 64-bit Win API calls will work by
simply changing the 32-bit pointer to a 64-bit pointer; most of the other
Win API values remain 32 bits unless a wider type is needed.

With Access, since we used Long for both pointers and long values, we
cannot tell what the intention was for a given variable, and so can't
automatically widen all pointers to 64 bits. We simply don't know whether a
variable holds a long value or a pointer.

With A2010 and the new version of VBA (VBA 7), we get both a 64-bit integer
type (LongLong) and ALSO a new pointer data type (LongPtr). So now we can
have a different variable type based on the intention of the developer.
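A short sketch of what that declared intent now looks like. FindWindowA is a real user32 function; the variable names are just illustration, and under the old VBA both variables below would have been plain Long:

```vba
' Sketch: with VBA 7 the developer's intent is visible in the type itself.
#If VBA7 Then
    ' LongPtr resizes with the platform: 32 bits in 32-bit Office,
    ' 64 bits in 64-bit Office.
    Private Declare PtrSafe Function FindWindowA Lib "user32" _
        (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr

    Public Sub ShowIntent()
        Dim hWnd As LongPtr      ' a handle/pointer: follows platform size
        Dim recordCount As Long  ' an ordinary integer: stays 32 bits
        hWnd = FindWindowA("XLMAIN", vbNullString)
        Debug.Print hWnd, recordCount
    End Sub
#End If
```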

At the end of the day, we have the investment into the future of 64 bits.

FoxPro and VB6 folks don't have a 64-bit version of their platform, and
that is going to be a BIG problem down the road.

We Access developers did get this investment. Thankfully, this also
includes a 64-bit version of our JET engine (now called ACE).

This lack of 64-bit support for others will show up more in 5 or so years
down the road, as more applications become 64-bit, since 32-bit apps can't
automate 64-bit applications. I'm not ready to jump to Access (64) yet, but
at least the path is open, and I will have to walk down it sooner or later....

Albert K.