From: Peter Crawford on
On Tue, 23 Jan 2007 23:12:25 -0800, David L Cassell
<davidlcassell(a)MSN.COM> wrote:

>david.johnson(a)CBA.COM.AU wrote back:
>>On Fri, 14 Jan 2005 14:56:18 -0800, David L. Cassell
>><cassell.david(a)EPAMAIL.EPA.GOV> wrote:
>>
>> >Curtis Amick <curtis(a)SC.RR.COM> wrote:
>> >> Got a difficult problem here. Recently my company upgraded network
>> >> storage to an EMC NAS (Network Attached Storage), from a non-NAS
>> >> system. Now, those of us who store SAS data sets on the network are
>> >> encountering a serious problem. When updating data sets, sometimes
>> >> (rarely) those data sets will be deleted. The error message looks like:
>> >> "ERROR: Rename of temporary member for (data set name) failed. File
>> >> may be found in a directory (your directory)" and the permanent data
>> >> set is gone.
>> >>
>> >> This happens randomly, and (apparently) only when the data set already
>> >> exists. That is, when doing something like this:
>> >>
>> >> DATA NETDRIVE.DATASET; SET DATASET2; RUN;
>> >>
>> >> If netdrive.dataset already exists (it's being "updated" by
>> >> work.dataset2), then this error *might* occur. If netdrive.dataset
>> >> does not yet exist (it's being created by work.dataset2), then the
>> >> problem will not occur.
>> >>
>> >> From SI Tech Support: They've seen this before (see SAS Note 005781,
>> >> link here: http://support.sas.com/techsup/unotes/SN/005/005781.html ),
>> >> but they can't fix it because (according to the TS rep) once SAS wants
>> >> to write to NAS, they "hand it off" to the network. And that's when
>> >> the problem occurs.
>> >>
>> >> Here's what I think: When SAS updates a data set, it creates a
>> >> temporary data set to work on, keeping the original intact. When the
>> >> step ends (think PROC SORT DATA=ND.dataset; RUN; this killed me on
>> >> Saturday: I had a macro that sorted 20+ data sets, and lost 4!!! of
>> >> them), the original data set is over-written by the temp, which takes
>> >> on the name of the original. And I'm thinking it's during that
>> >> writing/re-naming process that the storage system is losing our data
>> >> sets. (SI calls it a "timing issue".) It doesn't happen when working
>> >> on local drives, and, as I mentioned earlier, it hasn't happened yet
>> >> when *creating* permanent data sets; only when updating.
>> >>
>> >> Some suggestions (from SITS): change engines (v8, v612) (doesn't work,
>> >> not feasible), use -SYNCHIO (have tried it; doesn't seem to help),
>> >> remove SAS data sets from on-line virus scanning in the NAS (our IS
>> >> dept is leery of that one). Personally, I'd like to go back to the
>> >> previous storage (non-NAS; the IS dept isn't thrilled with that one).
>> >>
>> >> We can probably get around this problem by programming like so:
>> >> DATA ABC;
>> >>    SET ND.DATASET;
>> >>    (play with data set ABC...)
>> >> RUN;
>> >>
>> >> (delete ND.DATASET)
>> >>
>> >> DATA ND.DATASET;
>> >>    SET ABC;
>> >> RUN;
>> >>
>> >> But I'd prefer something cleaner and less intrusive (especially for
>> >> our less "sophisticated" users). Plus, we've got LOTS of programs that
>> >> are run daily, weekly, monthly, etc. that contain steps like "proc
>> >> sort data=ND.xxxx; run;" and/or "data ND.xxxx; set ND.xxxx abc; run;"
>> >> and/or (well, you get the picture).
>> >>
>> >> To the point: has anyone else had this problem, and (if so) what did
>> >> you do to solve it?
>> >
>> >I haven't seen this problem before. But I'd just like to vent.
>> >
>> >How can your IS people not be responsive on this? Go to your bosses and
>> >show them how EXPENSIVE this is going to be. If your IS won't or can't
>> >fix this problem (scrap NAS or get it fixed), then you and all your
>> >other SAS people will have to re-write every bit of your SAS code to
>> >only create new data sets: this means sorting from the old set to a new
>> >one using the OUT= option. This will explode the disk space
>> >requirements on the network, costing the company *more* money, on top
>> >of the cost of all the programmer hours to alter and then test and then
>> >debug all the SAS code. Make it into a business case, and show your
>> >bosses that this problem with NAS is going to cost them hundreds of
>> >thousands of dollars in this fiscal year alone, as well as wrecking the
>> >schedule for any new programming projects (factor in all costs for that
>> >as well).
>> >
>> >There is no excuse for your IS not to have EMC all over this. EMC has a
>> >rep as a really responsive solutions provider, and I can't believe they
>> >got that rep by letting stuff like this happen.
>> >
>> >I wish I had better advice, but this isn't a SAS problem.
>> >
>> >David
>> >--
>> >David Cassell, CSC
>> >Cassell.David(a)epa.gov
>> >Senior computing specialist
>> >mathematical statistician
>
>>
>>Actually Dave, I'm not convinced that it isn't a SAS problem.
>>
>>I have a track open with SAS on a similar issue involving a rename from
>>the restructuring of a small data set. It is one within some hundreds of
>>data sets created and modified in a work library as part of 46 programs
>>included in a batch sequence.
>>
>>At irregular times, the "rename of temporary member" message comes up,
>>SAS goes into syntax checking mode and sets Obs to 0, and the batch
>>manager detects an error and terminates the sequence.
>>
>>It doesn't appear to be the same place twice, it isn't practical to
>>replace every data step with a new output table followed by a delete
>>step, and it isn't an issue with space or permissions. So most of the
>>usual diagnoses are irrelevant.
>>
>>SYNCHIO is turned on, and the tables use V7-compliant naming and
>>structure, so a V6.12 library definition is out as well.
>>
>>It has been plaguing us for months, and seems very similar to the issue
>>described here by Curtis, and to similar issues reported by other
>>correspondents for quite some time.
>>
>>The difference is that the work directory is on a virtual drive created
>>on a RAID 5 array in a high-end workstation. Yes, I know RAID slows
>>performance on work libraries, and it isn't my choice, but it's the way
>>this machine has been built, and I don't have the option to change it to
>>JBOD.
>>
>>The core issue seems to be: the V8 engine, when talking to a network
>>drive, a NAS drive or a RAID array, is expecting a process to be
>>finished before it has physically completed.
>>
>>How many times have we had to code delays into programs to deal with OS
>>response times? I have a hunch this is similar, and the SAS V8 engine is
>>expecting something the Windows architecture cannot always deliver.
>>
>>Incidentally, since it is irregular, and since it is a batch process with
>>small included code objects, I am looking at the batch manager
>>resubmitting the same code block if it fails with an error of this type.
>>Now I only need to be able to reset the SAS error flags. Unfortunately,
>>since I can't predict when the error will occur, it is going to be some
>>time before I will know if the changes to the batch manager work.
>>
>>Kind regards
>>
>>David
>
>I suspect that it *is* a SAS-related problem. But that does not make
>it a SAS problem. Right? Do you have other apps which sufficiently
>stress the disk I/O and buffering of the system? You might have to
>write one yourself in C, because SAS is pretty darn efficient at
>read/write, and it may be overtaxing your system components.
>
>If nothing else - even highly tuned code to pump streams of data in
>and out of your I/O subsystems - can cause this problem, then I
>would have to point a finger at SAS. But if other high-end I/O apps
>can cause similar problems, then it's the system.
>
>Pinning this down may be a *major* pain in the NAS. :-)
>
>HTH,
>David
>--
>David L. Cassell
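
Cassell's OUT= suggestion amounts to never updating a member in place. A
minimal SAS sketch of that approach (the library ND, member names, and
the BY variable ID are hypothetical):

```sas
/* Sketch of the OUT= approach: never sort a NAS member in place.
   Library ND and the BY variable ID are made up for illustration. */
proc sort data=nd.dataset out=nd.dataset_v2;  /* write a NEW member */
   by id;
run;

/* Only once the new member exists intact, retire the old one. */
proc datasets lib=nd nolist;
   delete dataset;
quit;
```

Downstream code would then read DATASET_V2, or swap the names with a
PROC DATASETS CHANGE statement; note, though, that CHANGE itself
performs an OS-level rename and so may be exposed to the same timing
window.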


Would it be worth looking for a "global" solution within SAS?

If the problem does not occur when the data set is not being replaced,
how about SAS generation data sets? Then the data never "replaces" the
OS file name.

I presume there is some global option that could switch this on to
default to two generations. OK, that is going to double intermediate
storage, but pending some work at SI, it might be the solution needing
the least work.

For tidying up the unwanted data, I presume there will be some way to
delete all data not in the current generation.

At least it reduces the problem without rewriting a lot of code.

Pity I can't find a global system option like the data set option
(genmax=2)
;-)
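
In practice Peter's idea might look like this; the library and member
names are hypothetical, and as he notes, GENMAX= exists only as a data
set option, not as a global system option:

```sas
/* Hypothetical sketch: keep two generations of the member so a failed
   rename never leaves you with nothing. */
data nd.dataset (genmax=2);
   set work.dataset2;
run;

/* Later, tidy up: delete the historical generations, keeping only the
   current (base) version. */
proc datasets lib=nd nolist;
   delete dataset (gennum=hist);
quit;
```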

Peter Crawford
From: "Johnson, David" on
Thank you Dave,

I first posted here on this subject last October, and noted then that
people have had the issue in one form or another for two years or more.

A theme that seems to recur is that the V8 engine is very quick. I
have never seen the problem on any of my Sun boxes, and they range from
three-year-old mid-range architecture to considerably older.

I accept that an application should work within the constraints of its
host, and that pushing too hard is not a good ploy, and so I will not
point the finger.

Still, I will be happy to cover a large number of beer mats in a
Floridian bar with my reservations about the hardware and software on
which we run Windows. Of course, I will have to drink a glass off each
first, so while my reasons may become more insistent, they may become
less focussed <grin>

As to being a pain: I am not the first person to come in to try to deal
with these issues. A major complaint my predecessors had was the
stability of the platform. Things have improved markedly, but last
night reminded me not to be complacent.

Kind regards

David



-----Original Message-----
From: SAS(r) Discussion [mailto:SAS-L(a)LISTSERV.UGA.EDU] On Behalf Of
David L Cassell
Sent: Wednesday, 24 January 2007 6:12 PM
To: SAS-L(a)LISTSERV.UGA.EDU
Subject: Re: ERROR: Rename... Losing data sets from network drives


************** IMPORTANT MESSAGE *****************************
This e-mail message is intended only for the addressee(s) and contains information which may be
confidential.
If you are not the intended recipient please advise the sender by return email, do not use or
disclose the contents, and delete the message and any attachments from your system. Unless
specifically indicated, this email does not constitute formal advice or commitment by the sender
or the Commonwealth Bank of Australia (ABN 48 123 123 124) or its subsidiaries.
We can be contacted through our web site: commbank.com.au.
If you no longer wish to receive commercial electronic messages from us, please reply to this
e-mail by typing Unsubscribe in the subject line.
**************************************************************
From: Martin Gregory on
Possibly a long shot, but I once had a similar issue with files in the
WORK library on a client PC. I don't recall the exact message, but SAS
was not able to access a file in WORK. It was also apparently random:
sometimes it would be a data set created by our application, sometimes
one of the utility files created by SAS. It turned out that a network
backup program had been scheduled to run every 15 minutes (!) and it was
locking files in the WORK library while it was doing the backup.

Is it possible that something similar is going on? Does this NAS keep
snapshots? It might be doing some behind-the-scenes backing up in a not
very intelligent way.

-Martin

From: "Johnson, David" on
Thank you Martin. I've suspected there may be hidden processes running,
which is why I am trying to pin down the instances. Unfortunately they
are not predictable. Last night, for instance, 11:59:38 of processing
went through without any incident, which means I don't have a probative
test of the resubmission changes I made. We'll see whether this holiday
weekend produces anything different.

A lot of clients now have outsourced IT infrastructure, and to manage
the process the outsourcing company often places pervasive and hidden
processes on the machines to monitor software, synchronise user data
between local and remote drives and lock down settings. This prevents
using some of the excellent tools that have been recommended for various
issues, and may also mean any of a number of covert processes is
conflicting with the batch.

Nobody has said it yet, but this is a workstation, not a server, and
batch processing should be done on the right platform. If you get
conflicts from running batches on a workstation, then sometimes you just
have to accept that, or use the process to migrate the job to the
platform for which it is suited. Unfortunately, some regulatory
authorities might be unwilling to wait the extra time for that migration
to be completed before delivery of their information.

Kind regards

David

-----Original Message-----
From: SAS(r) Discussion [mailto:SAS-L(a)LISTSERV.UGA.EDU] On Behalf Of
Martin Gregory
Sent: Thursday, 25 January 2007 3:49 AM
To: SAS-L(a)LISTSERV.UGA.EDU
Subject: Re: ERROR: Rename... Losing data sets from network drives

From: Martin Gregory on
David,

You have probably considered and discarded this idea, but in case you
haven't, and if you have enough space on the workstation, can you:

- copy the entire library to the workstation
- assign a libref on the workstation
- run your existing code
- if all went well, copy back after deleting the files on the NAS
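
A minimal sketch of that round trip; the physical paths and librefs are
made up for illustration:

```sas
/* Hypothetical librefs for the NAS library and a local staging area. */
libname nas   '\\emcnas\share\sasdata';
libname local 'C:\staging\sasdata';

proc copy in=nas out=local;        /* 1. pull the library down        */
run;

/* 2. ... run the existing code against LOCAL here ... */

proc datasets lib=nas kill nolist; /* 3. delete the members on the NAS */
quit;

proc copy in=local out=nas;        /* 4. push the results back        */
run;
```

Note that step 4 only ever *creates* members on the NAS, which is the
case Curtis reports as safe; the risky in-place updates all happen on
the local drive.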

cheers, Martin

On 01/25/2007 12:12 AM, Johnson, David wrote: