We have been looking at this issue recently: it came up while developing a utility that monitors log files as they are updated.

On Windows Server 2003, opening the log file folder in Explorer, you can see the timestamp and file size change before your eyes each time the log is updated. On Windows Server 2008, the "Last Modified" field on log files is not updated unless another program attempts to open the file or the utility is stopped, even if F5 is pressed to refresh the view.

Explorer gets its information from NTFS. Using a command prompt and "dir", we found that the NTFS metadata for a file is not updated until the handle to the file is closed. Refreshing the information of a FOLDER just goes to the (memory-resident) metadata cached by NTFS, but querying a file explicitly will force disk I/O to get the properties. This was a design change introduced in Vista to reduce unnecessary disk I/O and improve performance.

There are some exceptions to this rule:
- in some, but not all, cases a simple "dir filename" is enough to refresh the metadata
- "special" folders may be treated differently, such as user profiles, where we do not expect a large number of files and want to be able to rely on the file data presented
- kernel filter drivers may change the behaviour, as by design they "add, remove or change functionality of other drivers"

As the workaround is for any process to open and close a handle to the log files, a tool was written to do exactly that, plus get the file information, using the following APIs:
- CreateFile
- GetFileInformationByHandle
- CloseHandle

Reference: http://social.technet.microsoft.com/Forums/en-US/winservergen/Thread/2B8BACA2-9C1B-4D80-80ED-87A3D6B1336F
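The workaround tool described above calls the Win32 APIs CreateFile, GetFileInformationByHandle and CloseHandle directly. As a rough, cross-platform sketch of the same open/query/close cycle (os.open, os.fstat and os.close go through the C runtime rather than the Win32 APIs, so treat this as an illustration, not the original tool):

```python
import os

def touch_and_stat(path):
    """Open a handle to the file, query its information through the
    open handle, then close the handle. On Vista/2008 and later this
    forces NTFS to flush the cached size/timestamp metadata so that
    Explorer and "dir" show current values."""
    fd = os.open(path, os.O_RDONLY)   # analogous to CreateFile
    try:
        info = os.fstat(fd)           # analogous to GetFileInformationByHandle
    finally:
        os.close(fd)                  # analogous to CloseHandle
    return info.st_size, info.st_mtime
```

Running this periodically against each log file of interest is enough to make the directory metadata catch up, since any process opening and closing a handle triggers the update.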
Any update on this problem? Any way for us to indicate to the operating system that there are other "special folders" that should be updated normally, like you indicate the user profile folders are?
We are experiencing this same thing and it's a major problem for us, since our IIS logfiles constantly show "zero bytes" in size, even though they are in fact filled with data. Thus our log parsing utility (Urchin 7 in this case) does not read the logfiles in, because it thinks they are empty...
We have also implemented the hack that you mentioned, having a program open the file, which causes the reported file size to update so that Urchin will import the data. However, we would like Microsoft to offer a better solution that doesn't require a workaround.
We have the same problem. Is there any solution to force Windows to report the correct modified timestamp even if it means more disk I/O?
Has anyone addressed this issue?
The registry hack "Last Access Timestamp" does not work.
I have an automation system that utilizes both Windows 7 and Server 2008 R2. The systems write daily log files. The metadata for these files is not updated unless a user directly opens the file (at which point the OS throws an error saying the file is already open, and you can see the file size grow).
Meaning that a batch script (using xcopy) designed to copy new logs to the appropriate directories will copy zero-byte files.
If there is not a direct fix, is there a workaround to update the metadata of the files in a given directory from the command line?
I run this before my log parser kicks off. It does a directory listing of each file from today, and that causes the file size to update, though not necessarily the modified date.
for %i in (1 2 3 4) do for /f %G in ('"dir /b \\web%i\iislog\u_ex%date:~12,2%%date:~4,2%%date:~7,2%.log /s"') do dir %G
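A rough Python equivalent of that batch one-liner, for anyone who prefers to avoid the locale-sensitive %date% substring slicing: it builds today's IIS log name (u_exYYMMDD.log) for each web server and opens/closes it, which forces NTFS to update the cached size. The \\webN\iislog share layout and four-server loop mirror the batch example above and are assumptions, not a general convention.

```python
import datetime

def todays_log_path(server, share="iislog", when=None):
    """UNC path of today's IIS W3C log on a given server."""
    when = when or datetime.date.today()
    return "\\\\{0}\\{1}\\u_ex{2:%y%m%d}.log".format(server, share, when)

def refresh(paths):
    """Open and close each log so NTFS flushes its cached metadata."""
    sizes = {}
    for path in paths:
        try:
            with open(path, "rb") as f:  # open + close triggers the update
                f.seek(0, 2)             # 2 = SEEK_END
                sizes[path] = f.tell()
        except OSError:
            sizes[path] = None           # log missing or server unreachable
    return sizes

# e.g. refresh(todays_log_path("web%d" % i) for i in range(1, 5))
```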
Any solution to this? We are seeing the exact same issue.
1. Right-click the "Date" column header at the top where the wrong date is showing (not the file, but the header for that column).
2. Then choose "Date modified", right-click again, and uncheck "Date".
The "Date" column you are looking at is "Date taken" or "Created"; change it to "Date modified" and you should be good.
worked for me
Still looking for a solution to this issue - our backup software monitors the file size of the target file and assumes that if the size has not increased for 15 minutes, something has gone wrong. Needless to say, with large files on a slow link our backups fail a lot!
Looking for a script I can run on the server on a 5 minute schedule that will update the metadata or perhaps a Registry key to modify this behavior globally on a server.
Any help would be appreciated.
This is absolutely crazy. The 'Last Modified' timestamp property is used in lots of systems and is now no longer reliable. Trying to justify it as a reduction in unnecessary disk I/O is a joke. It is necessary! Other processes often need to know the last time a file was actually modified.
In our scenario we are pulling IIS log files from a server via FTP. We don't want to pull the currently active log file as it is incomplete. The natural solution is to check the 'Last Modified' timestamp (the 'Created' timestamp is not available over FTP). But IIS keeps its active log file open right till it rolls over and creates a new active log. This results in two log files having exactly the same 'Last Modified' timestamp - the active one and the previous one.
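One way around the duplicate-timestamp problem in the FTP scenario is to stop relying on "Last Modified" at all: since IIS names its W3C logs u_exYYMMDD.log, a lexicographic sort is also chronological, so the newest name can be assumed to be the active, still-open log and skipped. A minimal sketch of that selection logic (the u_ex prefix matches the default W3C naming; adjust if your site uses a different log format):

```python
def completed_logs(filenames):
    """Return the IIS log names that are safe to pull over FTP:
    everything except the lexicographically newest, which is
    assumed to be the active, still-open log."""
    logs = sorted(n for n in filenames
                  if n.startswith("u_ex") and n.endswith(".log"))
    return logs[:-1]

# Hypothetical usage with the standard-library FTP client
# (server address and log directory are placeholders; FTP.nlst may
# return full paths depending on the server, so strip them first):
#
#   from ftplib import FTP
#   ftp = FTP("example.com")
#   ftp.login("user", "password")
#   for name in completed_logs(ftp.nlst("/logs/W3SVC1")):
#       ...retrieve name...
```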