Showing posts with label memory. Show all posts

Wednesday, March 21, 2012

Problem getting SQL Server 2000 EE to use more than 4GB of memory

We are trying to get our SQL Server 2000 Enterprise Edition server to
use more than 4GB of memory on a machine with 8GB of physical memory
(running Windows 2003 Server Enterprise Edition).
I have done the following:
1. Updated Boot.ini to add the /3GB and /PAE switches
2. Set max server memory to 7168 MB using sp_configure
3. Set min server memory to 1024 MB using sp_configure
4. Enabled AWE using sp_configure
5. Granted the account that SQL Server runs under (SYSTEM) the
"Lock Pages in Memory" user right
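For reference, steps 2-4 map to sp_configure calls along these lines (a sketch, run from Query Analyzer; 'awe enabled' and the memory settings are advanced options, and the AWE change only takes effect after the SQL Server service is restarted):

```sql
-- Make the advanced options visible to sp_configure
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'awe enabled', 1          -- step 4
EXEC sp_configure 'max server memory', 7168 -- step 2 (MB)
EXEC sp_configure 'min server memory', 1024 -- step 3 (MB)
RECONFIGURE
-- Restart the SQL Server service for 'awe enabled' to take effect
```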
When I look at "total server memory" for SQL Server under the
performance viewer, it is using 4GB of memory. When I look at the task
manager, we have 2GB of physical memory free. I understand that with
AWE, the OS needs 1GB of memory to manage the extended memory. With
the /3GB switch, the OS will allocate 1GB to the kernel. So that makes
6GB of memory... Why are there 2GB of memory free, and why can't SQL
Server use it? I even added up the memory of all processes running in
the task manager, and it adds up to maybe 30MB.
Any ideas why we only get 4GB for SQL Server? We want to put 16GB of
RAM in this server, but until we can prove that SQL Server can
actually use it, we can't justify it. Is this 2GB of "free" RAM really
free, or allocated to something else such that SQL Server cannot use
it?

Has your workload driven the SQL instance sufficiently hard on memory? Note
that until SQL Server is pressured to use more memory, it won't.
Linchi
"advisortechnical" wrote:
> We are trying to get our SQL Server 2000 Enterprise Edition server to
> use more than 4GB of memory on a machine with 8GB of physical memory
> (running Windows 2003 Server Enterprise Edition).
> I have done the following:
> 1. Updated Boot.ini to have the /3GB /PAE switch
> 2. set Max Server Memory to 7168MB using sp_configure
> 3. set Min Server Memory to 1024MB using sp_configure
> 4. set AWE enabled using sp_configure
> 5. configured user account that SQL Server runs under (SYSTEM) to have
> rights for the "Lock Page in Memory" policy
> When I look at "total server memory" for SQL Server under the
> peformance viewer, it is using 4GB of memory. When I look at the task
> manager, we have 2GB of physical memory free. I understand that with
> AWE, the OS needs 1GB of memory to manage the extended memory. With
> the /3GB switch, the OS will allocate 1GB to the kernel. So that makes
> 6GB of memory... Why are there 2GB of memory free, and why can't SQL
> Server use it? I even added up the memory of all processes running in
> the task manager, and it adds up to maybe 30MB.
> Any ideas why we only get 4GB for SQL Server? We want to put 16GB of
> RAM in this server, but until we can prove that SQL Server can
> actually use it, we can't justify it. Is this 2GB of "free" RAM really
> free, or allocated to something else such that SQL Server cannot use
> it?
>|||On Apr 24, 8:28=A0am, Linchi Shea <LinchiS...@.discussions.microsoft.com>
wrote:
> Has your workload driven the SQL instance sufficiently hard on memory? Not=e
> that until SQL Server is pressured to use more memory, it won't.
> Linchi
>
> "advisortechnical" wrote:
> > We are trying to get our SQL Server 2000 Enterprise Edition server to
> > use more than 4GB of memory on a machine with 8GB of physical memory
> > (running Windows 2003 Server Enterprise Edition).
> > I have done the following:
> > 1. Updated Boot.ini to have the /3GB /PAE switch
> > 2. set Max Server Memory to 7168MB using sp_configure
> > 3. set Min Server Memory to 1024MB using sp_configure
> > 4. set AWE enabled using sp_configure
> > 5. configured user account that SQL Server runs under (SYSTEM) to have
> > rights for the "Lock Page in Memory" policy
> > When I look at "total server memory" for SQL Server under the
> > peformance viewer, it is using 4GB of memory. When I look at the task
> > manager, we have 2GB of physical memory free. I understand that with
> > AWE, the OS needs 1GB of memory to manage the extended memory. With
> > the /3GB switch, the OS will allocate 1GB to the kernel. So that makes
> > 6GB of memory... Why are there 2GB of memory free, and why can't SQL
> > Server use it? I even added up the memory of all processes running in
> > the task manager, and it adds up to maybe 30MB.
> > Any ideas why we only get 4GB for SQL Server? We want to put 16GB of
> > RAM in this server, but until we can prove that SQL Server can
> > actually use it, we can't justify it. Is this 2GB of "free" RAM really
> > free, or allocated to something else such that SQL Server cannot use
> > it... Hide quoted text -
> - Show quoted text -
It seems you have done everything required to configure
SQL Server AWE. You can also stress the server by generating some test
data and querying it.
Please verify that you have SP4 applied, and the AWE fix on top of it.
Thanks
Ajay Rengunthwar
MCTS, MCDBA, MCAD

On Apr 24, 8:12 pm, Ajay Rengunthwar <aju...@gmail.com> wrote:
Thanks for the advice. I ran a stress test by opening several Query
Analyzer windows and running a SELECT * query on a table with millions
of rows. I found that SQL Server -> Memory Manager -> Target Server Memory
and Total Server Memory stayed fixed at 4164408 KB (3.97 GB); however, the
amount of available physical memory in the Windows task manager had
shrunk to 1 GB from 2 GB (and it is continuing to shrink as the test is
still running).
So I think on a machine with 8GB, with AWE and PAE enabled, SQL Server
can only take 3.97 GB of memory, because a certain amount of memory
needs to be free for Windows, even though it is "free" memory and the
kernel has 1GB of memory. This free memory is decreasing as the stress
test is running.
I am wondering: if I add more RAM to this machine, will SQL Server
be able to use it?

On Apr 25, 4:08 pm, advisortechnical <chand.bel...@caremark.com>
wrote:
I think it is the system cache that needs this memory. When I run a
stress test, the amount of available memory decreases and the system
cache increases. It is confusing -- why would it say there are 2GB of
free memory if it was allocated to the system cache? Of course, some
memory must be given to the system cache, otherwise these huge queries
couldn't do disk I/O efficiently. I changed one of the registry
settings to allow a large system cache, but I saw no change in SQL
Server memory utilization.
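As an aside, the memory counters discussed in this thread can also be read from inside SQL Server 2000 via the master..sysperfinfo system table, which can be handy for logging during a stress test. A sketch (values are in KB; the exact counter_name spellings vary slightly by build, hence the LIKE patterns):

```sql
-- Read SQL Server's own view of its memory targets (values in KB)
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name LIKE '%Server Memory%'
```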

Friday, March 9, 2012

problem deleting large no. of records

Hi all,
I have a table with 6 million rows which takes up about 2GB of space on
the hard disk, so we have decided to clean this table up. We have decided to
delete all records that have syncstamp and logstamp field values less than
the value corresponding to '20040131'. This will probably delete 5.5 million
rows out of the total 6 million.
When I try to delete records using the following script, it is very slow. The
script did not finish executing in three hours, so we had to cancel its
execution. Also, the users were not able to use the conttlog table
while this query was executing, although I am using the ROWLOCK table hint.
Is there any other way to fix the speed and concurrency issues with this
script? I know I can't use a loop to delete 5.5 million rows because it will
probably take days to execute it.
Thanks in advance.
Thanks in advance.
-- ***********************************************
-- Variable declaration
-- ***********************************************
DECLARE @Date datetime,
        @syncstamp varchar(7)
-- ***********************************************
-- Assign variable values
-- ***********************************************
SET @Date = '20040131' -- yyyymmdd -> purge logs up to this date
-- ***********************************************
-- Delete conttlog records
-- ***********************************************
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
DELETE
FROM conttlog WITH (ROWLOCK)
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp

sql
Do you have a primary key on the table?
I'd try to divide the 'big' transaction/deletion into small ones.
See this example:
SET ROWCOUNT 1000 -- Set the batch size
WHILE 1 = 1
BEGIN
DELETE FROM MyTable WHERE col <= @cutoff -- @cutoff: your purge cutoff value
IF @@ROWCOUNT = 0
BEGIN
BREAK
END
ELSE
BEGIN
CHECKPOINT
END
END
SET ROWCOUNT 0
"sql" <donotspam@.nospaml.com> wrote in message
news:uUV2eXbGFHA.3284@.TK2MSFTNGP10.phx.gbl...
> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory
on
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million
rows
> out of total 6 million.
> When I try to delete records using following script, it is very slow.
The
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog
table
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with
this
> script? I know I can't use a loop to delete 5.5 million rows because it
will
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
>|||I would recommend doing this in smaller batches -- maybe 10,000 rows at
once:
-- ***********************************************
-- Variable declaration
-- ***********************************************
DECLARE @Date datetime,
        @syncstamp varchar(7)
-- ***********************************************
-- Assign variable values
-- ***********************************************
SET @Date = '20040131' -- yyyymmdd -> purge logs up to this date
-- ***********************************************
-- Delete conttlog records
-- ***********************************************
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
SET ROWCOUNT 10000
DELETE
FROM conttlog
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp
WHILE @@ROWCOUNT > 0
BEGIN
DELETE
FROM conttlog
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp
END
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--
"sql" <donotspam@.nospaml.com> wrote in message
news:uUV2eXbGFHA.3284@.TK2MSFTNGP10.phx.gbl...
> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory
on
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million
rows
> out of total 6 million.
> When I try to delete records using following script, it is very slow.
The
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog
table
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with
this
> script? I know I can't use a loop to delete 5.5 million rows because it
will
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
>|||Hi
Do it in smaller batches. This will help with performance.
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
SET ROWCOUNT 10000
DELETE
FROM conttlog WITH (ROWLOCK)
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp
Regards
Mike
"sql" wrote:

Uri,
One comment on this; I would personally be wary of doing that many
checkpoints during the process, as I believe that it would bring overall
system performance down quite a bit due to the constant disk activity. Is
there a reason you'd recommend doing a checkpoint on every iteration?
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--
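If per-batch checkpoints do prove too heavy, one possible compromise is to checkpoint only every Nth batch. A sketch along the lines of Uri's example (MyTable, col, and @cutoff are the same illustrative names; the batch size and the every-10-batches interval are arbitrary choices):

```sql
DECLARE @i int, @cutoff datetime
SET @i = 0
SET @cutoff = '20040131' -- hypothetical purge cutoff
SET ROWCOUNT 1000
WHILE 1 = 1
BEGIN
    DELETE FROM MyTable WHERE col <= @cutoff
    IF @@ROWCOUNT = 0 BREAK
    SET @i = @i + 1
    IF @i % 10 = 0 CHECKPOINT -- flush dirty pages only every 10th batch
END
SET ROWCOUNT 0
```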
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:OvmficbGFHA.3608@.TK2MSFTNGP14.phx.gbl...
Thank you all for your answers. I will set the ROWCOUNT at the beginning of
the script to either 1000 or 10000. Do you think it is worth creating a
clustered index on SYNCSTAMP and LOGSTAMP to improve the performance of this
script and then dropping it? If so, how long do you think it will take to
create such an index? Both fields are varchar(7).
Thanks.
"sql" <donotspam@.nospaml.com> wrote in message
news:uUV2eXbGFHA.3284@.TK2MSFTNGP10.phx.gbl...
> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory on
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million
> rows out of total 6 million.
> When I try to delete records using following script, it is very slow. The
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog
> table when this query was executing although I am using ROWLOCK table
> hint.
> Is there any other way to fix the speed and concurrency issues with
> this script? I know I can't use a loop to delete 5.5 million rows because
> it will probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
>|||Hi
Creating a clustered index will cause the whole table to be rewritten.
This will take longer to do than running the delete itself.
Regards
Mike
"sql" wrote:

Hi
My Sfr 0.02
It is in an implicit transaction, so until the whole batch completes,
nothing is committed.
Regards
Mike
"Adam Machanic" wrote:

sql wrote:
If you do this, understand that SQL Server will have to rebuild all
non-clustered indexes. So do it off-hours, if at all. I would recommend
you not mess with the indexes, since you are working on live data, and
just use the small batch size.
Is there any way you can do all this testing on a dev/test server?
David Gugick
Imceda Software
www.imceda.com

What I've done in the past is stuff the records I wish to retain into a new
table and do a rename. The table will need to be offline during this
operation; simply run a test to see how long it takes to populate the new
table with the half million rows you are keeping. The rename takes less
than a second. If you try this technique, I'd recommend doing a CREATE
TABLE instead of a SELECT INTO so you can double-check the referential
integrity, default values, indexes, and all else.
Dan
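Dan's copy-and-rename approach might look roughly like this (a sketch: conttlog_new and conttlog_old are hypothetical names, the column list is illustrative and would need to match the real conttlog definition, and @syncstamp is assumed to be computed as in the original script; note the keep-predicate is the negation of the delete-predicate):

```sql
-- Build the replacement table explicitly (not SELECT INTO), so
-- defaults, constraints, and indexes can be declared up front
CREATE TABLE conttlog_new (
    syncstamp varchar(7) NOT NULL,
    logstamp  varchar(7) NOT NULL
    -- ...remaining columns copied from the real conttlog definition
)

-- Copy only the ~0.5 million rows being kept
-- (NOT (syncstamp < @s AND logstamp < @s)  ==  syncstamp >= @s OR logstamp >= @s)
INSERT INTO conttlog_new (syncstamp, logstamp)
SELECT syncstamp, logstamp
FROM conttlog
WHERE syncstamp >= @syncstamp
   OR logstamp >= @syncstamp

-- Swap the tables; each rename takes under a second
EXEC sp_rename 'conttlog', 'conttlog_old'
EXEC sp_rename 'conttlog_new', 'conttlog'
```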
"sql" wrote:

> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory o
n
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million ro
ws
> out of total 6 million.
> When I try to delete records using following script, it is very slow. Th
e
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog tabl
e
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with th
is
> script? I know I can't use a loop to delete 5.5 million rows because it wi
ll
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
>
>