
Friday, March 23, 2012

Problem backing up a database

Hi, I have a database question. Let me describe my situation first:

I have two different hard disk drives; call them HDD1 and HDD2. I want to format HDD1, but all my database tables are stored on it. Is there any way for me to back up my database tables to HDD2 and restore them to HDD1 later?

Thanks to you all.

|||

You can either do a backup and restore, or detach the database files, move them, and then re-attach them. There's an option when restoring backups to move the data files. Detaching the database basically closes the files and removes them from the SQL Server; re-attaching them brings them back to 'life'. Either would work. Check BOL.

|||

I already used the wizard in SQL Server to back up my file; the backup file ends with a .bak extension, right? But the problem is I don't know how to restore it. I have tried using the "Restore" function in SQL Server, but it does not work.

|||

The .bak file is the correct file. Dig into Books Online about restoring from a file. It's pretty hard to diagnose via email, but unless your SQL installation is somehow corrupted, it should be able to restore a backup file.
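For reference, a minimal T-SQL sketch of both approaches. The database name MyDB, the logical file names, and the drive paths are hypothetical placeholders, not details from the original post:

-- Approach 1: back up to the second drive, restore later.
BACKUP DATABASE MyDB
TO DISK = 'E:\Backups\MyDB.bak' -- E: stands in for HDD2
WITH INIT;

-- After reformatting HDD1, restore; MOVE relocates the files if needed.
RESTORE DATABASE MyDB
FROM DISK = 'E:\Backups\MyDB.bak'
WITH MOVE 'MyDB' TO 'C:\Data\MyDB.mdf',         -- logical data file name
     MOVE 'MyDB_log' TO 'C:\Data\MyDB_log.ldf', -- logical log file name
     REPLACE;

-- Approach 2: detach, copy the .mdf/.ldf files to HDD2, re-attach later.
EXEC sp_detach_db 'MyDB';
-- (copy the files back to HDD1 after the format, then:)
EXEC sp_attach_db 'MyDB', 'C:\Data\MyDB.mdf', 'C:\Data\MyDB_log.ldf';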

Friday, March 9, 2012

Problem deleting a large number of records

Hi all,
I have a table with 6 million rows that takes up about 2 GB of space on disk, so we have decided to clean it up. We want to delete all records whose syncstamp and logstamp field values are less than the value corresponding to '20040131'. This will probably delete 5.5 million of the 6 million rows.
When I try to delete records using the following script, it is very slow. The script did not finish executing in three hours, so we had to cancel it. Also, users were not able to use the conttlog table while this query was executing, even though I am using the ROWLOCK table hint.
Is there any other way to fix the speed and concurrency issues with this script? I know I can't use a loop to delete 5.5 million rows because it will probably take days to execute.
Thanks in advance.
-- ***********************************************
-- Variable declaration
-- ***********************************************
DECLARE @Date datetime,
        @syncstamp varchar(7)
-- ***********************************************
-- Assign variable values
-- ***********************************************
SET @Date = '20040131' -- yyyymmdd -> purge logs up to this date
-- ***********************************************
-- Delete conttlog records
-- ***********************************************
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
DELETE
FROM conttlog WITH (ROWLOCK)
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp

|||

sql,
Do you have a primary key on the table?
I'd try to divide the 'big' transaction/deletion into smaller ones.
See this example:
SET ROWCOUNT 1000 -- set the batch size
WHILE 1 = 1
BEGIN
    DELETE FROM MyTable WHERE datecol <= @cutoff -- delete one batch of old rows
    IF @@ROWCOUNT = 0
    BEGIN
        BREAK
    END
    ELSE
    BEGIN
        CHECKPOINT
    END
END
SET ROWCOUNT 0
"sql" <donotspam@.nospaml.com> wrote in message
news:uUV2eXbGFHA.3284@.TK2MSFTNGP10.phx.gbl...
> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory
on
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million
rows
> out of total 6 million.
> When I try to delete records using following script, it is very slow.
The
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog
table
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with
this
> script? I know I can't use a loop to delete 5.5 million rows because it
will
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
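As an aside, on SQL Server 2005 and later the same batching pattern is usually written with DELETE TOP instead of SET ROWCOUNT. A minimal sketch; MyTable, datecol, and the cutoff value are hypothetical placeholders:

DECLARE @cutoff datetime
SET @cutoff = '20040131'
WHILE 1 = 1
BEGIN
    -- Delete at most 1000 qualifying rows per pass.
    DELETE TOP (1000) FROM MyTable
    WHERE datecol <= @cutoff
    -- Stop once a pass finds nothing left to delete.
    IF @@ROWCOUNT = 0 BREAK
END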
|||

I would recommend doing this in smaller batches -- maybe 10,000 rows at once:
-- ***********************************************
-- Variable declaration
-- ***********************************************
DECLARE @Date datetime,
        @syncstamp varchar(7)
-- ***********************************************
-- Assign variable values
-- ***********************************************
SET @Date = '20040131' -- yyyymmdd -> purge logs up to this date
-- ***********************************************
-- Delete conttlog records
-- ***********************************************
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
SET ROWCOUNT 10000 -- cap each DELETE at 10,000 rows
DELETE -- first batch primes @@ROWCOUNT for the loop below
FROM conttlog
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp
WHILE @@ROWCOUNT > 0 -- keep going until a pass deletes nothing
BEGIN
    DELETE
    FROM conttlog
    WHERE syncstamp < @syncstamp
    AND logstamp < @syncstamp
END
SET ROWCOUNT 0 -- restore the default
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--
"sql" <donotspam@.nospaml.com> wrote in message
news:uUV2eXbGFHA.3284@.TK2MSFTNGP10.phx.gbl...
> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory
on
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million
rows
> out of total 6 million.
> When I try to delete records using following script, it is very slow.
The
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog
table
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with
this
> script? I know I can't use a loop to delete 5.5 million rows because it
will
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
|||

Hi
Do it in smaller batches. This will help with performance.
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
SET ROWCOUNT 10000 -- cap each DELETE at 10,000 rows
DELETE
FROM conttlog WITH (ROWLOCK)
WHERE syncstamp < @syncstamp
AND logstamp < @syncstamp
-- repeat the DELETE until @@ROWCOUNT = 0, as in the loops above
SET ROWCOUNT 0
Regards
Mike
"sql" wrote:

> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory o
n
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million ro
ws
> out of total 6 million.
> When I try to delete records using following script, it is very slow. Th
e
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog tabl
e
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with th
is
> script? I know I can't use a loop to delete 5.5 million rows because it wi
ll
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
>
|||

Uri,
One comment on this: I would personally be wary of doing that many
checkpoints during the process, as I believe it would bring overall
system performance down quite a bit due to the constant disk activity. Is
there a reason you'd recommend doing a checkpoint on every iteration?
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:OvmficbGFHA.3608@.TK2MSFTNGP14.phx.gbl...
> sql
> Do you have Primary key on the table?
> I'd try to divide the 'big' transaction/deletion into small ones
> See this example
> SET ROWCOUNT 1000 --Set the value
> WHILE 1 = 1
> BEGIN
> UPDATE MyTable WHERE col <= datetimecolumn
> IF @.@.ROWCOUNT = 0
> BEGIN
> BREAK
> END
> ELSE
> BEGIN
> CHECKPOINT
> END
> END
> SET ROWCOUNT 0
|||

Thank you all for your answers. I will set ROWCOUNT at the beginning of the script to either 1000 or 10000. Do you think it is worth creating a clustered index on SYNCSTAMP and LOGSTAMP to improve the performance of this script and then dropping it? If so, how long do you think it would take to create such an index? Both fields are varchar(7).
Thanks.
"sql" <donotspam@.nospaml.com> wrote in message
news:uUV2eXbGFHA.3284@.TK2MSFTNGP10.phx.gbl...
> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory on
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million
> rows out of total 6 million.
> When I try to delete records using following script, it is very slow. The
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog
> table when this query was executing although I am using ROWLOCK table
> hint.
> Is there any other way to fix the speed and concurrency issues with
> this script? I know I can't use a loop to delete 5.5 million rows because
> it will probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
|||

Hi
Creating a clustered index will cause the whole table to be rewritten,
which will take longer than running the delete itself.
Regards
Mike
"sql" wrote:

> <snip>
|||

Hi
My Sfr 0.02
It is in an implicit transaction, so until the whole batch completes,
nothing is committed.
Regards
Mike
"Adam Machanic" wrote:

> Uri,
> One comment on this; I would personally be wary of doing that many
> checkpoints during the process, as I believe that it would bring overall
> system performance down quite a bit due to the constant disk activity. Is
> there a reason you'd recommend doing a checkpoint on every iteration?
>
> --
> Adam Machanic
> SQL Server MVP
> http://www.sqljunkies.com/weblog/amachanic
> --
>
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:OvmficbGFHA.3608@.TK2MSFTNGP14.phx.gbl...
>
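For what it's worth, a minimal sketch of the per-batch variant being discussed, assuming the SIMPLE recovery model: each batch commits in its own explicit transaction, so the CHECKPOINT lets committed log space be reused as the purge proceeds. The table, column, and function names come from the original script; the batch size is arbitrary:

DECLARE @Date datetime,
        @syncstamp varchar(7),
        @rc int
SET @Date = '20040131'
SET @syncstamp = dbo.WF_GetSyncStamp(@Date)
SET ROWCOUNT 1000 -- cap each batch at 1,000 rows
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION
    DELETE FROM conttlog
    WHERE syncstamp < @syncstamp
    AND logstamp < @syncstamp
    SET @rc = @@ROWCOUNT -- capture before another statement resets it
    COMMIT TRANSACTION
    IF @rc = 0 BREAK -- nothing left to purge
    CHECKPOINT -- under SIMPLE recovery, committed log space can now be reused
END
SET ROWCOUNT 0 -- restore the default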
|||

sql wrote:
> <snip>
If you do this, understand that SQL Server will have to rebuild all
non-clustered indexes, so do it off-hours, if at all. I would recommend
you not mess with the indexes, since you are working on live data, and
stick with the small batch size instead.
Is there any way you can do all this testing on a dev/test server?
David Gugick
Imceda Software
www.imceda.com

|||

What I've done in the past is stuff the records I wish to retain into a new
table and do a rename. The table will need to be offline during this
operation; simply run a test to see how long it takes to populate the new
table with the half-million rows you are keeping. The rename takes less than
a second. If you try this technique, I'd recommend doing a CREATE TABLE
instead of a SELECT INTO so you can double-check the referential integrity,
default values, indexes, and all else.
Dan
"sql" wrote:

> Hi all,
> I have a table with 6 million rows which takes up about 2GB of memory o
n
> hard disk. So we have decided to clean this table up. We have decided to
> delete all records that have syncstamp and logstamp field values less than
> the value correspoing '20040131'. This will probably delete 5.5 million ro
ws
> out of total 6 million.
> When I try to delete records using following script, it is very slow. Th
e
> script did not finish executing in three hours. So we had to cancel the
> execution of the script. Also the users were not able to use conttlog tabl
e
> when this query was executing although I am using ROWLOCK table hint.
> Is there any other way to fix the speed and concurrency issues with th
is
> script? I know I can't use a loop to delete 5.5 million rows because it wi
ll
> probably take days to execute it.
> Thanks in advance.
> -- ****************************************
*******
> -- Variable declaration
> -- ****************************************
*******
> DECLARE @.Date datetime,
> @.syncstamp varchar(7)
> -- ****************************************
*******
> -- Assign variable values
> -- ****************************************
*******
> SET @.Date = '20040131' -- yyyymmdd -> purge logs upto this date
> -- ****************************************
*******
> -- Delete conttlog records
> -- ****************************************
*******
> SET @.syncstamp = dbo.WF_GetSyncStamp(@.Date)
> DELETE
> FROM conttlog with(rowlock)
> WHERE syncstamp < @.syncstamp
> AND logstamp < @.syncstamp
>
>
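A minimal sketch of that approach, not Dan's exact script: conttlog, the stamp columns, and dbo.WF_GetSyncStamp come from the thread, while the column list and the _new/_old names are hypothetical:

DECLARE @syncstamp varchar(7)
SET @syncstamp = dbo.WF_GetSyncStamp('20040131')

-- Build the replacement table explicitly so keys, defaults, and indexes
-- can be declared up front (column details here are invented).
CREATE TABLE dbo.conttlog_new (
    syncstamp varchar(7) NOT NULL,
    logstamp varchar(7) NOT NULL
    -- ... remaining columns, constraints, and defaults ...
)

-- Copy only the roughly half-million rows being kept, i.e. everything
-- that does NOT match the purge predicate.
INSERT INTO dbo.conttlog_new (syncstamp, logstamp)
SELECT syncstamp, logstamp
FROM dbo.conttlog
WHERE syncstamp >= @syncstamp
   OR logstamp >= @syncstamp

-- Swap the tables; each rename takes well under a second.
EXEC sp_rename 'dbo.conttlog', 'conttlog_old'
EXEC sp_rename 'dbo.conttlog_new', 'conttlog'

-- Drop the old table once everything checks out:
-- DROP TABLE dbo.conttlog_old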