I have a large number of rows (~200 million) in SQL Server that I need to process in C#. After reading them, I have to group the entries by column values in memory, in a way that cannot be done within SQL Server. The problem is that reading through the whole data set at once gives me an out-of-memory exception, and reading it in a single pass takes too long to execute.
So I want to break my query into smaller batches. One method is to make an independent SELECT of the keys and then fetch each batch with a WHERE ... IN clause. Another method that has been suggested to me is to use a SQL cursor. I want to choose one of these methods (or any other, if possible), especially with regard to the following points:
- How will the query plans perform on the server? Which approach will perform better?
- Can I safely parallelize the SQL cursor queries? If I parallelize the first approach (batches via WHERE ... IN), will I get a performance benefit?
- How many values can I specify in the IN clause? Is this limited only by the size of the query string?
Any other suggestions are also welcome.
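For reference, this is roughly what I mean by the first approach: a minimal sketch (the table `dbo.BigTable`, its columns, and the connection string are placeholders) that reads the key set once, then fetches and processes the rows in parameterized WHERE ... IN batches.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

class BatchedReader
{
    const int BatchSize = 1000; // stay well below practical IN-list limits

    static void Main()
    {
        var connStr = "Server=.;Database=MyDb;Integrated Security=true"; // placeholder
        var ids = new List<long>();

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Step 1: fetch only the keys (much cheaper than the full rows).
            using (var cmd = new SqlCommand("SELECT Id FROM dbo.BigTable", conn))
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    ids.Add(reader.GetInt64(0));

            // Step 2: fetch the full rows batch by batch.
            for (int i = 0; i < ids.Count; i += BatchSize)
            {
                var batch = ids.Skip(i).Take(BatchSize).ToList();

                // Build parameter placeholders (@p0, @p1, ...) for the IN list.
                var names = batch.Select((_, j) => "@p" + j).ToArray();
                var sql = "SELECT Id, GroupKey, Value FROM dbo.BigTable WHERE Id IN ("
                          + string.Join(",", names) + ")";

                using (var cmd = new SqlCommand(sql, conn))
                {
                    for (int j = 0; j < batch.Count; j++)
                        cmd.Parameters.AddWithValue(names[j], batch[j]);

                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            ProcessRow(reader.GetInt64(0), reader.GetString(1),
                                       reader.GetDouble(2));
                }
            }
        }
    }

    static void ProcessRow(long id, string groupKey, double value)
    {
        // Grouping logic that cannot be expressed in T-SQL goes here.
    }
}
```

Note that with ~200 million rows even the key list is large, so keyset paging (`WHERE Id > @last ORDER BY Id`, keeping only the last key seen) might be another option worth considering.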
EDIT 1: I have been given various workaround solutions, but I would still like to know the answers to my original questions (out of curiosity).
If you have to do the grouping logic in code, you could try writing it as a managed (SQL CLR) stored procedure hosted in SQL Server, which could then be used in the grouping query.
Check this out. It would allow you to do the grouping on the server before returning the dataset to your client.
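To illustrate, here is a minimal sketch of the shape a SQL CLR procedure takes (the table and column names are placeholders; deployment via CREATE ASSEMBLY / CREATE PROCEDURE ... EXTERNAL NAME is not shown):

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class GroupingProcs
{
    [SqlProcedure]
    public static void GroupRows()
    {
        // "context connection=true" runs inside the hosting SQL Server session,
        // so no data crosses the network until you send results back.
        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "SELECT GroupKey, Value FROM dbo.BigTable", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Custom grouping logic in C#, close to the data.
                    // Stream result rows back with SqlContext.Pipe.Send(...).
                }
            }
        }
    }
}
```

The benefit is that the custom C# grouping runs next to the data, and only the grouped result set is shipped to the client.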
[edit - using dictionaries, in response to your comments]
You could have a look at my project, which provides a disk-backed Dictionary&lt;TKey, TValue&gt;. That would avoid the out-of-memory exception. It would be interesting to see how it performs in your scenario. (If you are on a 32-bit system, read the note on the introduction page.)
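To show where such a dictionary fits: the usual pattern is to stream rows through a SqlDataReader and accumulate one aggregate per group key, so memory grows with the number of groups rather than the number of rows. A sketch under assumed names (`dbo.BigTable`, its columns, the connection string); a disk-backed dictionary would be a drop-in replacement for the in-memory one when even the aggregates outgrow RAM:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class StreamingGrouper
{
    static void Main()
    {
        // One aggregate per group key instead of all 200M rows in memory.
        var totals = new Dictionary<string, double>();

        using (var conn = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=true")) // placeholder
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "SELECT GroupKey, Value FROM dbo.BigTable", conn))
            using (var reader = cmd.ExecuteReader()) // streams; does not buffer the table
            {
                while (reader.Read())
                {
                    var key = reader.GetString(0);
                    var value = reader.GetDouble(1);
                    totals.TryGetValue(key, out var sum);
                    totals[key] = sum + value;
                }
            }
        }

        Console.WriteLine("{0} groups", totals.Count);
    }
}
```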